What does the computer interface of the future look like? One bet from Google is that it will involve invisible interfaces you can tweak and twiddle in mid-air. This is what the company is exploring via Project Soli, an experimental hardware program which uses miniature radar to detect movement, and which recently won approval from the FCC for further study.
Imagining exactly how this tech will be put to use is tricky, but a group of researchers from the University of St Andrews in Scotland is exploring its potential. In a paper published last month, they show how Project Soli hardware can be used for a range of precise sensing tasks. These include counting the number of playing cards in a deck, measuring compass orientation, and...
In an era where everything is controlled by touchscreens and oblique voice commands, there’s something incredibly satisfying about a gadget with simple, tactile controls. That’s probably why designer Chris Patty’s homemade jukebox looks so charming: it’s controlled by physical cards, each printed with an artist and album art on the front, that you swipe to play a song.
Patty created the jukebox as a Christmas gift for his father, after his family decided to only swap handmade presents this year. He later posted a short video of the creation to Twitter, where he’s received enough positive responses that he’s working on an open source version of the software and instructions so that fans can make their own.
Steppingstone VR thinks its new approach to VR locomotion might be the one to solve simulation sickness.
The company is working on a motion platform that uses electromagnetic propulsion to physically move players around as they stand/sit on a platform. You can see it in the early prototype video below; the platform gets its power supply from a specialized floor, a little like bumper cars, allowing it to quickly adapt and move in response to the player’s input in VR. The sensations of physically moving that the player feels should help to combat sickness in games with smooth locomotion such as Skyrim VR.
But this is just the first step (sorry) for Steppingstone VR. Over emails, CEO Samy Bensmida tells me that the consumer version of its product aims to include multiple moving platforms that users will be able to step onto. Tiles will move backward as you step onto them, in theory allowing you to physically walk around a massive game world without ever leaving the center of a space. You can see a similar concept in the video below, though Bensmida explains that this system uses wheels, whereas Steppingstone’s electromagnetic propulsion will allow it a greater degree of autonomy.
“You will walk all day long in Skyrim with your legs, no harness, and get all the congruent inertial cues,” Bensmida said.
And, yes, as expensive as it looks, Bensmida says the product is “100% consumer” with the aim of streamlining it to be viable for homes. Based on the prototype, there’s a lot of work to be done before Steppingstone is anywhere near something we’d consider making space for, and we’d still be concerned about the safety of navigating multiple moving platforms while essentially blindfolded in VR.
Still, Bensmida seems confident the team will pull it off, and is preparing a Kickstarter crowdfunding campaign to help it get there. The device is currently estimated to run on a “consumer safe” 12V supply, and the campaign will likely seek around $150,000.
Would you put down electromagnetic flooring in your house if it meant complete and utter VR immersion?
Ahead of CES, Samsung is announcing upcoming refreshes of its two most stylish 4K TVs, The Frame and Serif. These are lifestyle pieces that aim to make people rethink what a TV can and should look like. They don’t offer Samsung’s best picture performance — that’s still reserved for the proper QLED lineup — but they’re definitely good for attracting conversation in the home.
The Frame is being upgraded with an improved picture over its previous two iterations. The 2019 model will feature Samsung’s quantum dot display technology for a wider HDR color palette. Aside from offering a better picture, The Frame will also now come in a new 49-inch size. (Last year’s edition came in 43-, 55-, and 65-inch sizes.) Samsung markets The Frame to...
Machine learning models are getting quite good at generating realistic human faces — so good that I may never trust a machine, or human, to be real ever again. The new approach, from researchers at Nvidia, leapfrogs others by separating levels of detail in the faces and allowing them to be tweaked separately. The results are eerily realistic.
The paper, published on the preprint repository arXiv (PDF), describes a new architecture for generating and blending images, particularly human faces, that “leads to better interpolation properties, and also better disentangles the latent factors of variation.”
What that means, basically, is that the system is more aware of meaningful variation between images, and at a variety of scales to boot. The researchers’ older system might, for example, produce two “distinct” faces that were mostly the same except the ears of one are erased and the shirt is a different color. That’s not really distinctiveness — but the system doesn’t know that those are not important pieces of the image to focus on.
It’s inspired by what’s called style transfer, in which the important stylistic aspects of, say, a painting, are extracted and applied to the creation of another image, which (if all goes well) ends up having a similar look. In this case, the “style” isn’t so much the brush strokes or color space, but the composition of the image (centered, looking left or right, etc.) and the physical characteristics of the face (skin tone, freckles, hair).
These features can have different scales, as well — at the fine side, it’s things like individual facial features; in the middle, it’s the general composition of the shot; at the largest scale, it’s things like overall coloration. Allowing the system to adjust all of them changes the whole image, while only adjusting a few might just change the color of someone’s hair, or just the presence of freckles or facial hair.
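To make the scale-separated mixing concrete, here is a toy sketch. This is not the paper’s actual architecture: the level names, layer sizes and random weights below are invented, and NumPy stands in for a real generator. It only illustrates the mechanism of swapping per-level style vectors between two sources.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a style-based generator: each "resolution level"
# is modulated by its own style vector, so styles can be swapped per level.
# The dimensions and weights here are made up for illustration.
LEVELS = ["coarse", "middle", "fine"]
weights = {lvl: rng.normal(size=(8, 8)) for lvl in LEVELS}

def generate(styles):
    """Run the toy generator, modulating each level with its style vector."""
    x = np.ones(8)
    for lvl in LEVELS:
        x = np.tanh(weights[lvl] @ (x * styles[lvl]))
    return x

style_a = {lvl: rng.normal(size=8) for lvl in LEVELS}
style_b = {lvl: rng.normal(size=8) for lvl in LEVELS}

# "Style mixing": coarse and middle structure from A, fine detail from B.
mixed = dict(style_a, fine=style_b["fine"])

print(generate(style_a))
print(generate(mixed))  # differs from A only via the fine level
```

In the real model, each level corresponds to a band of generator resolutions, which is why the same kind of swap can change freckles or hair color without touching pose or composition.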
In the image at the top, notice how completely the faces change, yet clear markers of both the “source” and “style” images remain, for instance the blue shirts in the bottom row. In other cases things are made up out of whole cloth, like the kimono the kid in the very center seems to be wearing. Where’d that come from? Note that all this is totally variable: not just A + B = C, but with all aspects of A and B present or absent depending on how the settings are tweaked.
None of these are real people. But I wouldn’t look twice at most of these images if they were someone’s profile picture or the like. It’s kind of scary to think that we now have basically a face generator that can spit out perfectly normal looking humans all day long. Here are a few dozen:
It’s not perfect, but it works. And not just for people. Cars, cats, landscapes — all this stuff more or less fits the same paradigm of small, medium and large features that can be isolated and reproduced individually. An infinite cat generator sounds like a lot more fun to me, personally.
The researchers have also published a new data set of face data: 70,000 images of faces collected (with permission) from Flickr, aligned and cropped. They used Mechanical Turk to weed out statues, paintings and other outliers. Given that the standard data set used by these types of projects is mostly red carpet photos of celebrities, this should provide a much more variable set of faces to work with. The data set will be available for others to download here soon.
Trello, the organizational tool owned by Atlassian, announced an acquisition of its very own this morning when it bought Butler for an undisclosed amount.
What Butler brings to Trello is the power of automation, stringing together a bunch of commands to make something complex happen automatically. As Trello’s Michael Pryor pointed out in a blog post announcing the acquisition, we are used to tools like IFTTT, Zapier and Apple Shortcuts, and this will bring a similar type of functionality directly into Trello.
“Over the years, teams have discovered that by automating processes on Trello boards with the Butler Power-Up, they could spend more time on important tasks and be more productive. Butler helps teams codify business rules and processes, taking something that might take ten steps to accomplish and automating it into one click,” Pryor wrote.
This means that Trello can be more than a static organizational tool. Instead, it can move into the realm of light-weight business process automation. For example, this could allow you to move an item from your To Do board to your Doing board automatically based on dates, or to share tasks with appropriate teams as a project moves through its life cycle, saving a bunch of manual steps that tend to add up.
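As a rough sketch of that date-based rule, here is what it might look like scripted directly against Trello’s public REST API rather than through Butler itself. The list IDs and the key/token values below are placeholders you would supply; this is an illustration, not Butler’s implementation.

```python
import datetime as dt
import json
from urllib import parse, request

API = "https://api.trello.com/1"
AUTH = {"key": "YOUR_API_KEY", "token": "YOUR_TOKEN"}  # placeholder credentials

def card_is_due(card, today):
    """True if the card carries a due date on or before `today`."""
    if not card.get("due"):
        return False
    due = dt.datetime.fromisoformat(card["due"].replace("Z", "+00:00"))
    return due.date() <= today

def move_due_cards(todo_list_id, doing_list_id, today=None):
    """Move every due card from the To Do list to the Doing list."""
    today = today or dt.date.today()
    url = f"{API}/lists/{todo_list_id}/cards?{parse.urlencode(AUTH)}"
    with request.urlopen(url) as resp:
        cards = json.load(resp)
    for card in cards:
        if card_is_due(card, today):
            move = f"{API}/cards/{card['id']}?" + parse.urlencode(
                {**AUTH, "idList": doing_list_id})
            request.urlopen(request.Request(move, method="PUT"))
```

Butler expresses the same rule declaratively (“when a card is due, move it to Doing”) and runs it inside Trello, which is the appeal of the acquisition: no external script or cron job required.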
The company indicated that it will incorporate Butler’s capabilities directly into Trello in the coming months. They will be available to all levels of users, including the free tier, though more advanced functionality is promised for Business and Enterprise customers once the integration is complete. Pryor also suggested that more automation could be coming to Trello. “Butler is Trello’s first step down this road, enabling every user to automate pieces of their Trello workflow to save time, stay organized and get more done.”
JD and Intel said today that they will set up a “lab” focused on bringing internet-of-things technology into the retail process. That could include new-generation vending machines, advertising experiences, and more.
That future is mostly offline — or, in China tech speak, ‘online-to-offline’ retail — combining the benefits of e-commerce with brick-and-mortar shopping. Already, for example, customers can order ahead of time and come in store for collection, buy items without a checkout, take advantage of ‘smart shelves’ or simply try products in person before they buy them.
JD is backed by Chinese internet giant Tencent and valued at nearly $30 billion. The company already works with Intel on personalized shopping experiences, but this new lab is focused on taking things further with new projects and working to “facilitate their introduction to global markets.”
“The Digitized Retail Joint Lab will develop next-generation vending machines, media and advertising solutions, and technologies to be used in the stores of the future, based on Intel architecture,” the companies said in a joint announcement.
JD currently operates three 7Fresh stores in China but it is aiming to expand that network to 30. It has also forayed overseas, stepping into Southeast Asia with the launch of cashier-less stores in Indonesia this year.
Jibo was one of the best funded and most publicized social robots around. Despite that, it appears the company and robot are no more. An investment management firm in New York purchased the assets on June 20, 2018. According to The Robot Report:
“Social robot maker Jibo has sold its IP assets. According to a former Jibo executive with direct knowledge of the situation, New York-based investment management firm SQN Venture Partners is the new owner.”
Jibo Layoffs in June Preceded Asset Sell-off
Around that same time in June, BostInno reported that Jibo’s California office was marked as permanently closed and layoffs at the Boston headquarters were “significant.” Co-founded in 2012 by MIT’s Cynthia Breazeal, the company raised $61 million from more than a dozen investors, including blue-chip VC firm Charles River Ventures. Jibo was valued at $200 million and had 95 employees as late as November 2016, according to PitchBook.
The robot finally came to market in October 2017 but apparently did not establish enough momentum or favorable user reviews to overcome its $899 price point. It also apparently was delayed so much that Indiegogo required the company to refund over $3 million in pre-orders before the robot came to market. MediaPost’s Chuck Martin summed up the situation succinctly last week, saying:
“The closure comes as no surprise since the company laid off much of its workforce in June…In July, another social robot named Kuri was shut down. That device was from the Bosch Startup Platform and was an award winner launched at CES 2017. Social and home robots are getting better as robot makers continue their quest to create a robot that consumers not only want but also will pay for.”
It Couldn’t Do Much More Than a Headless Robot, i.e. A Smart Speaker
There were many aspects of Jibo’s personality that made it unique. Its physical world interaction was compelling, but design doesn’t always win. Sometimes great design can’t overcome a high price point, limited utility, and consumers not quite sure why they need the product. While Jibo was trying to find its way, the founder’s vision was being realized in a less sophisticated way by headless robots, also known as smart speakers. For $899, Jibo could tell you the news and weather, set a timer, and play music or tell you a joke all in response to natural language interaction. For less than $30 this week, Amazon Echo Dot and Google Home Mini can perform those tasks and much more.
Smart speakers have brought voice accessible media, information, entertainment, and other utilities into the home for a nominal cost and you don’t have to worry about the devices accidentally rolling down the stairs. In some ways, smart speakers are paving the way for social robots by training consumers in new behaviors that involve voice interaction with computers. However, these devices are also delivering the low hanging fruit of benefits that many social robots sought to provide. This means social robots have to deliver clearly differentiated and meaningfully beneficial value beyond what smart speaker-based voice assistants offer today. Those use cases surely exist and we have seen several interesting robot applications in business settings. The unanswered question is what benefits will spur consumers to want and pay for social robots.
The next few years will see voice automation take over many aspects of our lives. Although voice won’t change everything, it will be part of a movement that heralds a new way to think about our relationship with devices, screens, our data and interactions.
We will become more task-specific and less program-oriented. We will think less about items and more about the collective experience of the device ecosystem they are part of. We will enjoy the experiences they make possible, not the specifications they celebrate.
In the new world, I hope we relinquish our role as the slaves we are today and get back in control.
Voice won’t kill anything
The standard way that technology arrives is to augment more than replace. TV didn’t kill the radio. VHS and then streamed movies didn’t kill the cinema. The microwave didn’t destroy the cooker.
Voice more than anything else is a way for people to get outputs from and give inputs into machines; it is a type of user interface. With UI design we’ve had the era of punch cards in the 1940s, keyboards from the 1960s, the computer mouse from the 1970s and the touchscreen from the 2000s.
All four of these mechanisms are around today and, with the exception of the punch card, we freely move between input types based on context. Touchscreens are terrible in cars and on gym equipment, but they are great at making applications tactile. Computer mice are great for pointing and clicking. Each input does very different things brilliantly and badly, and we have learned the best use for each.
Voice will not kill brands, it won’t hurt keyboard sales or touchscreen devices — it will become an additional way to do stuff; it is incremental, not cannibalistic.
We need to design around it
Nobody wanted the computer mouse before it was invented. In fact, many were perplexed by it because it made no sense in the previous era, where we used command lines, not visual icons, to navigate. When I worked with Nokia on touchscreens before the iPhone, the user experience sucked because the operating system wasn’t designed for touch. 3D Touch still remains pathetic because few software designers got excited by it and built for it.
What is exciting about voice is not using ways to add voice interaction to current systems, but considering new applications/interactions/use cases we’ve never seen.
At the moment, the burden is on us to fit around the limitations of voice, rather than have voice work around our needs.
A great new facade
Have you ever noticed that most companies’ desktop websites are their worst digital interface? Their mobile site is likely better, and the mobile app will be best. Most airline, hotel or bank apps don’t offer pared-down experiences (as was once the case), but their very fastest, slickest experience with the greatest functionality. What tends to happen is that new things get new capex, the best people and the most ability to bring change.
However, most digital interfaces are still designed around the silos, workflows and structures of the company that made them. Banks may offer eight different ways to send money to someone or something based around their departments; hotel chains may ask you to navigate by their brand of hotel, not by location.
The reality is that people are task-oriented, not process-oriented. They want an outcome and don’t care how. Do I give a crap if it’s Amazon Grocery or Amazon Fresh or Amazon Marketplace? Not one bit. Voice allows companies to build a new interface on top of the legacy crap they’ve inherited. I get to “send money to Jane today,” not press 10 buttons around their org chart.
It requires rethinking
The first time I showed my parents a mouse and told them to double-click, I thought they were having a fit. The cursor would move in jerks and often get lost. The same dismay and disdain I once had for them, I now feel every time I try to use voice. I have to reprogram my brain to think about information in a new way and to reconsider how my brain works. While this will happen, it will take time.
What gets interesting is what happens to the 8-year-olds who grow up thinking of voice first, what happens when developing nations embrace tablets with voice not desktop PCs to educate. When people grow up with something, their native understanding of what it means and what it makes possible changes. It’s going to be fascinating to see what becomes of this canvas.
Voice as a connective layer
We keep being dumb and thinking of voice as the way to interact with “a” machine, not as a glue between all machines. Voice is an inherently crap way to get outputs; if a picture paints a thousand words, how long will it take to buy a T-shirt? The real value of voice is as a user interface across all devices. Advertising in magazines should offer voice commands to find out more. You should be able to yell at the Netflix carousel, or at TV ads to add products to your shopping list. Voice won’t be how we “do” entire things; it will be how we trigger or finish things.
We’ve only ever assumed we talk to devices first. Do I really want to remember the command for turning on lights in the home and utter six words to make it happen? Do I want to always be asking? Assuming devices are selective about when they speak first, it’s fun to see what happens when voice is proactive. Imagine the possibilities:
“Welcome home, would you like me to select evening lighting?”
“You’re running late for a meeting, should I order an Uber to take you there?”
“Your normal Citi Bike station has no bikes right now.”
“While it looks sunny now, it’s going to rain later.”
While many think we don’t want to share personal information, there are ample signs that if we get something in return, we trust the company and there is transparency, it’s OK. Voice will not develop alone; it will progress alongside Google suggesting email replies, Amazon suggesting things to buy and Siri contextually suggesting apps to use. We will slowly become used to the idea of outsourcing our thinking and decisions, somewhat, to machines.
We’ve already outsourced a lot: we can’t remember phone numbers, addresses or birthdays, and we even rely on images to jog our recollection of experiences, so it’s natural we’ll outsource some decisions.
The medium-term future in my eyes is one where we allow more data to be used to automate the mundane. Many think voice means asking Alexa to order Duracell batteries, but it’s more likely to mean never thinking about batteries, laundry detergent or other low-consideration items again; the subscriptions simply replenish themselves.
There is an expression that a computer should never ask a question for which it can reasonably deduce the answer itself. When a technology is really here we don’t see, notice or think about it. The next few years will see voice automation take over many more aspects of our lives. The future of voice may be some long sentences and some smart commands, but mostly perhaps it’s simply grunts of yes.
Walking the thin line between naughty and nice is easier than ever when you show up dressed in this naughty/nice list sweater. The eye-catching design lets you flip-flop between the naughty and nice list depending on your current mood.
I drove 125 miles to K1 Speed in the Los Angeles area coasting at 70 miles per hour most of the way. Now I’m looking at one of K1’s karts on a real-world race track. The seat is low to the ground and I sit down, stretching out my legs on either side of the vehicle and wondering if my traditional driving experience will translate.
The kart features a temporary rigging to attach a computer and Oculus Rift VR headset. The speed of the kart is remotely adjustable by the system Master of Shapes is demonstrating. As part of this rigging, lights effectively broadcast the kart’s position to cameras overhead spanning the length of the winding track. There’s even a button on the wheel that could deliver one of the world’s first mixed reality versions of something like Mario Kart.
Sure, it is amazing to wear a VR headset so you can sit in Mushroom Kingdom while seated on a real-world motion platform. But that’s a different caliber of experience from the one I’m testing, which will move my body through the real world in an accurate feedback loop with the way I push the pedals and turn the wheel. It is similar to the “mixed reality” experience we saw in the Oculus Arena at the most recent Oculus Connect VR developer’s conference, which incorporated real-world mapping. Except this time I’ll be moving through real space in a vehicle under my control.
Which brings me back to that button on the wheel — the one that “could deliver one of the world’s first mixed reality versions of something like Mario Kart.” Representatives from Master of Shapes told me not to push the button. They were explicit about it before I got in the kart. The button was intended entirely for development purposes at the moment I sat down.
One day there could be races here at K1 where a kid too young to drive a kart on their own could grab a gamepad and log into the same race as their elder sibling out on the actual “speedway.” One day that button on the wheel could launch a virtual weapon to slow down another player’s kart.
I press down on the pedal and…
Not long after the video above ends there’s a hard left turn and, in my growing confidence blindfolded to the real world, I move my hands into a new position. I should remind you again they told me not to push the button. In fact, they even warned me what would happen if I did. The virtual world would rotate 90 degrees off the physical barriers of the real world.
“Oh ok,” I thought at the time. “That’s bad. Don’t touch the button. Now let me drive the thing.”
So I’m hurtling around that corner and suddenly the world snaps into a new position. In front of my eyes now, directly ahead, is the railing of the virtual track. I panic and can’t remember which foot to use to brake the kart.
Instead, I brace and hope for the best.
I seem to be fine for a few seconds and then BAM!
I took the Rift off and laughed. They told me how to put the kart in reverse and we wheeled it back to the starting line for a reset. The second time, I went slow for the first lap and then really pressed the pedal down for the second one. It all worked fine for a few laps as I came back to where I started in the real world.
At the 2018 IDTechEx Show! in Santa Clara, AsReader, Inc. showcased a variety of hardware consisting of RFID reader/writers, 1D and 2D barcode scanners and an all-new medical-grade battery/wireless charging sled with case. From a pocket-sized AsReader barcode scanner to the 10m/32ft long-distance gun-type RFID reader and barcode scanner, AsReader hardware is compatible with most iOS devices, including: iPhone 8 Plus/7 Plus/6s Plus/6 Plus, iPhone 8/7/6s/6, iPhone SE/5s/5, iPod touch 6th/5th generation and iPad mini 3/2/1. AsReader’s handheld sleds are available with a white or black case for tracking logistics, healthcare patients and medications, retail inventory cycle counts and markdowns, and event management. The standard barcode scanner, UHF RFID reader/writer and HF/NFC reader all come with a royalty-free SDK with APIs for connecting to other software. AsReader also takes orders for Android users with a small minimum order quantity (MOQ).
Google wants to help you take a closer look at the art world.
The company’s Arts & Culture app has long been one of its cooler niche apps, and one I often feel guilty about overlooking every time I rediscover it. Today, the company added another experience into the mix, focused on collecting the known works of Dutch master Johannes Vermeer and curating them in a single place.
The feature looks a lot like many of the company’s other deep dives, including listicles of factoids, interviews with experts and editorials. What makes this presentation unique is that the company actually constructed a miniature 3D art gallery that can utilize your phone’s AR functionality to plop into physical space in front of you.
With ARCore or ARKit, you can move through the “Pocket Gallery” and get close to the high-resolution captures of the paintings while also bringing up information about the works.
Having just tried it, I can say this is one of those things that honestly doesn’t make a ton of sense to do with phone AR. Having a fully rendered gallery pop up on your coffee table is an interesting gimmick, but Google probably could have ditched the AR for a fully rendered 3D environment that’s more of a traversable object, or just left the immersive views for VR and stuck with 2D exploration on your phone.
Nevertheless, it all makes for some interesting experimentation, and it’s just cool to see Google trying out new things with experiencing digital art in a more immersive way. Google’s Arts & Culture app is available on iOS and Android.
Andy Kangpan is an investor at Two Sigma Ventures, where he helps find, fund, and support early-stage technology companies.
Virtual reality is in a public relations slump. Two years ago the public’s expectations for virtual reality’s potential were at their peak. Many believed (and still continue to believe) that VR would transform the way we connect, interact and communicate in our personal and professional lives.
Google Trends highlighting search trends related to virtual reality over time; the “Note” refers to an improvement in Google’s data collection system that occurred in early 2016
It’s easy to understand why this excitement exists once you put on a head-mounted display. While there are still a limited number of compelling experiences, after you test some of the early successes in the field, it’s hard not to extrapolate beyond the current state of affairs to a magnificent future where the utility of virtual reality technology is pervasive.
However, many problems still exist. The all-in cost for state of the art headsets is still out of reach for the mass market. Most “high-quality” virtual reality experiences still require users to be tethered to their desktops. The setup experience for mass market users is lathered in friction. When it comes down to it, the holistic VR experience is a non-starter for most people. We are effectively in what Gartner refers to as the “trough of disillusionment.”
Gartner’s hype cycle for “Human-Machine Interface” in 2018 places many related VR related fields (e.g. mixed reality, AR, HMDs, etc.) in the “Trough of Disillusionment”
Yet, the virtual reality market has continued its slow march to mass adoption, and there are tangible indicators that suggest we could be nearing an inflection point.
A shift toward sustainable hardware growth
What you do and do not consider a virtual reality display can dramatically impact your view on the state of the VR hardware industry. Head-mounted displays (HMDs) can be categorized in three different ways:
Screenless viewers — affordable devices that turn smartphones into a VR experience (e.g. Google Cardboard, Samsung Gear VR, etc.)
Standalone HMDs — devices that are not connected to a computer and can independently run content (e.g. Oculus Go, Lenovo Mirage Solo, etc.)
Tethered HMDs — devices that are connected to a desktop computer in order to run content (e.g. HTC Vive, Oculus Rift, etc.)
2018 has seen disappointing progress in aggregate headset growth. The overall market is forecasted to ship 8.9 million headsets in 2018, up from aggregate shipments of approximately 8.3 million in 2017, according to IDC. On the surface, those numbers hardly describe a market at its inflection point.
However, most of the decline in growth rate can be attributed to two factors. First, screenless viewers have seen a significant decline in shipments as device manufacturers have stopped shipping them alongside smartphones. In the second quarter of 2018, 409,000 screenless viewers were shipped compared to approximately 1 million in the second quarter of 2017. Second, tethered VR headsets have also declined as manufacturers have slowed down the pricing discounts that acted as a steroid to sales growth in 2017.
Looking at the market for standalone HMDs, however, reveals a more promising figure. Standalone VR headsets grew 417 percent due to the global availability of the Oculus Go and Xiaomi Mi VR. Over time, these headsets are going to be the driver of the VR market as they offer significant advantages compared to tethered headsets.
The shift from tethered to standalone VR headsets is significant. It represents a paradigm shift within the immersive ecosystem, where developers have a truly mobile platform that is powerful enough to enable compelling user experiences.
IDC forecasts for AR/VR headset market share by form factor, 2018–2022
A premium market segment
There are a few names that come to mind when thinking about products that are available for purchase in the VR market: Samsung, Facebook (Oculus), HTC and PlayStation. A plethora of new products from these marquee names — and products from new companies entering the market — are opening the category for a new customer segment.
For the past few years, the market effectively had two segments. The first was a “mass market” segment with notorious devices such as the Google Cardboard and the Samsung Gear, which typically sold for less than $100 and offered severely constrained experiences to consumers. The second segment was a “pro market,” with a few notable devices, such as the HTC Vive, that required absurdly powerful computing rigs to operate, but offered consumers more compelling, immersive experiences.
It’s possible that this new emerging segment will dramatically open up the total addressable VR market. This “premium” market segment offers product alternatives that are somewhat more expensive than the mass market, but are significantly differentiated in the potential experiences that can be offered (and with much less friction than the “pro market”).
The Oculus Go, the Xiaomi Mi VR and the Lenovo Mirage Solo are the most notable products in this segment, and the fastest growing; they represent a new wave of products that will continue to roll out. This segment could be the tipping point at which we move from the early adopters to the early majority in the VR product adoption curve.
Oculus also recently announced that it will ship a new headset called Quest this spring, which will sell for $399 and will be the most powerful example of a premium device to date. The all-in price range of ~$200–400 places these devices in a segment consumers are already conditioned to pay (think iPads, gaming consoles, etc.), and they offer differentiated experiences primarily because they are standalone devices.
More than 300 million customers of the Marriott hotel group were affected by the hack of the multinational. And you're still surprised?! For the nearly 25 years that ZATAZ has existed (20 years as zataz.com), I have been reporting, flagging and explaining the thousands of data leaks that ...
Augmented reality is a very buzzy space, but the fundamental technologies underpinning it are pushing boundaries across a lot of other verticals. Tech like machine learning, object recognition and visual mapping are the pillars of plenty of new ventures, enabling companies to thrive in the overlap.
Phiar (pronounced "fire") is building an augmented reality navigation app for drivers. The same tech that helps drivers easily pinpoint their next turn also builds up rich mapping data, which can supply partners like autonomous car startups with the high-quality data they badly need.
The SF-based company has just closed a $3 million seed deal led by Norwest Venture Partners and The Venture Reality Fund. Other investors include Anorak Ventures, Mayfield Fund, Zeno Ventures, Cross Culture Ventures, GFR Fund, Y Combinator, Innolinks Ventures and Half Court Ventures.
While phone and headset-based AR have received a lot of the broader media attention, the automotive industry is a central focus for a lot of augmented reality startups attracted by the proposition of a mobile environment that can showcase and integrate bulky tech. There have certainly been quite a few heads-up display startups looking to take advantage of a car’s windshield real estate, and prior to joining Y Combinator, Phiar was actually looking to build some of this hardware themselves before deciding on a more software-focused route for the company.
Unlike a lot of phone AR apps built on top of Apple's or Google's developer platforms, Phiar's use case doesn't quite fit the limitations of those systems, which understandably weren't built with the idea that a user would be moving at 60 miles per hour. As a result, the company has had to build tech to better understand the geometry of a quickly changing world through a single camera, while ensuring the result isn't just an ugly directional overlay, using techniques like real-time occlusion so that the digital and physical worlds interact nicely.
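Real-time occlusion boils down to a per-pixel depth test: a virtual pixel is drawn only when it is closer to the camera than the real scene behind it. The sketch below illustrates the idea with toy data; the names and values are hypothetical and not Phiar's actual implementation.

```python
# Toy illustration of depth-based occlusion for an AR overlay.
# Hypothetical code; it shows the general technique only.

def composite_with_occlusion(scene_depth, overlay_color, overlay_depth, background):
    """Return a frame where overlay pixels are drawn only when the
    virtual object is closer to the camera than the real scene."""
    frame = []
    for row_d, row_c, row_od, row_bg in zip(scene_depth, overlay_color,
                                            overlay_depth, background):
        out_row = []
        for d, c, od, bg in zip(row_d, row_c, row_od, row_bg):
            # Draw the virtual pixel only if it sits in front of the real world.
            out_row.append(c if (c is not None and od < d) else bg)
        frame.append(out_row)
    return frame

# 2x2 example: real scene depths in meters, a virtual arrow at 5 m.
scene_depth   = [[10.0, 3.0], [10.0, 10.0]]   # a car 3 m away occupies one pixel
overlay_color = [["arrow", "arrow"], [None, None]]
overlay_depth = [[5.0, 5.0], [5.0, 5.0]]
background    = [["road", "car"], ["road", "road"]]

print(composite_with_occlusion(scene_depth, overlay_color, overlay_depth, background))
# The arrow pixel behind the nearby car stays hidden.
```

In a real pipeline the scene depth comes from monocular depth estimation on the camera feed, which is exactly the hard part Phiar had to build.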
While the startup's big consumer-facing play is the free AR mobile app, Phiar is an augmented reality company only on the surface; its real sell is what it can do with the data and insights gathered from an always-on dash camera. The same object recognition tech that lets the app seamlessly toss AR animations onto the scene in front of you is also analyzing that environment and uploading metadata to build up its mapping insights.
In addition, the app saves up to 30 minutes of footage from each ride, offering users the utility of a free dash cam in case they get in an accident and need video for an insurance claim, while providing some rich anonymized data for the company to build up high quality mapping data it can sell to partners.
This kind of data is incredibly useful to companies building autonomous car tech, ride sharing companies and a lot of entities that are interested in access to quickly-updating map data. The challenge for Phiar will be building up enough users so that their map data is as rich as their partners will demand.
CEO Chen-Ping Yu says the startup is in talks with partners in the automotive space to integrate its tech and is also working to bring what it has built to companies in the ride-sharing space. Yu says the company plans to release its consumer app in mid-2019.
AWS announced a new time series database today at AWS re:Invent in Las Vegas. The new product, called Amazon Timestream, is a fully managed database designed to track items over time, which can be particularly useful for Internet of Things scenarios.
“With time series data each data point consists of a timestamp and one or more attributes and it really measures how things change over time and helps drive real time decisions,” AWS CEO Andy Jassy explained.
He sees a problem, though, with existing open-source and commercial solutions, which he says don't scale well and are hard to manage. This is, of course, exactly the kind of problem a cloud service like AWS often helps solve.
Not surprisingly, as customers were looking for a good time series database solution, AWS decided to create one itself: Amazon Timestream, a fully managed time series database service.
Jassy said that they built Timestream from the ground up with an architecture that organizes data by time intervals and enables time-series-specific data compression, which leads to less scanning and faster performance.
He claims it will be a thousand times faster at a tenth of the cost, and of course it scales up and down as required and includes the analytics capabilities you need to understand the data you are tracking.
This new service is available across the world starting today.
Amazon last year dismissed the idea of getting into the blockchain with AWS, but today that’s changed. The company announced a new service called Amazon Quantum Ledger Database or QLDB, which is a fully managed ledger database with a central trusted authority. The service, which is launching into preview today, offers an append-only, immutable journal that tracks the history of all changes, Amazon said.
And all the changes are cryptographically chained and verifiable.
The company announced the product on stage today at AWS re:Invent, noting QLDB's other features, including its transparent nature, ability to automatically scale up or down as needed, ease of use, and speed. The database can execute two to three times more transactions, Amazon claimed, compared with existing products.
“It will be really scalable, you’ll have a much more flexible and robust set of APIs for you to make any kind of changes or adjustments to the ledger database,” said Andy Jassy, AWS CEO, in describing the new QLDB offering.
On the QLDB website, Amazon explains the new database in more depth:
Amazon QLDB is a new class of database that eliminates the need to engage in the complex development effort of building your own ledger-like applications. With QLDB, your data’s change history is immutable – it cannot be altered or deleted – and using cryptography, you can easily verify that there have been no unintended modifications to your application’s data. QLDB uses an immutable transactional log, known as a journal, that tracks each application data change and maintains a complete and verifiable history of changes over time. QLDB is easy to use because it provides developers with a familiar SQL-like API, a flexible document data model, and full support for transactions. QLDB is also serverless, so it automatically scales to support the demands of your application. There are no servers to manage and no read or write limits to configure. With QLDB, you only pay for what you use.
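The core mechanism described above, an append-only journal whose entries are cryptographically chained so that any alteration is detectable, can be sketched in a few lines. This is an illustrative toy, not the QLDB implementation.

```python
# Minimal sketch of an append-only, hash-chained journal with a central
# trusted authority. Hypothetical code illustrating the concept.
import hashlib
import json

class Journal:
    def __init__(self):
        self.entries = []  # append-only list of (payload, chain_hash)

    def append(self, change):
        """Record a data change, chaining its hash to the previous entry."""
        prev = self.entries[-1][1] if self.entries else "genesis"
        payload = json.dumps(change, sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        self.entries.append((payload, digest))

    def verify(self):
        """Recompute the chain; any altered entry breaks every later hash."""
        prev = "genesis"
        for payload, digest in self.entries:
            if hashlib.sha256((prev + payload).encode()).hexdigest() != digest:
                return False
            prev = digest
        return True

j = Journal()
j.append({"doc": "vehicle-42", "owner": "alice"})
j.append({"doc": "vehicle-42", "owner": "bob"})
print(j.verify())   # True

# Tampering with history is detectable on verification.
j.entries[0] = ('{"doc": "vehicle-42", "owner": "mallory"}', j.entries[0][1])
print(j.verify())   # False
```

Unlike a blockchain, there is a single trusted writer here; the cryptographic chaining provides verifiability, not decentralized consensus, which matches the distinction Amazon draws between QLDB and Managed Blockchain.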
“Amazon Managed Blockchain is a fully managed service that allows you to set up and manage a scalable blockchain network with just a few clicks,” Amazon said in an announcement. The product eliminates the overhead required to create the network and automatically scales to meet the demands of thousands of applications running millions of transactions, it said.
It also manages your certificates, lets you easily invite new members to join the network, and tracks operational metrics such as usage of compute, memory, and storage resources.
Managed Blockchain is able to replicate an immutable copy of your blockchain network activity into Amazon Quantum Ledger Database (QLDB), which lets you analyze the network activity outside the network and gain insights into trends.
Interested customers can sign up for Amazon Managed Blockchain preview here.
And those with applications that need an immutable and verifiable ledger database can try out Amazon QLDB here.
The United States Army awarded a $480 million contract to Microsoft that will equip military personnel with prototype versions of HoloLens intended to increase “lethality, mobility, and situational awareness.”
HoloLens is an all-in-one augmented reality headset from Microsoft that first shipped in 2016 for $3,000. Its robust tracking system constantly maps the world while overlaying digital objects onto the central area of its wearer's vision. While HoloLens isn't great for immersive games like the Rift, Vive or PSVR headsets that take you to another world, its wireless design, high-quality tracking of the real world and high price make the system ideal for entirely different use cases. It has been used on the International Space Station, and NASA has used it to visualize rovers long before they make the trip to Mars. A few developers have also tried carving out a niche on the headset by delivering applications to companies for internal use.
With the U.S. Army and its $480 million award for an “Integrated Visual Augmentation System,” the plan is to procure “approximately two thousand five hundred & fifty IVAS prototypes (to include hardware, software, and the associated interface control documentation) in four increments or “capability sets”.
“Augmented reality technology will provide troops with more and better information to make decisions,” a statement from Microsoft reads. “This new work extends our longstanding, trusted relationship with the Department of Defense to this new area.”
A report from Bloomberg suggests the award could eventually lead the military to purchase more than 100,000 headsets from Microsoft.
You can check out some documents on the Federal Business Opportunities website which outline the overall aims of the Army program. I’ve uploaded the “Statement of Objectives” which was posted in August, which outlines the scope and aim of the program.
“Current and future battles will be fought with small distributed formations in urban and subterranean environments where current capabilities are not sufficient, a recognized training capability gap the Government has sought to fill since 2009. The IVAS will address this shortfall by providing increased sets and repetitions in complex environments,” the document reads. “Soldier lethality will be vastly improved through cognitive training and advanced sensors, enabling squads to be first to detect, decide, and engage.”
Walking into the office of Viktor Prokopenya — which overlooks a central London park — you would perhaps be forgiven for missing the significance of this unassuming location, just south of Victoria Station in London. While giant firms battle globally to make augmented reality a “real industry,” this jovial businessman from Belarus is poised to launch a revolutionary new technology for just this space. This is the kind of technology some of the biggest companies in the world are snapping up right now, and yet, scuttling off to make me a coffee in the kitchen is someone who could be sitting on just such a company.
Regardless of whether its immediate future is obvious or not, AR has a future if the amount of investment pouring into the space is anything to go by.
In 2016, AR and VR attracted $2.3 billion worth of investment (a 300 percent jump from 2015), and the market is expected to reach $108 billion by 2021, 25 percent of which will be aimed at the AR sector. According to numerous forecasts, AR will overtake VR in five to ten years.
Apple is clearly making headway in its AR developments: it recently acquired AR lens company Akonia Holographics, and this month's release of iOS 12 lets developers fully utilize ARKit 2, no doubt prompting a new wave of camera-centric apps. This year, Sequoia Capital China and SoftBank invested $50 million in AR camera app Snow. Samsung recently introduced its version of the AR cloud, along with a partnership with Wacom that turns Samsung's S-Pen into an augmented reality magic wand.
The IBM/Unity partnership allows developers to integrate into their Unity applications Watson cloud services such as visual recognition, speech to text and more.
So there is no question that AR is becoming increasingly important, given the sheer amount of funding and M&A activity.
Joining the field is Prokopenya's "Banuba" project. Although you can download a Snapchat-like app called "Banuba" from the App Store right now, underlying it is a suite of tools in which Prokopenya is the founding investor, and he is working closely with the founding team of AI/AR experts behind it to realize a very big vision.
The key to Banuba’s pitch is the idea that its technology could equip not only apps but even hardware devices with “vision.” This is a perfect marriage of both AI and AR. What if, for instance, Amazon’s Alexa couldn’t just hear you? What if it could see you and interpret your facial expressions or perhaps even your mood? That’s the tantalizing strategy at the heart of this growing company.
Better known for its consumer apps, which have effectively been testing its concepts in the consumer field for the last year, Banuba is about to move heavily into the world of developer tools with the release of its new Banuba 3.0 mobile SDK. (Available to download now in the App Store for iOS devices and the Google Play Store for Android.) It has also now secured a further $7 million in funding from Larnabel Ventures, the fund of Russian entrepreneur Said Gutseriev, and Prokopenya's VP Capital.
This move will take its total funding to $12 million. In the world of AR, this is like a Romulan warbird de-cloaking in a scene from Star Trek.
Banuba hopes that its SDK will enable brands and apps to utilise 3D Face AR inside their own apps, meaning users can benefit from cutting-edge face motion tracking, facial analysis, skin smoothing and tone adjustment. Banuba’s SDK also enables app developers to utilise background subtraction, which is similar to “green screen” technology regularly used in movies and TV shows, enabling end-users to create a range of AR scenarios. Thus, like magic, you can remove that unsightly office surrounding and place yourself on a beach in the Bahamas…
Because Banuba’s technology equips devices with “vision,” meaning they can “see” human faces in 3D and extract meaningful subject analysis based on neural networks, including age and gender, it can do things that other apps just cannot do. It can even monitor your heart rate via spectral analysis of the time-varying color tones in your face.
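Remote heart-rate estimation of this kind typically works by tracking the subtle periodic fluctuation in skin color frame to frame and finding the dominant frequency in the plausible pulse band. The sketch below demonstrates the spectral-analysis step on synthetic data using NumPy; Banuba's actual pipeline is not public, so treat this purely as an illustration of the principle.

```python
# Toy illustration: estimate heart rate from a periodic color-intensity
# signal via spectral analysis. Synthetic data, hypothetical parameters.
import numpy as np

fps = 30.0                               # camera frame rate
t = np.arange(0, 10, 1 / fps)            # 10 seconds of frames
bpm_true = 72
# Skin brightness oscillates subtly at the pulse frequency, plus noise.
signal = 0.01 * np.sin(2 * np.pi * (bpm_true / 60.0) * t)
signal += 0.002 * np.random.default_rng(0).standard_normal(t.size)

# Find the dominant frequency in the human heart-rate band (40-180 bpm).
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(t.size, d=1 / fps)
band = (freqs >= 40 / 60) & (freqs <= 180 / 60)
peak_hz = freqs[band][np.argmax(spectrum[band])]
print(round(peak_hz * 60))               # recovers roughly 72 bpm
```

Restricting the search to the physiological band is what makes this robust to slow lighting drift and high-frequency sensor noise.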
It has already been incorporated into an app called Facemetrix, which can track a child’s eyes to ascertain whether they are reading something on a phone or tablet or not. Thanks to this technology, it is possible to not just “track” a person’s gaze, but also to control a smartphone’s function with a gaze. To that end, the SDK can detect micro-movements of the eye with subpixel accuracy in real time, and also detects certain points of the eye. The idea behind this is to “Gamify education,” rewarding a child with games and entertainment apps if the Facemetrix app has duly checked that they really did read the e-book they told their parents they’d read.
If that makes you think of a parallel with a certain Black Mirror episode where a young girl is prevented from seeing certain things via a brain implant, then you wouldn’t be a million miles away. At least this is a more benign version…
Banuba's SDK also includes "Avatar AR," empowering developers to get creative with digital communication by giving users the ability to interact with, and create personalized, avatars using any iOS or Android device. Prokopenya says: "We are in the midst of a critical transformation between our existing smartphones and the future of AR devices, such as advanced glasses and lenses. Camera-centric apps have never been more important because of this." He says that while developers using ARKit and ARCore are able to build experiences primarily for top-of-the-range smartphones, Banuba's SDK can work on even low-range smartphones.
Why should users of Apple's iPhone X be the only people to enjoy Animoji?
Banuba is also likely to take advantage of the news that Facebook recently announced it was testing AR ads in its newsfeed, following trials for businesses to show off products within Messenger.
Banuba's technology won't simply be for fun apps, however. Inside two years, the company has filed 25 patent applications with the U.S. patent office, and six of those were processed in record time compared with the average. Its R&D center, staffed by 50 people and based in Minsk, is focused on developing a portfolio of technologies.
Interestingly, Belarus has become famous for AI and facial recognition technologies.
For instance, cast your mind back to early 2016, when Facebook bought Masquerade, a Minsk-based developer of a video filter app, MSQRD, which at one point was one of the most popular apps in the App Store. And in 2017, another Belarusian company, AIMatter, was acquired by Google, only months after raising $2 million. It too took an SDK approach, releasing a platform for real-time photo and video editing on mobile, dubbed Fabby. This was built upon a neural network-based AI platform. But Prokopenya has much bolder plans for Banuba.
In early 2017, he and Banuba launched a “technology-for-equity” program to enroll app developers and publishers across the world. This signed up Inventain, another startup from Belarus, to develop AR-based mobile games.
Prokopenya says the technologies associated with AR will be “leveraged by virtually every kind of app. Any app can recognize its user through the camera: male or female, age, ethnicity, level of stress, etc.” He says the app could then respond to the user in any number of ways. Literally, your apps could be watching you.
So, for instance, a fitness app could see how much weight you’d lost just by using the Banuba SDK to look at your face. Games apps could personalize the game based on what it knows about your face, such as reading your facial cues.
Back in his London office, overlooking a small park, Prokopenya waxes lyrical about the “incredible concentration of diversity, energy and opportunity” of London. “Living in London is fantastic,” he says. “The only thing I am upset about, however, is the uncertainty surrounding Brexit and what it might mean for business in the U.K. in the future.”
London may be great (and will always be), but sitting on his desk is a laptop with direct links back to Minsk, a place where the facial recognition technologies of the future are only now just emerging.
Exploring Just How Marketers Are Using Personalized Chatbots To Drive Business, Lower Costs & Make Customers Happier
In today’s day and age, everything is going digital. Nearly everything is becoming automated or streamlined, and businesses are increasingly looking for new ways to implement cutting-edge and innovative technology like artificial intelligence, virtual reality, and augmented reality.
For instance, in recent years, AI-equipped chatbots have become the talk of the town when it comes to modern business. Not only do these bots allow businesses to connect with their customers in exciting, engaging, and interactive new ways, they also help them streamline some of their customer service processes.
Furthermore, chatbots have become so popular among businesses that they have even made their way into the marketer's arsenal of strategies. How? Read on below!
Chatbots offer Personalized Content Automation
Chatbots run on artificial intelligence technology, so they have the smarts needed to compile data, analyze it, and then make informed decisions based on it.
When interacting with customers, this technology allows a chatbot to recommend content based on previous conversations, purchase histories, and other buying trends, totally unique to the individual customer.
In today’s day and age, the goal of every marketing specialist is to figure out a way to personalize a product, service, or an experience for every unique customer. Now with chatbots, marketers can do just that!
Picture this: you message a chatbot for an online retailer, inquiring about the availability of a specific product. The chatbot may answer your question first, but then it could recommend a similar product that customers also look at when purchasing the one you originally asked about. We see this all over online retail, right? But here's the difference: that chatbot can now message you on your next visit with a set of recommendations based on your previous conversations.
Not only can the chatbot recommend products for customers, it can also then talk about those products in real time. Customers can interact with the chatbot about different products, ask about the differences between multiple products, and even ask for recommendations based on recent reviews. Essentially, each individual customer now has their own personal chatbot shopper, completely tailored to their individual tastes, interests, and preferences. Pretty amazing, isn't it? You can easily see why chatbots are a marketing specialist's dream!
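At its simplest, the "recommend based on purchase history" step described above is a co-occurrence lookup: suggest the items most often bought alongside the product the customer asked about. A minimal sketch, with made-up product names and data:

```python
# Hypothetical sketch of history-based recommendations a chatbot could use.
from collections import Counter

purchase_histories = [
    ["running shoes", "socks"],
    ["running shoes", "socks", "water bottle"],
    ["yoga mat", "water bottle"],
]

def recommend(product, histories, k=2):
    """Suggest the items most often bought alongside `product`."""
    co = Counter()
    for basket in histories:
        if product in basket:
            co.update(item for item in basket if item != product)
    return [item for item, _ in co.most_common(k)]

print(recommend("running shoes", purchase_histories))
# ['socks', 'water bottle']
```

Production systems layer much more on top (conversation context, collaborative filtering, review signals), but this is the core shape of the logic.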
Digital Solution For A Digital World
At the end of the day, everything comes down to results. For businesses, this is no different. Marketing specialists are constantly tasked with finding ways to drive business through digital means. Chatbots serve as the perfect vehicle for this to happen. And as the world becomes increasingly more geared towards digital technology, you can be sure that businesses will also adapt towards the same.
Driving results means connecting with customers and clients on more than just the surface level. Customers expect a personalized experience now more than ever, and chatbots are exactly what’s going to get them there!
Imagine the Bot as If It Was A Funnel
The best way to explain how marketers use bots to drive results is to imagine a funnel. At the top of the funnel, ingredients (in our case, users) are poured in. In the middle, the mixture of ingredients sorts itself out. At the bottom, everything comes out in a slow, steady flow.
Here is a look at the typical website funnel vs a chatbot funnel.
In the marketer's eyes, once users are first introduced, the key is to make sure they are taken care of well enough to keep them coming back for more (the middle of the funnel). Chatbots are very good at finding out what users' interests are and then automatically sending them the right messaging, thereby increasing their desire. Once their desire is increased, their odds of buying go way up!
After the user purchases, the bot can send them order updates, handle logistics like shipping and tracking, and even be used to up-sell similar products!
Bots can manage the entire customer journey from discovery, to desire, to purchase, to delivery, to purchasing again.
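That discovery-to-repeat journey can be modeled as a simple state machine the bot tracks per user, choosing its next message from the user's current stage. A minimal illustrative sketch (stage names and messages are made up):

```python
# Hypothetical sketch: the customer journey as a per-user state machine.
STAGES = ["discovery", "desire", "purchase", "delivery", "repeat"]

NEXT_MESSAGE = {
    "discovery": "Welcome! Here's what we offer.",
    "desire":    "Based on your interests, you might like these.",
    "purchase":  "Your order is confirmed.",
    "delivery":  "Here's your tracking link.",
    "repeat":    "Customers who bought this also liked...",
}

def advance(stage):
    """Move a user one step down the funnel (staying at the final stage)."""
    i = STAGES.index(stage)
    return STAGES[min(i + 1, len(STAGES) - 1)]

stage = "discovery"
for _ in range(3):
    stage = advance(stage)
print(stage, "->", NEXT_MESSAGE[stage])
```

Keeping the stage explicit is what lets the bot send the right message at the right moment instead of a one-size-fits-all broadcast.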
Now, here's where the bot comes in.
Top Of The Funnel
This is precisely where a bot can help draw customers into your business, goods, or services: through email campaigns, bot marketing, homepage engagements when users visit your website, and even popups on relevant websites. Whatever you can do to ensure your bot engages potential users early on will help draw customers into the top of the funnel.
Middle Of The Funnel
The middle of the funnel involves everything from purchases and transactions to the exchange of services. This is where the action happens, and the best way your bot can help is by ensuring everything runs smoothly. For instance, your bot can provide support by reaching out to customers and making sure they're satisfied with their product or service. It can help users diagnose problems that arise and design a personalized solution unique to each user.
This is the real spot where your bot can shine!
Bottom Of The Funnel
After your users have completed their purchase, abandoned their cart without a sale, or are seeking shipment and tracking information, your bot can be programmed to reach out to them personally with details related to their specific purchase, their unique order, or items that they have viewed through your website. It can help to recommend other products, offer them a coupon to convince them to pick up that once-abandoned cart, and send out detailed tracking information so that your users can be sure of exactly when they can expect to receive their order.
The key here is that your bot makes it clear it doesn't simply forget the users who complete a sale or leave products behind without purchasing them. Remember: in an increasingly digitized world, the most important thing a business can do is create a completely personalized user experience, and bots are no different!
Need a website bot? If you’re looking to connect with your customers or clients on a more personal level, give .BOT a shot! .BOT is a domain registry service that allows companies to easily register domains for their bots and integrate them into their current workflow.
Monday, November 12, was a dark day for fans of comics and superheroes. Stan Lee, the famous American comic book writer and editor, died of acute pneumonia at the age of 95. An emblematic figure at Marvel, Stan Lee co-created the franchise's most iconic heroes, including Spider-Man, Iron Man, Hulk, Thor and Black Panther.
For many, Stan Lee is considered the father of Marvel's superheroes, and his passing naturally touches entire generations who grew up with, and identified with, those superheroes.
As you might expect, many artists have paid tribute to a figure as emblematic in the world of comics as Stan Lee. At Creapills, we couldn't help but share the ones that touched us most, so below you'll find 30 of the most beautiful creative tributes to Stan Lee.
Kelly Peng is an electrical and optical engineer, and founder of Kura AR. She’s built a fusion reactor, a Raman spectrometer, a DIY structured light camera, a linear particle accelerator, and emotional classifiers for likes and dislikes. In short, we have someone who can do anything, and she came in to talk about one of the dark arts (pun obviously intended): optics.
The entire idea of Kura AR is to build an immersive augmented reality experience, and when it comes to AR glasses, there are two ways of doing it. You could go the Google Glass route and use a small OLED and lenses, but these displays aren't very bright. Alternatively, you could use a diffractive waveguide, like the HoloLens. This is a lot more difficult to manufacture, but the payoff is a much larger field of view and a much more immersive experience.
The lens that Kelly is using in her AR headset is basically a diffraction grating, or a series of parallel lines on a piece of plastic. These diffraction gratings reflect light, but the effect depends on the wavelength. Therefore, for a full-color system, you need three layers: one for red, one for blue, and another for green. The trick is how to manufacture this. Kelly took a HoloLens lens apart and examined it with an electron microscope; it appears to be made via fancy, and expensive, photolithography.
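The wavelength dependence, and hence the need for one grating layer per color, follows directly from the grating equation d·sin(θ) = m·λ. A quick sketch with an illustrative grating pitch (the value below is hypothetical, not Kura's actual design):

```python
# Grating equation d * sin(theta) = m * lambda: each wavelength diffracts
# at a different angle, so a full-color waveguide needs a layer per color.
import math

def diffraction_angle_deg(pitch_nm, wavelength_nm, order=1):
    """First-order (by default) diffraction angle at normal incidence."""
    s = order * wavelength_nm / pitch_nm
    if abs(s) > 1:
        return None  # no propagating diffraction order for this geometry
    return math.degrees(math.asin(s))

pitch = 1000.0  # nm, hypothetical grating period
for name, wl in [("red", 650), ("green", 532), ("blue", 450)]:
    print(name, round(diffraction_angle_deg(pitch, wl), 1))
# Red, green and blue come out at noticeably different angles.
```

Tuning each layer's pitch to its color is how the three-layer stack steers all of them where they need to go.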
There is another way, though. The feature sizes on this diffraction grating aren't too small, and this could conceivably be done through injection molding. With a lot of coding, simulation, and testing, Kelly realized this was manufacturable with fairly standard injection molding processes, would cost only about $60,000 upfront, and would produce a part for one dollar. That's much better than whatever process is going into the HoloLens, and an amazing technical feat that brings the future of AR closer than ever.
This talk gets deep into diffractive optics. It’s jam-packed with the kind of technical detail you’ll need to know if you’re going to hack together your own AR / VR system. In short, it’s the kind of real-world technical talk that we love. Sit back with some popcorn and your notepad.
Augmented reality has been on marketers' minds for years now, and there's a good reason for it. Augmented reality (AR) is a technology that layers computer-generated images on top of the real world. With the pervasiveness of mobile devices around the globe, the majority of consumers have instant access to AR-friendly devices. All they need is a smartphone with an Internet connection, a high-resolution screen and a camera viewfinder. It's then up to you as a marketer or developer to create digital animations to superimpose on top of their world.
This reality-bending technology is consistently named as one of the hot development and design trends of the year. But how many businesses and marketers are actually making use of it?
As with other cutting-edge technologies, many have been reluctant to adopt AR into their digital marketing strategy.
Part of it is due to the upfront cost of using and implementing AR. There’s also the learning curve to think about when it comes to designing new kinds of interactions for users. Hesitation may also come from marketers and designers because they’re unsure of how to use this technology.
Augmented reality has some really interesting use cases that you should start exploring for your mobile app. The following post will provide you with examples of what’s being done in the AR space now and hopefully inspire your own efforts to bring this game-changing tech to your mobile app in the near future.
Augmented Reality: A Game-Changer You Can’t Ignore
Unlike virtual reality, which requires users to purchase pricey headsets in order to be immersed in an altered experience, augmented reality is a more feasible option for developers and marketers. All your users need is a device with a camera that allows them to engage with the external world, instead of blocking it out entirely.
And that’s essentially the crux of why AR will be so important for mobile app companies.
This is a technology that enables mobile app users to view the world through your “filter.” You’re not asking them to get lost in another reality altogether. Instead, you want to merge their world with your own. And that’s something websites have largely been unable to accomplish, since most web interactions lack this level of immersion.
Let’s take e-commerce websites, for example. Although e-commerce sales increase year after year, people still flock to brick-and-mortar stores in droves (especially for the holiday season). Why? Well, part of it has to do with the fact that they can get their hands on products, test things out and talk to people in real time as they ponder a purchase. Online, it’s a gamble.
As you can imagine, AR in a mobile app can change all that. Augmented reality allows for more meaningful engagements between your mobile app (and brand) and your user. That’s not all though. Augmented reality that connects to geolocation features could make users’ lives significantly easier and safer too. And there’s always the entertainment application of it.
“Augmented reality will be a valuable addition to a lot of existing web pages. For example, it can help people learn on education sites and allow potential buyers to visualize objects in their home while shopping.”
But those aren’t the only applications of AR in mobile apps, which is why I think many mobile app developers and marketers have shied away from it thus far. There are some really interesting examples of this out there though, and I’d like to introduce you to them in the hopes it’ll inspire your own efforts in 2019 and beyond.
Social Media AR
For many of us, augmented reality is already part of our everyday lives, whether we’re the ones using it or we’re viewing content created by others using it. What am I talking about? Social media, of course.
There are three platforms, in particular, that make use of this technology right now.
Snapchat could have included a basic camera integration so that users could take and send photos and videos of themselves to others. But it’s taken it a step further with face mapping software that allows users to apply different “filters” to themselves. Unlike traditional filters which alter the gradients or saturation of a photo, however, these filters are often animated and move as the user moves.
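To make that contrast concrete: a traditional photo filter is just a per-pixel transform, while the AR filters described above track a face mesh and re-render animated content every frame. A minimal Python sketch of the traditional kind, a saturation adjustment (the function and values are illustrative, not from any particular app):

```python
def adjust_saturation(pixel, factor):
    """Scale a pixel's saturation by blending toward its grayscale luminance."""
    r, g, b = pixel
    # Rec. 601 luma approximation of perceived brightness.
    luma = 0.299 * r + 0.587 * g + 0.114 * b
    return tuple(round(luma + (c - luma) * factor) for c in (r, g, b))

# factor=0 collapses the pixel to grayscale; factor=1 leaves it unchanged.
print(adjust_saturation((200, 100, 50), 0.0))  # -> (124, 124, 124)
print(adjust_saturation((200, 100, 50), 1.0))  # -> (200, 100, 50)
```

An AR filter cannot be written as a function of a single pixel like this; it needs the geometry of the tracked face, which is what makes it feel alive rather than like a static overlay.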
Instagram is another social media platform that has adopted this tech:
Instagram’s Stories allow users to apply augmented filters that “stick” to the face or screen. As with Snapchat, there are some filters that animate when users open their mouths, raise their eyebrows or make other movements with their faces.
One other channel that’s gotten into this — though it isn’t really a social media platform at all — is Facebook’s Messenger service:
Seeing as how users have flocked to AR filters on Snapchat and Instagram, it makes sense that Facebook would want to get in on the game with its mobile property.
Your mobile app doesn’t have to be a major social network in order to reap the benefits of image and video filters.
If your app provides a networking or communication component — in-app chat with other users, photo uploads to profiles and so on — you could easily adopt similar AR filters to make the experience more modern and memorable for your users.
Video Objects AR
It’s not just your users’ faces that can be mapped and altered through the use of augmented reality. Spaces can be mapped as well.
While I will go on to talk about pragmatic applications of space mapping and AR shortly, I do want to address another way in which it can be used.
At first glance, it might appear to be just another mobile app that enables users to draw on their photos or videos. But what’s interesting about this is the 3D and “sticky” aspects of it. Users can draw shapes of all sizes, colors and complexities within a 3D space. Those elements then stick to the environment. No matter where the users’ cameras move, the objects hold in place.
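The reason drawings “hold in place” is that the app stores each stroke in world coordinates and re-expresses it in the camera’s moving frame on every render. A toy 2D sketch of that idea (pure Python; the function names are mine, not from any particular app):

```python
import math

def world_to_camera(point, cam_pos, cam_angle):
    """Express a fixed world-space point in a camera's local frame.

    As the camera translates or rotates, the point's camera-space
    coordinates change, but its world coordinates never do -- which is
    why anchored content appears to "stick" to the environment.
    """
    dx, dy = point[0] - cam_pos[0], point[1] - cam_pos[1]
    cos_a, sin_a = math.cos(-cam_angle), math.sin(-cam_angle)
    return (dx * cos_a - dy * sin_a, dx * sin_a + dy * cos_a)

anchor = (2.0, 3.0)  # a stroke anchored in world space
print(world_to_camera(anchor, (0.0, 0.0), 0.0))  # camera at origin
print(world_to_camera(anchor, (1.0, 1.0), 0.0))  # camera moved; the anchor stays put
```

Real AR frameworks do the same thing in 3D with full camera poses estimated from visual-inertial tracking, but the principle is identical.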
LeoApp AR is another app that plays with space in a fun way:
As you can see here, I’m attempting to map this gorilla onto my desk, but any flat surface will do.
I now have a dancing gorilla making moves all over my workspace. This isn’t the only animation you can put into place, and it’s not the only size either. Other holographic animations can be sized to fit your actual physical space, in case you want to chill out side by side with them or have them accompany you as you give a presentation.
The examples above aren’t the full range of what can be done with these mobile apps. While users could use them for social networking purposes (alongside other AR filters), I think an even better use would be to liven up professional video.
Video plays such a big part in marketing and will continue to do so in the future. It’s also something we can all readily do now with our smartphones; no special equipment is needed.
As such, I think that adding 3D messages or objects into a branded video might be a really cool use case for this technology. Rather than tailor your mobile app to consumers who are already enjoying the benefits of AR on social media, this could be marketed to businesses that want to shake things up for their brand.
Gaming AR

Thanks to all the hubbub surrounding Pokémon Go a few years back, gaming is one of the better-known examples of augmented reality in mobile apps today.
The app is still alive and well and that may be because we’re not hearing as many stories about people becoming seriously injured (or even dying) from playing it anymore.
This is something that should be taken into close consideration before developing an AR mobile app. When you ask users to take part in augmented reality outside the safety of a confined space, there’s no way to control what they do afterwards. And that could do some serious damage to your brand if users get injured while playing or just generally wreak havoc out in public (like all those Pokémon Go players who were banned from restaurants).
The app maps a flat surface — be it a smaller version on a desk or a larger version placed on your floor — and allows users to shoot hoops. It’s a great way to distract and entertain oneself or even challenge friends, family or colleagues to a game of HORSE.
You could, of course, build an entire mobile app around an AR game as these two examples have shown.
You could also think of ways to gamify other mobile app experiences with AR. I imagine this could be used for something like a restaurant app. For example, a pizza restaurant wants to get more users to install the app and to order food from them. With a big sporting event like the Super Bowl coming up, a “Play” tab is added to the app, letting users throw pizzas down the field. It would certainly be a fun distraction while waiting for their real pizzas to arrive.
Bottom line: get creative with this. AR games aren’t just for gaming apps.
Home Improvement AR
As you’ve already seen, augmented reality enables us to map physical spaces and stick interactive objects to them. In the case of home improvement, this technology is being used to help consumers make purchasing decisions from the comfort of their home (or at work, or on their commute, and so on).
IKEA is one such brand that’s capitalized on this opportunity.
To start, here is my attempt at shopping for a new desk for my workspace. I selected the product I was interested in and then I placed it into my office. Specifically, I put the accurately sized 3D desk projection in front of my current desk, so I could get a sense for how the two differ and how this new one would fit.
While product specifications online are all well and good, consumers still struggle with making purchases since they can’t truly envision how those products will (physically) fit into their lives. The IKEA Place app is aiming to change all of that.
The IKEA app is also improving the shopping experience with the feature above.
Users open their camera and point it at any object they find in the real world. Maybe they were impressed by a bookshelf they saw at a hotel they stayed in or they really liked some patio chairs their friends had. All they have to do is snap a picture and let IKEA pair them with products that match the visual description.
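Under the hood, visual search like this typically embeds the snapshot and every catalogue item as feature vectors and ranks items by similarity. A minimal sketch with hand-made vectors (a real system would get its embeddings from a trained image model; the catalogue here is entirely hypothetical):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_match(query, catalogue):
    """Return the catalogue item whose embedding is most similar to the query."""
    return max(catalogue, key=lambda item: cosine_similarity(query, catalogue[item]))

catalogue = {
    "bookshelf": [0.9, 0.1, 0.2],
    "patio chair": [0.1, 0.8, 0.3],
    "table set": [0.2, 0.3, 0.9],
}
photo_embedding = [0.15, 0.75, 0.25]  # made-up embedding of the user's snapshot
print(best_match(photo_embedding, catalogue))  # -> "patio chair"
```

The hard part in practice is producing embeddings in which visually similar products land close together; the ranking step itself is this simple.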
As you can see, IKEA has given me a number of options not just for the chair I was interested in, but also a full table set.
If you have or want to build a mobile app that sells products to B2C or B2B consumers, and those products need to fit well into their physical environments, think about what a feature like this could do for your mobile app sales. You could skip scheduling on-site appointments or conducting lengthy phone calls in which salespeople try to convince customers that the product, equipment or furniture will fit. Instead, you let consumers try it for themselves.
Self-Improvement AR

It’s not just the physical spaces of consumers that could use improvement. Your mobile app users want to better themselves as well. In the past, they’d either have to go somewhere in person to try on a new look or they’d have to gamble with an online purchase. Thanks to AR, that isn’t the case anymore.
In the past, these hair color tryouts looked really bad. You’d upload a photo of your face and the website would slap very fake-looking hair onto your head. It would give users an idea of how the color or style worked with their skin tone, eye shape and so on, but it wasn’t always spot-on, which made the experience quite unhelpful.
As you can see here, not only does this app replace my usually mousy-brown hair color with a cool new blond shade, but it stays with me as I turn my head around:
Sephora is another beauty company that’s taking advantage of AR mapping technology.
Here is an example of me feeling not so sure about the makeup palette I’ve chosen. But that’s the beauty of this app. Rather than force customers to buy a bunch of expensive makeup they think will look great or to try and figure out how to apply it on their own, this AR app does all the work.
Anyone remember the movie The Craft? I totally felt like that using this app.
If your app sells self-improvement or beauty products, or simply advises users on next steps they should take, think about how AR could transform that experience. You want your users to be confident when making big changes — whether it be how they wear their makeup for date night or the next tattoo they put on their body. This could be what convinces them to take the leap.
Geo-Related AR

Finally, I want to talk about how AR has transformed, and is about to further transform, users’ experiences in the real world.
Now, I’ve already mentioned Pokémon Go and how it utilizes the GPS of a user’s mobile device. This is what enables them to chase those little critters anywhere they go: restaurants, stores, local parks, on vacation, etc.
But what if we look outside the box a bit? Geo-related AR doesn’t just help users discover things in their physical surroundings. It could simply be used as a way to improve the experience of walking about in the real world.
Think about the last time you traveled to a foreign destination. You may have used a translation guidebook to look up phrases you didn’t know. You might have also asked your voice assistant to translate something for you. But think about how great it would be if you didn’t have to do all that work to understand what’s right in front of you. A road sign. A menu. A magazine article.
In this example, I’ve scanned an English phrase I wrote out: “Where is the bathroom?” Once I selected the language I wanted to translate from and to, as well as indicated which text I wanted to focus on, Google Translate attempted to provide a translation:
It’s not 100% accurate — which may be due to my sloppy handwriting — but it would certainly get the job done for users who need a quick way to translate text on the go.
There are other mobile apps that are beginning to make use of this geo-related AR.
For instance, there’s one called Find My Car that I took for a test spin. I don’t think the technology is fully ready yet, as it couldn’t accurately “pin” my car’s location, but it’s heading in the right direction. In the future, I expect to see more directional apps — especially Google and Apple Maps — use AR to improve directional awareness and guidance for users.
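The core of a “find my car” feature is straightforward: save a GPS fix when the user parks, then compute the distance (and direction) back to it as they walk. A haversine-distance sketch in Python (the coordinates below are made up for illustration):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    R = 6371000  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

parked = (40.7580, -73.9855)  # fix saved when the user parked (hypothetical)
here = (40.7614, -73.9776)    # user's current position
print(round(haversine_m(*parked, *here)), "metres back to the car")
```

The AR part then overlays an arrow and this distance on the camera feed, oriented with the device compass; the positioning math above is the easy, well-understood half of the feature.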
There are challenges in using AR, that’s for sure. The cost of developing AR is one. Finding the perfect application of AR that’s unique to your brand and truly improves the mobile app user experience is another. There’s also the fact it requires users to download a mobile app, so there’s a lot of work to be done to motivate them to do so.
Gimmicks just won’t work — especially if you expect users to download your app and make use of it (remember: retention rates aren’t just about downloads). You have to make the augmented reality feature something that’s worth engaging with. The first place to start is with your data. As Jordan Thomson wrote:
“AR is a lot more dependent on customer activity than VR, which is far older technology and is perhaps most synonymous with gaming. Designers should make use of big data and analytics to understand their customers’ wants and needs.”
I’d also advise you to spend some time in the apps above. Get a sense for how the technology works and discover what makes it so appealing on a personal level. Compare it to your own mobile app’s goals and see if there’s a way to take AR from just being an idea you’re tossing around to a reality.
It was in Beijing, at the Intel AI Devcon conference, that the company chose to unveil this Neural Compute Stick 2, which is obviously not a consumer product. The little stick is as compact as ever and promises artificial intelligence, image recognition and image analysis capabilities thanks to the latest version of the chipmaker’s silicon.
On board the Neural Compute Stick 2 is the latest Myriad X solution, born of the Movidius acquisition. It brings the same capabilities as the previous version, with the option of using several sticks in parallel to speed up computation, while also making prototyping easier aboard all kinds of devices.
The most striking example is building drones that can analyze objects captured by one or more cameras, for instance to avoid obstacles, follow people or count items.
This kind of use is booming, and many outlets can be imagined for image-recognition systems. One can picture what such a setup could do for autonomous delivery services, robots for packaging and order preparation, or video surveillance and automated traffic-enforcement systems, for example. Those are generally the first things that come to mind, but the uses are far more varied:
Imagine systems able to detect the presence of certain bacteria in water simply by analyzing samples under a microscope webcam, easily and without having to call in a specialist. And doing so autonomously, with no remote server required, using nothing more than an algorithm trained locally.
Intel is keeping up its momentum, and by offering a compact, affordable device, the company lets many developers add this type of functionality to all sorts of projects. The next step, of course, will be shipping Myriad X chips in consumer and industrial products. That is probably where Intel will make a real profit: when a carmaker, a camera manufacturer or a robot designer decides to build these chips into its new products at scale.
The Amazon Echo Show and Lenovo Smart Display are two popular smart displays, but which one is right for you? Learn about the displays, speakers, capabilities, and other important features before you decide which to buy.
Google Cloud is getting a managed cron service for running batch jobs. Cloud Scheduler, as the new service is called, provides all the functionality of the kind of standard command-line cron service you probably love to hate, but with the reliability and ease of use of running a managed service in the cloud.
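For readers rusty on cron semantics: a schedule such as */5 * * * * fires whenever the minute-of-hour is divisible by five. A tiny sketch of computing the next firing minute, i.e. the kind of bookkeeping a managed service handles for you:

```python
def next_fire_minute(current_minute, step):
    """Next minute-of-hour at which a "*/step * * * *" schedule fires."""
    return ((current_minute // step) + 1) * step % 60

print(next_fire_minute(12, 5))  # -> 15
print(next_fire_minute(58, 5))  # -> 0 (wraps to the top of the next hour)
```

A real scheduler also handles the other four cron fields, time zones, and what to do when a machine was down at fire time, which is precisely the operational burden a managed service removes.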
The targets for Cloud Scheduler jobs can be any HTTP/S endpoints and Google’s own Cloud Pub/Sub topics and App Engine applications. Developers can manage these jobs through a UI in the Google Cloud Console, a command-line interface and through an API.
“Job schedulers like cron are a mainstay of any developer’s arsenal, helping run scheduled tasks and automating system maintenance,” Google product manager Vinod Ramachandran notes in today’s announcement. “But job schedulers have the same challenges as other traditional IT services: the need to manage the underlying infrastructure, operational overhead of manually restarting failed jobs and lack of visibility into a job’s status.”
As Ramachandran also notes, Cloud Scheduler, which is currently in beta, guarantees the delivery of a job to the target, which ensures that important jobs are indeed started and if you’re sending the job to AppEngine or Pub/Sub, those services will also return a success code — or an error code, if things go awry. The company stresses that Cloud Scheduler also makes it easy to automate retries when things go wrong.
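The automatic-retry behavior Google describes can be sketched as exponential backoff around a job invocation. This is only an illustration: run_with_retries and flaky_job are hypothetical names, and a real Cloud Scheduler job configures retry counts and backoff declaratively rather than in code:

```python
import time

def run_with_retries(job, max_attempts=4, base_delay=0.01):
    """Retry a failing job with exponential backoff, as a scheduler would."""
    for attempt in range(max_attempts):
        try:
            return job()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error
            time.sleep(base_delay * (2 ** attempt))  # wait 1x, 2x, 4x, ...

calls = {"n": 0}
def flaky_job():
    """Simulated target that fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "success"

print(run_with_retries(flaky_job))  # -> "success" (on the third attempt)
```

The success/error codes returned by App Engine and Pub/Sub targets are what tell the scheduler whether a given attempt needs to be retried at all.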
Google is obviously not the first company to hit upon this concept. There are a few startups that also offer a similar service, and Google’s competitors like Microsoft also offer comparable tools.
Google provides developers with a free quota of three jobs per month. Additional jobs cost $0.10 per job per month.
Most animators would agree: making a cataclysmic explosion destroy a planet is easy, but human figures and delicate interactions are hard.
That’s why engineers from the Georgia Institute of Technology and Google Brain teamed up to build a cute little AI agent — an AI algorithm embodied in a simulated world — that learned to dress itself using realistic fabric textures and physics.
The AI agent takes the form of a wobbling, cartoonish little friend with an expressionless demeanor.
During its morning routine, our little buddy punches new armholes through its shirts, gets bopped around by perturbations, dislocates its shoulder, and has an automatic gown-enrober smoosh up against its face. What a day!
Beyond a fun video, this simulation shows that AI systems can learn to interact with the physical world, or at least a realistic simulation of it, all on their own.