Shared posts

14 Nov 21:30

Use Case For Augmented Reality In Design

by Suzanne Scacca

Augmented reality has been on marketers’ minds for years now — and there’s a good reason for it. Augmented reality (or AR) is a technology that layers computer-generated images on top of the real world. With the pervasiveness of the mobile device around the globe, the majority of consumers have instant access to AR-friendly devices. All they need is a smartphone connected to the Internet, a high-resolution screen, and a camera viewfinder. It’s then up to you as a marketer or developer to create digital animations to superimpose on top of their world.

This reality-bending technology is consistently named as one of the hot development and design trends of the year. But how many businesses and marketers are actually making use of it?

As with other cutting-edge technologies, many have been reluctant to adopt AR into their digital marketing strategy.

Part of it is due to the upfront cost of using and implementing AR. There’s also the learning curve to think about when it comes to designing new kinds of interactions for users. Hesitation may also come from marketers and designers because they’re unsure of how to use this technology.

Augmented reality has some really interesting use cases that you should start exploring for your mobile app. The following post will provide you with examples of what’s being done in the AR space now and hopefully inspire your own efforts to bring this game-changing tech to your mobile app in the near future.

Augmented Reality: A Game-Changer You Can’t Ignore

Unlike virtual reality, which requires users to purchase pricey headsets in order to be immersed in an altered experience, augmented reality is a more feasible option for developers and marketers. All your users need is a device with a camera that allows them to engage with the external world, instead of blocking it out entirely.

And that’s essentially the crux of why AR will be so important for mobile app companies.

This is a technology that enables mobile app users to view the world through your “filter.” You’re not asking them to get lost in another reality altogether. Instead, you want to merge their world with your own. And that’s something websites have largely been unable to accomplish, as most web experiences lack this level of interactivity.

Let’s take e-commerce websites, for example. Although e-commerce sales increase year after year, people still flock to brick-and-mortar stores in droves (especially for the holiday season). Why? Well, part of it has to do with the fact that they can get their hands on products, test things out and talk to people in real time as they ponder a purchase. Online, it’s a gamble.

As you can imagine, AR in a mobile app can change all that. Augmented reality allows for more meaningful engagements between your mobile app (and brand) and your user. That’s not all though. Augmented reality that connects to geolocation features could make users’ lives significantly easier and safer too. And there’s always the entertainment application of it.

If you’re struggling with retention rates for your app, developing a useful and interactive AR experience could be the key to winning more loyal users in the coming year.

Inspiring Examples Of Augmented Reality

To determine what kind of augmented reality makes the most sense for your website or app, look to examples of companies that have already adopted and succeeded in using this technology.

As Google suggests:

“Augmented reality will be a valuable addition to a lot of existing web pages. For example, it can help people learn on education sites and allow potential buyers to visualize objects in their home while shopping.”

But those aren’t the only applications of AR in mobile apps, and because that broader potential isn’t yet obvious, I think many mobile app developers and marketers have shied away from it thus far. There are some really interesting examples of this out there though, and I’d like to introduce you to them in the hopes it’ll inspire your own efforts in 2019 and beyond.

Social Media AR

For many of us, augmented reality is already part of our everyday lives, whether we’re the ones using it or we’re viewing content created by others using it. What am I talking about? Social media, of course.

There are three platforms, in particular, that make use of this technology right now.

Snapchat was the first:

Trying out a silly filter on Snapchat (Source: Snapchat)

Snapchat could have included a basic camera integration so that users could take and send photos and videos of themselves to others. But it’s taken it a step further with face mapping software that allows users to apply different “filters” to themselves. Unlike traditional filters which alter the gradients or saturation of a photo, however, these filters are often animated and move as the user moves.

Instagram is another social media platform that has adopted this tech:

Instagram filters go beyond making a face look cute. (Source: Instagram)

Instagram’s Stories allow users to apply augmented filters that “stick” to the face or screen. As with Snapchat, there are some filters that animate when users open their mouths, raise their eyebrows or make other movements with their faces.

One other social media channel that’s gotten into this — that isn’t really a social media platform at all — is Facebook’s Messenger service:

Users can have fun while sending photos or video chatting on Messenger. (Source: Messenger)

Seeing as how users have flocked to AR filters on Snapchat and Instagram, it makes sense that Facebook would want to get in on the game with its mobile property.

Use Case

Your mobile app doesn’t have to be a major social network in order to reap the benefits of image and video filters.

If your app provides a networking or communication component — in-app chat with other users, photo uploads to profiles and so on — you could easily adopt similar AR filters to make the experience more modern and memorable for your users.

Video Objects AR

It’s not just your users’ faces that can be mapped and altered through the use of augmented reality. Spaces can be mapped as well.

While I will go on to talk about pragmatic applications of space mapping and AR shortly, I do want to address another way in which it can be used.

Take a look at 3DBrush:

Adding 3D objects to video with 3DBrush. (Source: 3DBrush)

At first glance, it might appear to be just another mobile app that enables users to draw on their photos or videos. But what’s interesting about this is the 3D and “sticky” aspects of it. Users can draw shapes of all sizes, colors and complexities within a 3D space. Those elements then stick to the environment. No matter where the users’ cameras move, the objects hold in place.

LeoApp AR is another app that plays with space in a fun way:

LeoApp maps a flat surface for object placement. (Source: LeoApp AR)

As you can see here, I’m attempting to map this gorilla onto my desk, but any flat surface will do.

A gorilla dances on my desk, thanks to LeoApp AR. (Source: LeoApp AR)

I now have a dancing gorilla making moves all over my workspace. This isn’t the only kind of animation you can put into place, and it’s not the only size either. There are other holographic animations that can be sized to fit your actual physical space, for example if you wanted to chill out side-by-side with them or have them accompany you as you give a presentation.

Use Case

The examples I’ve presented above aren’t the full representation of what can be done with these mobile apps. While users could use these for social networking purposes (alongside other AR filters), I think an even better use of this would be to liven up professional video.

Video plays such a big part in marketing and will continue to do so in the future. It’s also something we can all readily do now with our smartphones; no special equipment is needed.

As such, I think that adding 3D messages or objects into a branded video might be a really cool use case for this technology. Rather than tailor your mobile app to consumers who are already enjoying the benefits of AR on social media, this could be marketed to businesses that want to shake things up for their brand.

Gaming AR

Thanks to all the hubbub surrounding Pokémon Go a few years back, gaming is one of the better known examples of augmented reality in mobile apps today.

My dog hides in the bushes from Pokémon. (Source: Pokémon Go)

The app is still alive and well and that may be because we’re not hearing as many stories about people becoming seriously injured (or even dying) from playing it anymore.

This is something that should be taken into close consideration before developing an AR mobile app. When you ask users to take part in augmented reality outside the safety of a confined space, there’s no way to control what they do afterwards. And that could do some serious damage to your brand if users get injured while playing or just generally wreak havoc out in public (like all those Pokémon Go players who were banned from restaurants).

This is probably why we now see AR used more often in games like AR Sports Basketball.

Users can map a basketball hoop onto any flat surface with AR Sports Basketball. (Source: AR Sports Basketball)

The app maps a flat surface — be it a smaller version on a desk or a larger version placed on your floor — and allows users to shoot hoops. It’s a great way to distract and entertain oneself or even challenge friends, family or colleagues to a game of HORSE.

Use Case

You could, of course, build an entire mobile app around an AR game as these two examples have shown.

You could also think of ways to gamify other mobile app experiences with AR. I imagine this could be used for something like a restaurant app. For example, a pizza restaurant wants to get more users to install the app and to order food from them. With a big sporting event like the Super Bowl coming up, a “Play” tab is added to the app, letting users throw pizzas down the field. It would certainly be a fun distraction while waiting for their real pizzas to arrive.

Bottom line: get creative with this. AR games aren’t just for gaming apps.

Home Improvement AR

As you’ve already seen, augmented reality enables us to map physical spaces and stick interactive objects to them. In the case of home improvement, this technology is being used to help consumers make purchasing decisions from the comfort of their home (or at their job or on their commute to work, etc.)

IKEA is one such brand that’s capitalized on this opportunity.

Place IKEA products around your home or office. (Source: IKEA)

To start, here is my attempt at shopping for a new desk for my workspace. I selected the product I was interested in and then I placed it into my office. Specifically, I put the accurately sized 3D desk projection in front of my current desk, so I could get a sense for how the two differ and how this new one would fit.

While product specifications online are all well and good, consumers still struggle with making purchases since they can’t truly envision how those products will (physically) fit into their lives. The IKEA Place app is aiming to change all of that.

Take a photo with the IKEA app and search for related products. (Source: IKEA)

The IKEA app is also improving the shopping experience with the feature above.

Users open their camera and point it at any object they find in the real world. Maybe they were impressed by a bookshelf they saw at a hotel they stayed in or they really liked some patio chairs their friends had. All they have to do is snap a picture and let IKEA pair them with products that match the visual description.

IKEA pairs app users with relevant product results. (Source: IKEA)

As you can see, IKEA has given me a number of options not just for the chair I was interested in, but also a full table set.

Use Case

If you have or want to build a mobile app that sells products to B2C or B2B customers, and those products need to fit well into their physical environments, think about what functionality like this could do for your mobile app sales. You could save the time spent scheduling on-site appointments or holding lengthy phone calls in which salespeople try to convince buyers that the products, equipment or furniture will fit. Instead, you let the consumers try it for themselves.

Self-Improvement AR

It’s not just the physical spaces of consumers that could use improvement. Your mobile app users want to better themselves as well. In the past, they’d either have to go somewhere in person to try on the new look or they’d have to gamble with an online purchase. Thanks to AR, that isn’t the case anymore.

L’Oreal has an app called Style My Hair:

Try out a new, realistic hair color with the L’Oreal app. (Source: Style My Hair)

In the past, these hair color tryouts looked really bad. You’d upload a photo of your face and the website would slap very fake-looking hair onto your head. It would give users an idea of how the color or style worked with their skin tone, eye shape and so on, but it wasn’t always spot-on, which made the experience quite unhelpful.

As you can see here, not only does this app replace my usually mousy-brown hair color with a cool new blond shade, but it stays with me as I turn my head around:

L’Oreal applies new hair color any which way users turn. (Source: Style My Hair)

Sephora is another beauty company that’s taking advantage of AR mapping technology.

Try on beauty products with the Sephora app. (Source: Sephora)

Here is an example of me feeling not so sure about the makeup palette I’ve chosen. But that’s the beauty of this app. Rather than force customers to buy a bunch of expensive makeup they think will look great or to try and figure out how to apply it on their own, this AR app does all the work.

Use Case

Anyone remember the movie The Craft? I totally felt like that using this app.

The Craft hair-changing clip definitely inspired this example. (Source: The Craft)

If your app sells self-improvement or beauty products, or simply advises users on next steps they should take, think about how AR could transform that experience. You want your users to be confident when making big changes — whether it be how they wear their makeup for date night or the next tattoo they put on their body. This could be what convinces them to take the leap.

Geo AR

Finally, I want to talk about how AR has already transformed, and is about to further transform, users’ experiences in the real world.

Now, I’ve already mentioned Pokémon Go and how it utilizes the GPS of a user’s mobile device. This is what enables players to chase those little critters anywhere they go: restaurants, stores, local parks, on vacation, etc.

But what if we look outside the box a bit? Geo-related AR doesn’t just help users discover things in their physical surroundings. It could simply be used as a way to improve the experience of walking about in the real world.

Think about the last time you traveled to a foreign destination. You may have used a translation guidebook to look up phrases you didn’t know. You might have also asked your voice assistant to translate something for you. But think about how great it would be if you didn’t have to do all that work to understand what’s right in front of you. A road sign. A menu. A magazine article.

The Google Translate app is attempting to bridge this divide for us:

Google Translate uses the camera to find foreign text. (Source: Google Translate)

In this example, I’ve scanned an English phrase I wrote out: “Where is the bathroom?” Once I selected the language I wanted to translate from and to, as well as indicated which text I wanted to focus on, Google Translate attempted to provide a translation:

Google Translate provides a translation of photographed text. (Source: Google Translate)

It’s not 100% accurate — which may be due to my sloppy handwriting — but it would certainly get the job done for users who need a quick way to translate text on the go.

Use Case

There are other mobile apps that are beginning to make use of this geo-related AR.

For instance, there’s one called Find My Car that I took for a test spin. I don’t think the technology is fully ready yet, as it couldn’t accurately “pin” my car’s location, but it’s heading in the right direction. In the future, I expect to see more directional apps — especially Google and Apple Maps — use AR to improve directional awareness and guidance for users.

Wrapping Up

There are challenges in using AR, that’s for sure. The cost of developing AR is one. Finding the perfect application of AR that’s unique to your brand and truly improves the mobile app user experience is another. There’s also the fact it requires users to download a mobile app, so there’s a lot of work to be done to motivate them to do so.

Gimmicks just won’t work — especially if you expect users to download your app and make use of it (remember: retention rates aren’t just about downloads). You have to make the augmented reality feature something that’s worth engaging with. The first place to start is with your data. As Jordan Thomson wrote:

“AR is a lot more dependent on customer activity than VR, which is far older technology and is perhaps most synonymous with gaming. Designers should make use of big data and analytics to understand their customers’ wants and needs.”

I’d also advise you to spend some time in the apps above. Get a sense for how the technology works and discover what makes it so appealing on a personal level. Compare it to your own mobile app’s goals and see if there’s a way to take AR from just being an idea you’re tossing around to a reality.

Smashing Editorial (ra, yk, il)

14 Nov 20:25

Intel announces the Neural Compute Stick 2

by Pierre Lecourt

It was in Beijing, at the Intel AI Devcon conference, that the brand chose to unveil this Neural Compute Stick 2, which is obviously not a consumer product. The little stick is as compact as ever and promises artificial intelligence and image recognition and analysis workloads thanks to the latest version of the chipmaker’s silicon.

On board the Neural Compute Stick 2 is the very latest Myriad X solution, born of the Movidius acquisition. It offers the same capabilities as the previous version, with the option of using several sticks in parallel to speed up computation, and it also makes prototyping easier inside all kinds of designs.

The most striking example is building drones that can analyze objects captured by one or more cameras in order to, for example, avoid obstacles, follow people or count items.

This kind of use is booming, and plenty of outlets can be imagined for image recognition systems. It’s easy to picture what such a setup could do for autonomous delivery services, robots dedicated to packaging and order picking, video surveillance or automated traffic enforcement, for example. Those are generally the first things that come to mind, but the possible uses are far more varied:

Imagine systems able to detect the presence of certain bacteria in water simply by analyzing samples under a microscope webcam, easily and without calling in a specialist, all of it autonomously, without relying on a remote server, just an algorithm trained locally.

Or systems able to alert a dermatologist to a skin anomaly that could develop into cancer from a simple visual examination,

Or a bot able to analyze images online and detect any child-abuse content in order to alert hosting providers and the authorities about the stored files…

Intel is keeping up its momentum: by offering a compact and financially accessible device, the brand lets many developers add this type of capability to all sorts of builds. The second step, of course, will be getting Myriad X chips into consumer and industrial products. That is probably where Intel will make a real profit, when a car maker, a camera manufacturer or a robot designer decides to build these chips into its new products at scale.

Intel Movidius Neural Compute Stick: offline intelligence

Source: Intel

Intel announces the Neural Compute Stick 2 © MiniMachines.net. 2018

08 Nov 21:49

Amazon Echo Show vs. Lenovo Smart Display

by Tyler Lacoma

The Amazon Echo Show and Lenovo Smart Display are two popular smart displays, but which one is right for you? Learn about the displays, speakers, capabilities, and other important features before you decide which to buy.

The post Amazon Echo Show vs. Lenovo Smart Display appeared first on Digital Trends.

06 Nov 20:52

Google launches Cloud Scheduler, a managed cron service

by Frederic Lardinois

Google Cloud is getting a managed cron service for running batch jobs. Cloud Scheduler, as the new service is called, provides all the functionality of the kind of standard command-line cron service you probably love to hate, but with the reliability and ease of use of running a managed service in the cloud.

The targets for Cloud Scheduler jobs can be any HTTP/S endpoints and Google’s own Cloud Pub/Sub topics and App Engine applications. Developers can manage these jobs through a UI in the Google Cloud Console, a command-line interface and through an API.
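
To make that concrete, here is a rough sketch of what creating jobs from the command line could look like with the gcloud CLI. The exact beta command group and flags are assumptions based on the announcement, and the job names, schedules and endpoints are just placeholders:

# Create a job that calls an HTTP endpoint every weekday at 9am.
gcloud beta scheduler jobs create http my-batch-job \
  --schedule="0 9 * * 1-5" \
  --uri="https://example.com/tasks/run" \
  --http-method=POST

# Create a job that publishes to a Pub/Sub topic every 10 minutes.
gcloud beta scheduler jobs create pubsub my-pubsub-job \
  --schedule="*/10 * * * *" \
  --topic=my-topic \
  --message-body="run"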

“Job schedulers like cron are a mainstay of any developer’s arsenal, helping run scheduled tasks and automating system maintenance,” Google product manager Vinod Ramachandran notes in today’s announcement. “But job schedulers have the same challenges as other traditional IT services: the need to manage the underlying infrastructure, operational overhead of manually restarting failed jobs and lack of visibility into a job’s status.”

As Ramachandran also notes, Cloud Scheduler, which is currently in beta, guarantees the delivery of a job to the target, which ensures that important jobs are indeed started, and if you’re sending the job to App Engine or Pub/Sub, those services will also return a success code — or an error code, if things go awry. The company stresses that Cloud Scheduler also makes it easy to automate retries when things go wrong.

Google is obviously not the first company to hit upon this concept. There are a few startups that also offer a similar service, and Google’s competitors like Microsoft also offer comparable tools.

Google provides developers with a free quota of three (3) jobs per month. Additional jobs cost $0.10 per month.

05 Nov 20:47

Having a Bad Day? An Adorable Video Shows AI Learning to Get Dressed

by Dan Robitzski

Rise and Shine

Most animators would agree: making a cataclysmic explosion destroy a planet is easy, but human figures and delicate interactions are hard.

That’s why engineers from The Georgia Institute of Technology and Google Brain teamed up to build a cute little AI agent — an AI algorithm embodied in a simulated world — that learned to dress itself using realistic fabric textures and physics.

Blessed

The AI agent takes the form of a wobbling, cartoonish little friend with an expressionless demeanor.

During its morning routine, our little buddy punches new armholes through its shirts, gets bopped around by perturbations, dislocates its shoulder, and has an automatic gown-enrober smoosh up against its face. What a day!

Great Job!

Beyond a fun video, this simulation shows that AI systems can learn to interact with the physical world, or at least a realistic simulation of it, all on their own.

This is thanks to reinforcement learning, a type of AI algorithm where the agent learns to accomplish tasks by seeking out programmed rewards.

In this case, our little friend was programmed to seek out the warm satisfaction of a job well done, and we’re very proud.

READ MORE: Using machine learning to teach robots to get dressed [BoingBoing]

More on cutesy tech: You Can’t Make This Stuff Up: Amazon Warehouse Robots Slipped On Popcorn Butter

30 Oct 12:04

Espaciel: the reflector that boosts the natural light in your rooms by 50%

by Claire L.

Whatever your home and lifestyle, and despite the revolution that was Thomas Edison’s invention of the light bulb in 1879, you have to admit that nothing is more pleasant than natural light from the sky. And yet inventions that bring daylight straight from the sky into our rooms are still very recent.

We can of course mention Velux, which designed the first roof-integrated windows in the 1960s, or Solatube, the Californian company that in the 1990s had the idea of a tube running through the roof to bring natural light into rooms (without any window at all). But like Velux, these installations are expensive and require a lot of construction work… until the arrival in 2013 of a French startup with a bright idea.

A reflector that increases the brightness of your rooms by 50% ☀️

The idea behind Espaciel (that’s its name) is quite simply to place light reflectors outside, in front of your windows, to amplify the daylight entering your home. To better understand how Espaciel’s reflectors work, here is a short explainer video.

As you’ll have understood, the whole genius of this invention lies in the fact that it requires no construction work to set up. But these reflectors have other advantages beyond being quick to install on your own. Made from durable materials and aluminum, they turn out to be 30% more reflective than a mirror while being 5 times lighter. The upshot? Your rooms get 50% more light; by way of comparison, it’s as if they had gained 2 extra floors!

These same reflectors work every day of the year because they use the brightness of the sky, not direct sunlight, to improve the comfort of your interior; the effects of natural light on body and mind no longer need to be proven. They don’t dazzle, they’re unbreakable… and to top it all off, they’re made in France!

A small revolution, then, for improving the brightness of the rooms in your apartment or house. The company has thought of every configuration: after starting with a reflector that attaches to the window, it has developed other products, such as the balcony and garden reflectors which, placed on a stand, can be set wherever you like to make the most of the light in your home. It’s also a smart idea for soaking up as much light as possible after the switch to winter time, to boost your vitality and fight the winter blues.

To learn everything about Espaciel and order a reflector (between 79 and 259 euros), visit espaciel.com.

Window reflector (Credits: Espaciel)

Balcony reflector (Credits: Espaciel)

Terrace reflector (Credits: Espaciel)

Imagined by: Espaciel
Source: espaciel.com

The post “Espaciel: the reflector that boosts the natural light in your rooms by 50%” originally appeared on the Creapills blog, the reference media outlet for creative ideas and marketing innovation.

30 Oct 10:27

Say ‘Hi’ to Nybble, an open-source robotic kitten

by John Biggs

If you’ve ever wanted to own your own open-source cat, this cute Indiegogo project might be for you. The project, based on something called the Open Cat, is a laser-cut cat that walks and “learns” and can even connect to a Raspberry Pi. Out of the box a complex motion controller allows the kitten to perform lifelike behaviors like balancing, walking and nuzzling.

“Nybble’s motion is driven by an Arduino compatible micro-controller. It stores instinctive ‘muscle memory’ to move around,” wrote its creator, Rongzhong Li. “An optional AI chip, such as Raspberry Pi can be mounted on top of Nybble’s back, to help Nybble with perception and decision. You can program in your favorite language, and direct Nybble walk around simply by sending short commands, such as ‘walk’ or ‘turn left.'”

The cat is surprisingly cute and the life-like movements make it look far more sophisticated than your average toy. You can get a single Nybble for $200 and the team aims to ship in April 2019. You also can just build your own cat for free if you have access to a laser cutter and a few other tools, but the kit itself includes a motion board and complete instructions, which makes the case for paying for a new Nybble pretty compelling. I, for one, welcome our robotic feline overlords.

24 Oct 22:01

Facebook confirms it’s building augmented reality glasses

by Josh Constine

“Yeah! Well of course we’re working on it,” Facebook’s head of augmented reality Ficus Kirkpatrick told me when I asked him at TechCrunch’s AR/VR event in LA if Facebook was building AR glasses. “We are building hardware products. We’re going forward on this . . . We want to see those glasses come into reality, and I think we want to play our part in helping to bring them there.”

This is the clearest confirmation we’ve received yet from Facebook about its plans for AR glasses. The product could be Facebook’s opportunity to own a mainstream computing device on which its software could run after a decade of being beholden to smartphones built, controlled and taxed by Apple and Google.

This month, Facebook launched its first self-branded gadget out of its Building 8 lab, the Portal smart display, and now it’s revving up hardware efforts. For AR, Kirkpatrick told me, “We have no product to announce right now. But we have a lot of very talented people doing really, really compelling cutting-edge research that we hope plays a part in the future of headsets.”

There’s a war brewing here. AR startups like Magic Leap and Thalmic Labs are starting to release their first headsets and glasses. Microsoft is considered a leader thanks to its early HoloLens product, while Google Glass is still being developed for the enterprise. And Apple has acquired AR hardware developers like Akonia Holographics and Vrvana to accelerate development of its own headsets.

Mark Zuckerberg said at F8 2017 that AR glasses were 5 to 7 years away

Technological progress and competition seems to have sped up Facebook’s timetable. Back in April 2017, CEO Mark Zuckerberg said, “We all know where we want this to get eventually, we want glasses,” but explained that “we do not have the science or technology today to build the AR glasses that we want. We may in five years, or seven years.” He explained that “We can’t build the AR product that we want today, so building VR is the path to getting to those AR glasses.” The company’s Oculus division had talked extensively about the potential of AR glasses, yet similarly characterized them as far off.

But a few months later, a Facebook patent application for AR glasses was spotted by Business Insider that detailed using “waveguide display with two-dimensional scanner” to project media onto the lenses. Cheddar’s Alex Heath reports that Facebook is working on Project Sequoia that uses projectors to display AR experiences on top of physical objects like a chess board on a table or a person’s likeness on something for teleconferencing. These indicate Facebook was moving past AR research.

Facebook AR glasses patent application

Last month, The Information spotted four Facebook job listings seeking engineers with experience building custom AR computer chips to join the Facebook Reality Lab (formerly known as Oculus research). And a week later, Oculus’ Chief Scientist Michael Abrash briefly mentioned amidst a half-hour technical keynote at the company’s VR conference that “No off the shelf display technology is good enough for AR, so we had no choice but to develop a new display system. And that system also has the potential to bring VR to a different level.”

But Kirkpatrick clarified that he sees Facebook’s AR efforts not just as a mixed reality feature of VR headsets. “I don’t think we converge to one single device . . . I don’t think we’re going to end up in a Ready Player One future where everyone is just hanging out in VR all the time,” he tells me. “I think we’re still going to have the lives that we have today where you stay at home and you have maybe an escapist, immersive experience or you use VR to transport yourself somewhere else. But I think those things like the people you connect with, the things you’re doing, the state of your apps and everything needs to be carried and portable on-the-go with you as well, and I think that’s going to look more like how we think about AR.”

Oculus Chief Scientist Michael Abrash makes predictions about the future of AR and VR at the Oculus Connect 5 conference

Oculus virtual reality headsets and Facebook augmented reality glasses could share an underlying software layer, though, which might speed up engineering efforts while making the interface more familiar for users. “I think that all this stuff will converge in some way maybe at the software level,” Kirkpatrick said.

The problem for Facebook AR is that it may run into the same privacy concerns that people had about putting a Portal camera inside their homes. While VR headsets generate a fictional world, AR must collect data about your real-world surroundings. That could raise fears about Facebook surveilling not just our homes but everything we do, and using that data to power ad targeting and content recommendations. This brand tax haunts Facebook’s every move.

Startups with a cleaner slate like Magic Leap and giants with a better track record on privacy like Apple could have an easier time getting users to put a camera on their heads. Facebook would likely need a best-in-class gadget that does much that others can’t in order to convince people it deserves to augment their reality.

You can watch our full interview with Facebook’s director of camera and head of augmented reality engineering Ficus Kirkpatrick from our TechCrunch Sessions: AR/VR event in LA:

17 Oct 19:19

Yes We Hack listed in Gartner’s Market Guide

by Damien Bancal

As good news goes, this is good news! Our friends at Yes We Hack have just made it into Gartner’s famous “Market Guide.” Gartner Inc. is an American consulting and research firm specializing in advanced technology. The American firm has referenced…

The post Yes We Hack listed in Gartner’s Market Guide appeared first on ZATAZ.

17 Oct 18:54

Introducing GitHub Actions

by Sarah Drasner

It’s a common situation: you create a site and it’s ready to go. It’s all on GitHub. But you’re not really done. You need to set up deployment. You need to set up a process that runs your tests for you, so you're not manually running commands all the time. Ideally, every time you push to master, everything runs for you: the tests, the deployment... all in one place.

Previously, there were only a few options that could help with that. You could piece together other services, set them up, and integrate them with GitHub. You could also write post-commit hooks, which help as well.

But now, enter GitHub Actions.

Actions are small bits of code that can be run off of various GitHub events, the most common of which is pushing to master. But it's not necessarily limited to that. They’re all directly integrated with GitHub, meaning you no longer need a middleware service or have to write a solution yourself. And they already have many options for you to choose from. For example, you can publish straight to npm and deploy to a variety of cloud services (Azure, AWS, Google Cloud, Zeit... you name it), to name just a couple of possibilities.

But actions are more than deploy and publish. That’s what’s so cool about them. They’re containers all the way down, so you could quite literally do pretty much anything — the possibilities are endless! You could use them to minify and concatenate CSS and JavaScript, send you information when people create issues in your repo, and more... the sky's the limit.

You also don’t need to configure/create the containers yourself, either. Actions let you point to someone else’s repo, an existing Dockerfile, or a path, and the action will behave accordingly. This is a whole new can of worms for open source possibilities, and ecosystems.

Setting up your first action

There are two ways you can set up an action: through the workflow GUI or by writing and committing the file by hand. We’ll start with the GUI because it’s so easy to understand, then move on to writing it by hand because that offers the most control.

First, we’ll sign up for the beta by clicking on the big blue button here. It might take a little bit for them to bring you into the beta, so hang tight.

The GitHub Actions beta site, with a large blue button to join the beta.

Now let’s create a repo. I made a small demo repo with a tiny Node.js sample site. I can already notice that I have a new tab on my repo, called Actions:

A screenshot of the sample repo showing the Actions tab in the menu.

If I click on the Actions tab, this screen shows:

The initial Actions screen, prompting you to create a new workflow.

I click "Create a New Workflow," and then I’m shown the screen below. This tells me a few things. First, I’m creating a hidden folder called .github, and within it, I’m creating a file called main.workflow. If you were to create a workflow from scratch (which we’ll get into), you’d need to do the same.

The new workflow editor, creating the .github/main.workflow file.

Now, we see in this GUI that we’re kicking off a new workflow. If we draw a line from this to our first action, a sidebar comes up with a ton of options.

The sidebar showing all of the action options.

There are actions in here for npm, Filters, Google Cloud, Azure, Zeit, AWS, Docker Tags, Docker Registry, and Heroku. As mentioned earlier, you’re not limited to these options — it's capable of so much more!

I work for Azure, so I’ll use that as an example, but each action provides you with the same options, which we'll walk through together.

The sidebar showing the options for the Azure action.

At the top where you see the heading "GitHub Action for Azure," there’s a "View source" link. That will take you directly to the repo that's used to run this action. This is really nice because you can also submit a pull request to improve any of these, and have the flexibility to change what action you’re using if you’d like, with the "uses" option in the Actions panel.

Here's a rundown of the options we're provided:

  • Label: This is the name of the Action, as you’d assume. This name is referenced by the Workflow in the resolves array — that is what's creating the connection between them. This piece is abstracted away for you in the GUI, but you'll see in the next section that, if you're working in code, you'll need to keep the references the same to have the chaining work.
  • Runs: This allows you to override the entry point. This is great because if you’d like to run something like git in a container, you can!
  • Args: This is what you’d expect — it allows you to pass arguments to the container.
  • secrets and env: These are both really important because this is how you’ll use passwords and protect data without committing them directly to the repo. If you’re using something that needs one token to deploy, you’d probably use a secret here to pass that in.

Many of these actions have readmes that tell you what you need. The setup for "secrets" and "env" usually looks something like this:

action "deploy" {
  uses = ...
  secrets = [
    "THIS_IS_WHAT_YOU_NEED_TO_NAME_THE_SECRET",
  ]
}

You can also string multiple actions together in this GUI. It's very easy to make things work one action at a time, or in parallel. This means you can have nicely running async code simply by chaining things together in the interface.
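
As a rough sketch of what that chaining looks like once it lands in the main.workflow file, the snippet below strings three actions together: sequential steps point at each other with needs, while independent actions can run in parallel. The action names are placeholders and the actions/npm reference is an assumption, so treat this as illustration rather than a copy-paste recipe.

workflow "Build, test and deploy" {
  on = "push"
  resolves = ["deploy"]
}

action "install" {
  uses = "actions/npm@master"
  args = "install"
}

# "test" waits for "install" to finish before it runs.
action "test" {
  uses = "actions/npm@master"
  needs = ["install"]
  args = "test"
}

# "deploy" only runs once "test" has succeeded.
action "deploy" {
  uses = "actions/someaction"
  needs = ["test"]
  secrets = ["TOKEN"]
}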

Writing an action in code

So, what if none of the actions shown here are quite what we need? Luckily, writing actions is really pretty fun! I wrote an action to deploy a Node.js web app to Azure because that will let me deploy any time I push to the repo's master branch. This was super fun because now I can reuse it for the rest of my web apps. Happy Sarah!

Create the app services account

If you’re using other services, this part will change, but you do need an existing service in whatever platform you’re using in order to deploy there.

First you'll need to get your free Azure account. I like using the Azure CLI, so if you don’t already have that installed, you’d run:

brew update && brew install azure-cli

Then, we’ll log in to Azure by running:

az login

Now, we'll create a Service Principal by running:

az ad sp create-for-rbac --name ServicePrincipalName --password PASSWORD

It will pass us this bit of output, which we'll use in creating our action:

{
  "appId": "APP_ID",
  "displayName": "ServicePrincipalName",
  "name": "http://ServicePrincipalName",
  "password": ...,
  "tenant": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
}

What's in an action?

Here is a base example of a workflow and an action so that you can see the bones of what it’s made of:

workflow "Name of Workflow" {
  on = "push"
  resolves = ["deploy"]
}

action "deploy" {
  uses = "actions/someaction"
  secrets = [
    "TOKEN",
  ]
}

We can see that we kick off the workflow, and specify that we want it to run on push (on = "push"). There are many other options you can use as well; the full list is here.

The resolves line beneath it, resolves = ["deploy"], is an array of the actions that will be chained following the workflow. This doesn't specify the order, but rather is a full list of everything. You can see that the action that follows is called "deploy" — these strings need to match; that's how they reference one another.

Next, we'll look at that action block. The first uses line is really interesting: right out of the gate, you can use any of the predefined actions we talked about earlier (here's a list of all of them). But you can also use another person's repo, or even files hosted on the Docker site. For example, if we wanted to execute git inside a container, we would use this one. I could do so with: uses = "docker://alpine/git:latest". (Shout out to Matt Colyer for pointing me in the right direction for the URL.)
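
For instance, a tiny hypothetical action that runs git inside that container might look something like the block below. The label and arguments are made up for illustration, and the args assume the alpine/git image's entry point is git itself, so whatever you pass is handed straight to git:

action "Show latest commit" {
  uses = "docker://alpine/git:latest"
  args = "log -1"
}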

We may need some secrets or environment variables defined here and we would use them like this:

action "Deploy Webapp" {
  uses = ...
  args = "run some code here and use a $ENV_VARIABLE_NAME"
  secrets = ["SECRET_NAME"]
  env = {
    ENV_VARIABLE_NAME = "myEnvVariable"
  }
}

Creating a custom action

What we're going to do with our custom action is take the commands we usually run to deploy a web app to Azure, and write them in such a way that we can just pass in a few values and the action executes it all for us. The files look more complicated than they are; really, we're taking that first base Azure action you saw in the GUI and building on top of it.

In entrypoint.sh:

#!/bin/sh

set -e

echo "Login"
az login --service-principal --username "${SERVICE_PRINCIPAL}" --password "${SERVICE_PASS}" --tenant "${TENANT_ID}"

echo "Creating resource group ${APPID}-group"
az group create -n ${APPID}-group -l westcentralus

echo "Creating app service plan ${APPID}-plan"
az appservice plan create -g ${APPID}-group -n ${APPID}-plan --sku FREE

echo "Creating webapp ${APPID}"
az webapp create -g ${APPID}-group -p ${APPID}-plan -n ${APPID} --deployment-local-git

echo "Getting username/password for deployment"
DEPLOYUSER=`az webapp deployment list-publishing-profiles -n ${APPID} -g ${APPID}-group --query '[0].userName' -o tsv`
DEPLOYPASS=`az webapp deployment list-publishing-profiles -n ${APPID} -g ${APPID}-group --query '[0].userPWD' -o tsv`

git remote add azure https://${DEPLOYUSER}:${DEPLOYPASS}@${APPID}.scm.azurewebsites.net/${APPID}.git

git push azure master

A couple of interesting things to note about this file:

  • set -e in a shell script will make sure that if anything blows up the rest of the file doesn't keep evaluating.
  • The lines following "Getting username/password" look a little tricky — really what they're doing is extracting the username and password from Azure's publishing profiles. We can then use it for the following line of code where we add the remote.
  • You might also note that in those lines we passed in -o tsv; this is something we did to format the output so we could pass it directly into an environment variable, as tsv strips out excess headers, etc.
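
One piece not shown above: because custom actions are containers, the ./.github/azdeploy directory also needs a Dockerfile that wraps entrypoint.sh. As a rough sketch (the base image is an assumption; any image that ships the Azure CLI and git would do), it could look something like this:

# Sketch of a Dockerfile to sit next to entrypoint.sh in ./.github/azdeploy.
# The base image is an assumption; entrypoint.sh needs the az CLI and git.
FROM mcr.microsoft.com/azure-cli:latest

COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

ENTRYPOINT ["/entrypoint.sh"]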

Now we can work on our main.workflow file!

workflow "New workflow" {
  on = "push"
  resolves = ["Deploy to Azure"]
}

action "Deploy to Azure" {
  uses = "./.github/azdeploy"
  secrets = ["SERVICE_PASS"]
  env = {
    SERVICE_PRINCIPAL="http://sdrasApp",
    TENANT_ID="72f988bf-86f1-41af-91ab-2d7cd011db47",
    APPID="sdrasMoonshine"
  }
}

The workflow piece should look familiar to you — it's kicking off on push and resolves to the action, called "Deploy to Azure."

uses is pointing to a directory within the repo, which is where we housed the other file. We need to add a secret so we can store our password for the app. We called this SERVICE_PASS, and we'll configure it by going here and adding it, in settings:

Adding a secret in the repo settings.

Finally, we have all of the environment variables we'll need to run the commands. We got all of these from the earlier section where we created our App Services Account. The tenant from earlier becomes TENANT_ID, name becomes the SERVICE_PRINCIPAL, and the APPID is actually whatever you'd like to name it :)

You can use this action too! All of the code is open source at this repo. Just bear in mind that since we created the main.workflow manually, you will also have to edit the env variables manually within the main.workflow file — once you stop using the GUI, it doesn't work the same way anymore.

Here you can see everything deploying nicely, turning green, and we have our wonderful "Hello World" app that redeploys whenever we push to master 🎉

The successful workflow showing green, and our wonderful "Hello World" app.

Game changing

GitHub Actions aren't only about websites, though you can see how handy they are for them. It's a whole new way of thinking about how we deal with infrastructure, events, and even hosting. Consider Docker in this model.

Normally when you create a Dockerfile, you would have to write the Dockerfile, use Docker to build the image, and then push the image up somewhere so that it’s hosted for other people to download. In this paradigm, you can point it at a git repo with an existing Docker file in it, or something that's hosted on Docker directly.

You also don't need to host the image anywhere as GitHub will build it for you on the fly. This keeps everything within the GitHub ecosystem, which is huge for open source, and allows for forking and sharing so much more readily. You can also put the Dockerfile directly in your action which means you don’t have to maintain a separate repo for those Dockerfiles.

All in all, it's pretty exciting. Partially because of the flexibility: on the one hand you can choose to have a lot of abstraction and create the workflow you need with a GUI and existing action, and on the other you can write the code yourself, building and fine-tuning anything you want within a container, and even chain multiple reusable custom actions together. All in the same place you're hosting your code.

The post Introducing GitHub Actions appeared first on CSS-Tricks.

11 Oct 19:35

We’re DOOMED: Boston Dynamics’s Atlas Robot Starts Doing Parkour [Video]

by Geeks are Sexy

From Boston Dynamics:

Atlas does parkour. The control software uses the whole body including legs, arms and torso, to marshal the energy and strength for jumping over the log and leaping up the steps without breaking its pace. (Step height 40 cm.) Atlas uses computer vision to locate itself with respect to visible markers on the approach to hit the terrain accurately.

Scary, isn’t it?

[BostonDynamics]

The post We’re DOOMED: Boston Dynamics’s Atlas Robot Starts Doing Parkour [Video] appeared first on Geeks are Sexy Technology News.

09 Oct 22:19

The Latest Prostheses Take Orders Directly From Your Nerves

by Jon Christian

What Nerve

Disability rights advocate Nicole Kelly was born without her lower right arm, but using a cutting-edge prosthesis she got last year, she can now grind pepper, play cards, and open beers — just by thinking about the action.

Kelly’s is just one tale from a riveting new Wired story about the steady improvements in prostheses that take orders directly from users’ nerves. The big step forward: software that can make sense of the complex signals from a specific patient’s nervous system. We’ve written about similar systems before, but this report is a striking example of how the tech is already changing users’ lives.

Bear Arms

Wired talked to people using and testing prostheses containing control systems developed by Coapt and Infinite Biomedical Technologies (IBT). These systems pick up nerve signals via electrodes positioned on a user’s upper arm. The user then trains an algorithm to translate their body’s signals into natural motions.

Kelly’s prosthesis, which uses hardware and software made by Coapt, even has a “reset” button that lets her reboot the algorithm if it’s acting up and retrain it, a process that she says takes her just two minutes after about a year of practice.

C.R.E.A.M.

One problem is that the tech is still very expensive. Coapt’s system costs between $10,000 and $15,000, its CEO told Wired. Infinite’s site doesn’t include a price for its setup, which it says will go on sale later this month.

But then again, it’s hard to put a price tag on the satisfaction of cracking open your own beer.

READ MORE: BIONIC LIMBS ‘LEARN’ TO OPEN A BEER [Wired]

More on bionic limbs: A Neural Network, Connected to a Human Brain, Could Mean More Advanced Prosthetics

07 Oct 19:47

Data protection: GDPR will cost Europe dearly, says the head of CES Las Vegas

by La rédaction

Gary Shapiro believes the European Union's obsession with data protection will harm the digital economy of the Old Continent.

The post Data protection: GDPR will cost Europe dearly, says the head of CES Las Vegas appeared first on FrenchWeb.fr.

02 Oct 21:32

LEGO Forma: LEGO for adults, at last!

by Morgan Fromentin

Of course LEGO is for adults too. Let's be clear about that! But today, LEGO is officially acknowledging that adults can have as much fun as kids, and here is a brand-new range made especially for them.

Read more

02 Oct 07:56

He turns different dog breeds into “medieval-fantasy” characters

by Mélissa N.

There are 342 different dog breeds in the world, and this diversity has clearly inspired the artist we're talking about today, who decided to pay tribute to our canine friends.

Russian artist Nikita Orlov had the wonderfully offbeat idea of imagining different dog breeds as characters from a medieval-fantasy universe. Drawing inspiration from the centaur (a mythological creature, half man and half horse), he reinterpreted well-known medieval figures (the knight, the sorcerer, the guard, the samurai…) using breeds such as the corgi, the husky and the poodle. Naturally, he plays on the look and character of each breed to pair it with the figure that fits best.

The artist explains:

I thought one day that it would be fun to turn a dog into a centaur. I was sure someone had already done it, but searching on Google, I couldn't find any. So I decided to create my own dog-centaurs by imagining a fantasy world where they could exist.

Note that the artist has also turned them into figurines, which he sells in his online shop. Discover his surprising medieval dogs below, and head over to his Instagram to learn more about his art.

Corgi guard

Siberian Husky barbarian

Poodle sorcerer

Chow Chow innkeeper

Dobermann knight

Shiba samurai

Chihuahua merchant

Greyhound guard

Saint Bernard monk

Spaniel admiral

Dachshund inquisitor

Bulldog gladiator

Akita bard

Great Dane magician

Pug crossbowman

German Shepherd cavalryman

Bonus #1: The good boy

Bonus #2: Undead wild dog

(All illustrations, credits: Nikita Orlov)

Imagined by: Nikita Orlov
Source: boredpanda.com

The post “He turns different dog breeds into ‘medieval-fantasy’ characters” originally appeared on the Creapills blog, the reference media outlet for creative ideas and marketing innovation.

02 Oct 07:52

There’s a secret text adventure game hidden inside Google — here’s how to play it

by Greg Kumparak

Google loves a good Easter egg. There are dozens upon dozens of different eggs hidden across Google’s product portfolio, from using Google Search to flip a coin to exploring the Doctor’s TARDIS in Google Maps.

Think you’ve seen them all? A seemingly new egg has just been discovered: a playable text adventure game, hidden right within Google Search.

Here’s how to play it:

  1. Open Google.com in Chrome. (It might work in other browsers, but it was a bit glitchy when I tried it elsewhere.)
  2. Search for “text adventure” without the quotes.
  3. Open the JavaScript developer console by pushing Command+Option+J on a Mac, or Ctrl+Shift+J on Windows.
  4. You should see a prompt asking if you “Would like to play a game?” Type yes and push enter, and the game will start!

From here, you’ll be playing as the “big blue G” of the Google logo, searching for your fellow alphabetical pals one command at a time. Don’t expect to dump days into this one — it’s no Zork. But for a lil’ gag that managed to stay hidden for who-knows-how-long within one of the world’s most trafficked websites, it’s pretty neat.

(Shout out to redditor attempt_number_1 for being the first to spot this.)

24 Sep 20:52

An Etch A Sketch controlled by a Raspberry Pi

by Pierre Lecourt

Le principe d’une ardoise magique est assez simple, une poudre magnétique colle à une vitre la rendant opaque, en manipulant de boutons, l’un pour l’axe horizontal, l’autre pour l’axe vertical, on peut dessiner sur cette surface en enlevant cette poudre.

2018-09-24 18_22_19-minimachines.net

Deux axes pilotés par des boutons rotatifs, voilà une interface parfaite pour être pilotée par des moteurs pas à pas. Il suffit de donner des coordonnées d’origine à ces boutons, là où la tête qui écrit sur l’ardoise magique est dans un coin de la zone visible. Puis de vérifier le nombre de tours de bouton nécessaires pour atteindre le coin à la diagonale opposée. 

By then dividing the width and the height into units of measurement so that a displacement scale can be encoded, you can ask a Raspberry Pi board to drive motors that move each axis forward or backward. All that remains is to feed the board a simplified image, translating its strokes into coordinates on the two axes, and watch the Etch A Sketch come alive and draw the chosen image for you.
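To make that concrete, here is a minimal Python sketch of the two-axis control loop, and not the creator's actual code: it assumes A4988-style step/dir stepper drivers wired to the Pi, and the BCM pin numbers and the STEPS_PER_UNIT calibration value are placeholders you would measure on the real device.

```python
# Minimal sketch (not the original build's code): drive the two Etch A Sketch
# knobs with stepper motors from a Raspberry Pi and move the drawing head to
# absolute (x, y) positions expressed in drawing units.
import time
import RPi.GPIO as GPIO

X_STEP, X_DIR = 17, 27   # assumed BCM pins for the horizontal knob's driver
Y_STEP, Y_DIR = 23, 24   # assumed BCM pins for the vertical knob's driver
STEPS_PER_UNIT = 12      # motor steps per drawing unit (to calibrate)
PULSE = 0.001            # seconds between step pulses

GPIO.setmode(GPIO.BCM)
for pin in (X_STEP, X_DIR, Y_STEP, Y_DIR):
    GPIO.setup(pin, GPIO.OUT)

pos = [0, 0]  # current head position, in drawing units

def move_axis(step_pin, dir_pin, delta):
    """Advance one axis by `delta` drawing units (sign sets direction)."""
    GPIO.output(dir_pin, GPIO.HIGH if delta >= 0 else GPIO.LOW)
    for _ in range(abs(delta) * STEPS_PER_UNIT):
        GPIO.output(step_pin, GPIO.HIGH)
        time.sleep(PULSE)
        GPIO.output(step_pin, GPIO.LOW)
        time.sleep(PULSE)

def goto(x, y):
    """Move the head to absolute position (x, y), one axis at a time."""
    move_axis(X_STEP, X_DIR, x - pos[0])
    move_axis(Y_STEP, Y_DIR, y - pos[1])
    pos[0], pos[1] = x, y

# A drawing is just a list of points extracted from a simplified image.
for point in [(0, 0), (100, 0), (100, 60), (0, 60), (0, 0)]:
    goto(*point)

GPIO.cleanup()
```

Moving one axis at a time mirrors how the two knobs behave; drawing smooth diagonals would mean interleaving the two axes' steps, which is exactly the path-planning job described next.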


This is exactly how CNC machines work, the machines that cut wood, metal or plastic with a milling bit or a laser. The Raspberry Pi takes care of determining a path that links the identified points of the drawing so as to reproduce it as faithfully as possible.

As usual, all the parts needed for the build, the required code and the full set of instructions are available on the creator's page for this truly magic slate. While I have doubts about the usefulness of the device as such (although you could imagine alternative uses, such as displaying messages or other content on the slate), it is a great prop for explaining how many modern machines work: printers, CNC mills or 3D printers, for example.

An Etch A Sketch driven by a Raspberry Pi © MiniMachines.net. 2017

19 Sep 20:33

Curve-Fitting

Cauchy-Lorentz: "Something alarmingly mathematical is happening, and you should probably pause to Google my name and check what field I originally worked in."
12 Sep 06:16

Dropbox may be adding an e-signature feature, user survey indicates

by Sarah Perez

A recent user survey sent out by Dropbox confirms the company is considering the addition of an electronic signature feature to its Dropbox Professional product, which it refers to simply as “E-Signature from Dropbox.” The point of the survey is to solicit feedback about how likely users are to use such a product, how often, and if they believe it would add value to the Dropbox experience, among other things.

While a survey alone doesn’t confirm the feature is in the works, it does indicate how Dropbox is thinking about its professional product.

According to the company’s description of E-Signature, the feature would offer “a simple, intuitive electronic signature experience for you and your clients” where documents could be sent to others to sign in “just a few clicks.”

The clients also wouldn’t have to be Dropbox users to sign, the survey notes. And the product would offer updates on every step of the signature workflow, including notifications and alerts about the document being opened, whether the client had questions, and when the document was signed. After the signed document is returned, the user would receive the executed copy saved right in their Dropbox account for easy access, the company says.

In addition to soliciting general feedback about the product, Dropbox also asked survey respondents about their usage of other e-signature brands, like Adobe e-Sign, DocuSign, HelloSign, and PandaDoc, as well as their usage of other, more traditional methods, like in-person signing and documents sent by mail.

Given the numerous choices on the market today, it’s unclear if Dropbox will choose to move forward and launch such a product. However, if it did, the benefit of having its own E-Signature service would be its ability to be more tightly integrated into Dropbox’s overall product experience. It could also push more business users to upgrade from a basic consumer account to the Professional tier.

This kind of direct integration would make sense in the context of Dropbox’s business workflows. If, for instance, a company is working on a contract workflow, being able to move to the signature phase without changing context (or to share with a user who doesn’t use Dropbox) could add tremendous value over and above simply storing the document.

Companies like Dropbox have been looking for ways to move beyond pure storage to give customers the ability to collaborate and share that content, particularly without forcing them to leave the application to complete a job. This ability to do work without task switching is something that Dropbox has been working on with Dropbox Paper.

While it remains to be seen how Dropbox would implement such a solution, it might make more sense to partner with existing vendors or buy a smaller player than to build such functionality from scratch, although it's not clear from a simple survey what the company's ultimate goal would be at this point.

Dropbox has not yet responded to requests for comment.

06 Sep 16:29

BMW launches a personal voice assistant for its cars

by Frederic Lardinois

At TechCrunch Disrupt SF 2018, BMW today premiered its digital personal assistant for its cars, the aptly named BMW Intelligent Personal Assistant. But you won’t have to say “Hey, BMW Intelligent Personal Assistant” to wake it up. You can give it any name you want.

The announcement comes only a few weeks after BMW also launched its integration with Amazon’s Alexa, but it’s worth stressing that these are complementary technologies. BMW’s own assistant is all about your car, while its partnerships with Amazon and Microsoft enable other functions that aren’t directly related to your driving experience.

“BMW’s Personal Assistant gets to know you over time with each of your voice commands and by using your car,” BMW’s senior vice president Digital Products and Services, Dieter May, said. “It gets better and better every single day.”

Sticking with the precedents of Microsoft’s, Google’s and Amazon’s assistants, the voice of BMW’s assistant is female (though BMW often uses male names and pronouns in its press materials). Over time, it’ll surely get more voices.

So what can the BMW assistant do? Once you are in a compatible car, you’ll be able to control all of the standard in-car features by voice. Think navigation and climate control (“Hey John, I’m cold”), or check the tire pressure, oil level and other engine settings.

You also can have some more casual conversations (“Hey Charlie, what’s the meaning of life?”), but what’s maybe more important is that the assistant will continuously learn more about you. Right now, the assistant can remember your preferred settings, but over time, it’ll learn more and even proactively suggest changes. “For example, driving outside the city at night, the personal assistant could suggest you the BMW High Beam Assist,” May noted.

In addition, you’ll also be able to use the assistant to learn more about your car’s features, something that’s getting increasingly hard as cars become computers on wheels with ever-increasing complexity.

BMW built the assistant on top of Microsoft’s Azure cloud and conversational technologies. Azure has long been BMW’s preferred public cloud and the two companies have had a close relationship for years now. BMW has, after all, also integrated some support for accessing Office 365 files and using Skype for Business in its cars, with support for Cortana likely coming soon, too.

That all sounds a bit confusing, though. Why have three assistants in the car at all? Juggling “Hey Alexa,” “Hey Charlie” and “Hey Cortana” is bound to get muddled. But BMW argues that each one has a specialty: shopping for Alexa, getting work done for Cortana, and the car itself for BMW’s own assistant. And if everything else fails, BMW’s existing concierge service is still there and lets you talk to a human.

The assistant feature will be available in a basic version with support for 23 languages and markets, starting March 2019. In the U.S., Germany, U.K., Italy, France, Spain, Switzerland, Austria, Brazil and Japan, the service will offer additional features like weather search, point-of-interest search and access to music in March 2019. In those markets, the assistant will also feature a more natural voice. In China, this expanded version will go live a bit later and is currently scheduled for May 2019. In those markets, it’ll roll out to cars that support the BMW Operating System 7.0 as part of the company’s Live Cockpit Professional program.

If you order a BMW 3 Series, starting in November, the assistant will be available to you right away and included for the first three years of your ownership. For new X5, Z4 and 8 Series models, BMW Assistant support will arrive in the form of an over-the-air software upgrade starting in March 2019.

04 Sep 21:02

IFA 2018 – Deutsche Telekom unveils its personal assistant Magenta

by Valentine De Brye
© Deutsche Telekom

As the Berlin trade show draws to a close, German carrier Deutsche Telekom has unveiled its first home assistant, dubbed "Magenta", to the general public. It aims to compete with the heavyweights of the sector, namely Google and Amazon and their respective Home and Echo assistants. The device looks more or less...

31 Aug 14:10

At IFA, voice assistants aim to make themselves indispensable

by La rédaction

As the showcase of choice for this artificial intelligence software, more than 100 million smart speakers will be sold by the end of 2018.

The post At IFA, voice assistants aim to make themselves indispensable appeared first on FrenchWeb.fr.

31 Aug 14:09

Huawei’s new Kirin 980 chip is so fast, it can probably slow down time

by Andy Boxall

Huawei has announced the Kirin 980 mobile processor, which promises a considerable increase in speed and efficiency over not only the old Kirin 970, but the Qualcomm Snapdragon 845 as well.

The post Huawei’s new Kirin 980 chip is so fast, it can probably slow down time appeared first on Digital Trends.

31 Aug 06:10

The Google Assistant is now bilingual 

by Frederic Lardinois

The Google Assistant just got more useful for multilingual families. Starting today, you’ll be able to set up two languages in the Google Home app and the Assistant on your phone and Google Home will then happily react to your commands in both English and Spanish, for example.

Today’s announcement doesn’t exactly come as a surprise, given that Google announced at its I/O developer conference earlier this year that it was working on this feature. It’s nice to see that this year, Google is rolling out its I/O announcements well before next year’s event. That hasn’t always been the case in the past.

Currently, the Assistant is only bilingual and it still has a few languages to learn. But for the time being, you’ll be able to set up any language pair that includes English, German, French, Spanish, Italian and Japanese. More pairs are coming in the future and Google also says it is working on trilingual support, too.

Google tells me this feature will work with all Assistant surfaces that support the languages you have selected. That’s basically all phones and smart speakers with the Assistant, but not the new smart displays, as they only support English right now.

While this may sound like an easy feature to implement, Google notes this was a multi-year effort. To build a system like this, you have to be able to identify multiple languages, understand them and then make sure you present the right experience to the user. And you have to do all of this within a few seconds.

Google says its language identification model (LangID) can now distinguish between 2,000 language pairs. With that in place, the company’s researchers then had to build a system that could turn spoken queries into actionable results in all supported languages. “When the user stops speaking, the model has not only determined what language was being spoken, but also what was said,” Google’s VP Johan Schalkwyk and Google Speech engineer Lopez Moreno write in today’s announcement. “Of course, this process requires a sophisticated architecture that comes with an increased processing cost and the possibility of introducing unnecessary latency.”
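As a purely conceptual sketch of that flow, and not Google's actual architecture, the snippet below transcribes the same audio under each configured language in parallel and keeps the result with the higher language-identification score; `recognize` and `language_id_score` are hypothetical placeholders standing in for a real ASR engine and LangID model.

```python
# Conceptual sketch only, not Google's implementation: serve a bilingual
# query by transcribing the audio in both configured languages in parallel
# and keeping the transcript whose language-ID score is higher.
from concurrent.futures import ThreadPoolExecutor

CONFIGURED_LANGS = ["en-US", "es-ES"]  # the user's two chosen languages

def recognize(audio: bytes, lang: str) -> str:
    # Placeholder: a real ASR engine would transcribe the audio here.
    return f"<transcript assuming {lang}>"

def language_id_score(audio: bytes, lang: str) -> float:
    # Placeholder: a real LangID model would score the audio here.
    return 0.9 if lang == "en-US" else 0.4

def handle_query(audio: bytes):
    # Run one recognizer per language concurrently to keep latency low.
    with ThreadPoolExecutor(max_workers=len(CONFIGURED_LANGS)) as pool:
        transcripts = {
            lang: pool.submit(recognize, audio, lang)
            for lang in CONFIGURED_LANGS
        }
    scores = {lang: language_id_score(audio, lang) for lang in CONFIGURED_LANGS}
    best = max(scores, key=scores.get)       # most likely language
    return best, transcripts[best].result()  # and its transcript

print(handle_query(b"\x00\x01"))  # dummy audio bytes for illustration
```

Running the recognizers concurrently is what keeps the second language from adding much delay, which is the latency trade-off the announcement alludes to.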

If you are in Germany, France or the U.K., you’ll now also be able to use the bilingual assistant on a Google Home Max. That high-end version of the Google Home family is going on sale in those countries today.

In addition, Google also today announced that a number of new devices will soon support the Assistant, including the tado° thermostats, a number of new security and smart home hubs (though not, of course, Amazon’s own Ring Alarm), smart bulbs, and appliances such as the iRobot Roomba 980, 896 and 676 vacuums. Who wants to have to push a button on a vacuum, after all.

27 Aug 06:16

Deepfakes for dancing: you can now use AI to fake those dance moves you always wanted

by James Vincent

Artificial intelligence is proving to be a very capable tool when it comes to manipulating videos of people. Face-swapping deepfakes have been the most visible example, but new applications are being found every day. The latest? Call it deepfakes for dancing. It uses AI to read someone’s dance moves and copy them on to a target body.

The actual science here was done by a quartet of researchers from UC Berkeley. As they describe in a paper posted on arXiv, their system consists of a number of discrete steps. First, a video of the target is recorded, and a sub-program turns their movements into a stick figure. (Quite a lot of video is needed to get a good-quality transfer: around 20 minutes of footage at 120 frames per second.)...
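As a rough illustration of that first, pose-extraction step only, and not the Berkeley authors' actual pipeline, here is a short sketch that turns each frame of a source video into a list of body landmarks using OpenCV and MediaPipe; the input file name is a placeholder.

```python
# Rough sketch of the pose-extraction step (not the published system): turn
# each frame of a source video into a "stick figure" as a list of body
# landmarks, using OpenCV and MediaPipe.
import cv2
import mediapipe as mp

pose = mp.solutions.pose.Pose(static_image_mode=False)
cap = cv2.VideoCapture("source_dancer.mp4")  # assumed input video

stick_figures = []  # one list of (x, y) landmark coordinates per frame
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB frames; OpenCV delivers BGR.
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        stick_figures.append(
            [(lm.x, lm.y) for lm in results.pose_landmarks.landmark]
        )

cap.release()
print(f"extracted poses for {len(stick_figures)} frames")
```

The full system then learns to map such stick figures back onto images of the target body; that mapping step is where the roughly 20 minutes of footage mentioned above comes in.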

Continue reading…

26 Aug 15:52

When Tim Burton Meets Superheroes [Gallery]

by Geeks are Sexy

We all know Tim Burton for his distinctive dark and gothic style, but apart from his Batman movies (1989, 1992), I’ve never seen the man create art that focuses on superheroes. Now, thanks to artist Andrew Tarusov, we know what superheroes would look like when drawn in the style of “The Nightmare Before Christmas.” Check it out!

[Gallery: Tim Burton-style superhero illustrations by Andrew Tarusov]

And here’s a video presenting some of the artist’s Disney-themed work:

[Source: Andrew Tarusov]

The post When Tim Burton Meets Superheroes [Gallery] appeared first on Geeks are Sexy Technology News.

25 Aug 21:42

Avengers Marvel Legends Series Infinity Gauntlet Articulated Electronic Fist

by Geeks are Sexy

Ever felt like you needed so much control over the Universe that with a swift snap of your fingers you could wipe out half its population (dated that guy once…)? Well, now you can, thanks to this amazing electronic, articulated Infinity Gauntlet! Let your inner Thanos rage out while you imagine your enemies being vaporized into thin air!

- Articulated fingers with fist-lock display mode
- Movie-inspired sound effects
- Pulsating stone glow light effects
- Premium roleplay articulated electronic fist
- Collector-inspired attention to detail

[Avengers Marvel Legends Series Infinity Gauntlet Articulated Electronic Fist]

The post Avengers Marvel Legends Series Infinity Gauntlet Articulated Electronic Fist appeared first on Geeks are Sexy Technology News.

25 Aug 21:33

L'Oréal and ModiFace team up with Facebook on augmented reality

ModiFace, the augmented reality and artificial intelligence company recently acquired by the L'Oréal group, has announced a long-term collaboration with Facebook to create new augmented reality experiences integrated into "Facebook Camera".

17 Aug 17:17

Tesla’s Investors Make the Company. They May Also Ruin Its CEO.

by Victor Tangermann
In an interview with the New York Times, Elon Musk often appeared to be "choked up." Are short-sellers really getting to Tesla's CEO?

Elon Musk is unraveling before our very eyes. The sleepless nights he’s spent at the Tesla Gigafactory, which is running woefully behind schedule; his rather unpredictable behavior on Twitter that’s brought him, and his company, under federal scrutiny. It’s all making the CEO of one of the world’s most innovative car companies more of a liability than an asset.

Musk is giving us a better picture of the toll this has all taken on him. In a recent interview with the New York Times, Musk was frequently “choked up” as he told reporters how his personal life has suffered because of his (admittedly ambitious) work. “There were times when I didn’t leave the factory for three or four days — days when I didn’t go outside,” he tells reporters over an hour-long call. “This has really come at the expense of seeing my kids. And seeing friends.” And he’s still not getting enough sleep. “It is often a choice of no sleep or Ambien.”

The interview comes after a very rough couple of weeks for both Musk and his electric car company. To recap, Musk tweeted about plans for taking Tesla private at $420 a share (about a 20 percent bump over stock prices at the time), with “funding secured.”

That funding was supposed to come from a Saudi Arabian sovereign wealth fund, but Reuters revealed that the funding was anything but secure — the Saudis have “shown no interest so far in financing Tesla Inc,” despite acquiring a 5 percent stake earlier this year.

To add to Musk’s troubles, his rash tweets — that even surprised the company’s board and made stocks go crazy before they were frozen — landed him in hot water with the U.S. Securities and Exchange Commission, which is formally investigating whether Musk manipulated markets illegally.

Musk, though, pointed to a single source of all his suffering: the short-sellers. You may have heard of “buy low, sell high.” Short-selling reverses that order: you sell high first and buy low later. You borrow shares through a broker and sell them at the current price, then buy them back after the stock price falls and return them, pocketing the difference; borrow and sell a share at $300, buy it back at $250, and you keep the $50 spread (minus fees and interest). It only works when a company’s stock falls, though. So short-sellers only benefit when the company (and Musk himself) fails.

In his Times interview, Musk says he’s been suffering “at least a few months of extreme torture from the short-sellers, who are desperately pushing a narrative that will possibly result in Tesla’s destruction.”

Short-sellers have long been a thorn in Musk’s side. It’s not clear why they’re bothering Musk now more than before, but it might be because Tesla’s difficult year (Model 3 production delays and a Model X crash have dragged the stock price down since March) has drawn more short-sellers to Tesla, or because Musk’s own activities (say, a positive quarterly earnings call) have cost the short-sellers huge sums. And his often emotional response to short-sellers hasn’t encouraged them to back off.

Musk isn’t taking their attacks sitting down, though. Taking Tesla private would finally put an end to the short-sellers (you can’t freely buy and sell private company stock) and allow Musk to get some much-needed sleep.

But it’s not clear that Tesla will privatize anytime soon, and it won’t be easy if it does happen — the sheer uncertainty and SEC investigation are bound to scare off investors.

One way to pull it off? Hiring a number-two executive who could take some of that pressure off his shoulders. An unstable Musk who “alternated between laughter and tears” might not be the most attractive to investors. But an even-keeled, level-headed, well-rested number two could help Musk.

While Musk says there’s “no active search right now” according to the interview, Tesla has tried to poach high-ranking executives in the past, including Facebook’s chief operating officer Sheryl Sandberg.

For now, it looks like Musk is staying in his job, despite speculation earlier this year that he’d be ousted (and, apparently, his own desire to leave): “If you have anyone who can do a better job, please let me know. They can have the job. Is there someone who can do the job better? They can have the reins right now,” Musk tells the Times.

But there are other ways Musk could rest easier at night. He could care less about those pesky short-sellers. Or he could heed the Tesla board and stop tweeting.

The post Tesla’s Investors Make the Company. They May Also Ruin Its CEO. appeared first on Futurism.

15 Aug 12:26

This robot maintains tender, unnerving eye contact

by Devin Coldewey

Humans already find it unnerving enough when extremely alien-looking robots are kicked and interfered with, so one can only imagine how much worse it will be when they make unbroken eye contact and mirror your expressions while you heap abuse on them. This is the future we have selected.

The Simulative Emotional Expression Robot, or SEER, was on display at SIGGRAPH here in Vancouver, and it’s definitely an experience. The robot, a creation of Takayuki Todo, is a small humanoid head and neck that responds to the nearest person by making eye contact and imitating their expression.

It doesn’t sound like much, but it’s pretty complex to execute well, which, despite a few glitches, SEER managed to do.

At present it alternates between two modes: imitative and eye contact. Both, of course, rely on a nearby (or, one can imagine, built-in) camera that recognizes and tracks the features of your face in real time.
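For a sense of what that face tracking boils down to, here is a minimal sketch, and not Takayuki Todo's implementation: detect the nearest face with OpenCV and convert its offset from the image center into pan/tilt angles that neck or eye servos could follow. The field-of-view values and the servo side are assumptions.

```python
# Illustrative sketch (not SEER's code): the face-tracking core both modes
# depend on. Detect the nearest face and convert its offset from the image
# center into pan/tilt angles for a robot's neck or eye servos.
import cv2

H_FOV_DEG, V_FOV_DEG = 60.0, 40.0  # assumed camera field of view
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def gaze_angles(frame):
    """Return (pan, tilt) in degrees toward the largest face, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest = nearest
    frame_h, frame_w = frame.shape[:2]
    dx = (x + w / 2 - frame_w / 2) / frame_w   # -0.5 .. 0.5 across the image
    dy = (y + h / 2 - frame_h / 2) / frame_h
    return dx * H_FOV_DEG, -dy * V_FOV_DEG

cap = cv2.VideoCapture(0)  # webcam standing in for the robot's camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    angles = gaze_angles(frame)
    if angles:
        pan, tilt = angles
        # A real robot would command its neck/eye servos here.
        print(f"look pan={pan:.1f} deg, tilt={tilt:.1f} deg")
cap.release()
```

Tracking only the largest detected face is a crude stand-in for "whoever is nearest," but it matches the behavior described here.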

In imitative mode the positions of the viewer’s eyebrows and eyelids, and the position of their head, are mirrored by SEER. It’s not perfect — it occasionally freaks out or vibrates because of noisy face data — but when it worked it managed rather a good version of what I was giving it. Real humans are more expressive, naturally, but this little face with its creepily realistic eyes plunged deeply into the uncanny valley and nearly climbed the far side.

Eye contact mode has the robot moving on its own while, as you might guess, making uninterrupted eye contact with whoever is nearest. It’s a bit creepy, but not in the way that some robots are — when you’re looked at by inadequately modeled faces, it just feels like bad VFX. In this case it was more the surprising amount of empathy you suddenly feel for this little machine.

That’s largely due to the delicate, childlike, neutral sculpting of the face and highly realistic eyes. If an Amazon Echo had those eyes, you’d never forget it was listening to everything you say. You might even tell it your problems.

This is just an art project for now, but the tech behind it is definitely the kind of thing you can expect to be integrated with virtual assistants and the like in the near future. Whether that’s a good thing or a bad one I guess we’ll find out together.