Shared posts

16 Jul 11:15

Citroën launches glasses to fight motion sickness

by Clément
Citroën is coming to the aid of people who get carsick by launching its Seetroën glasses, which, on top of offering a magnificent pun, are meant to eliminate motion sickness.
13 Jul 19:26

A new digital picture frame is nearly indistinguishable from a real canvas

by Luke Dormehl

Digital picture frames are old news. But a new display has our attention, thanks to ambient light sensors and other smart tech that makes it indistinguishable from an actual framed work of art.


03 Jul 19:38

Amazon is opening a second cashier-less Go store in Seattle this fall

by Chaim Gartenberg

Amazon is expanding its Go cashier-less supermarkets, with the company now confirming a second store coming to Seattle this fall, via a report from GeekWire.

The new location will continue the gradual rollout of Amazon’s experimental new retail stores, with locations also in the works for Chicago and San Francisco (although there’s no date yet on when those stores will be opening). The new Seattle store is said to be almost twice as large as the current one, measuring in at 3,000 square feet compared to the existing 1,800-square-foot location.

The first Amazon Go location opened earlier this year, and has customers scan their phones when entering the store. Then, through the use of sensors and cameras, Amazon is able to automatically...


29 Jun 11:53

A student invents an airbag for smartphones

by Félix Mercadante

For many people, smartphones have become an essential, almost indispensable tool in everyday life. So it’s best to keep yours in good shape, especially given prices that only climb year after year. If you are a frequent victim of a cracked smartphone screen, a German student may well have found the solution to your problems.

Philip Frenzel, a mechatronics engineering student, went through this mishap himself: after he tossed his jacket down a stairwell, his brand-new smartphone slipped out of the pocket and broke. He has been looking for a way to keep this kind of incident from happening again ever since. Then he had an idea: an airbag for smartphones.

Your smartphone will never touch the ground again

The principle is fairly simple: thin tabs are embedded in the four corners of the phone; when they detect a fall, they spring out and keep the phone from hitting the ground. Philip Frenzel has already filed a patent for his invention; all he’s missing now is the money to fund bringing it to market.

25 Jun 09:11

You know this stuff?

by CommitStrip

25 Jun 09:10

AltspaceVR Debuts New Hangout Space, New Games, And Leadership Program

by David Jagneaux

When AltspaceVR was shutting down, I felt a bit emotional about it. I’ve always loved MMOs, the communities they develop, and the people that play them, so when the social VR app, one of VR’s first big breakout pieces of software, was poised to end, it rocked me to my core.

I even edited together a video commemorating it and highlighting the final moments.

But, that end never truly came. Microsoft swooped in and saved the company. As a result, it gets to live on, thrive, and continue pushing out updates.

This week we learned about some ambitious new plans. First up is a brand new “Hangout space” in the game, dubbed Origins. Previously, the default hangout area was the Campfire, but the creators felt it was time for a new “sister” space as they call it. Both will coexist.

There will also be a new Community Leaders Program, so new AltspaceVR users can easily find experienced people to talk with and ask questions. All of the leaders will wear badges with a lightning bolt, so you’ll always know they’re someone you can trust.

Finally, there are two new game shows coming into the regular rotation: Tongue-Tied and Trivia:

Tongue-Tied: Get the right word from your brain to your lips. Better be a quick thinker! Players compete to be the best at naming six items based on a category. The trick? You only have six seconds. Play now and see who can think on their feet.

Trivia: Beat your friends to the answer in this fast-paced trivia game played among an audience of people. If you make it to the finals, you’ll face off in a final game show on stage against the best of the best.

Do you still log in to AltspaceVR? Let us know what you’ve been up to lately in the comments below!



25 Jun 08:58

E-Dermis: Feeling At Your (Prosthetic) Fingertips

by Kristina Panos

When we lose a limb, the brain is really none the wiser. It continues to send signals out, but since they no longer have a destination, the person is stuck with one-way communication and a phantom-limb feeling. The fact that the brain carries on has always been promising as far as prostheses are concerned, because it means the electrical signals could potentially be used to control new limbs and digits the natural way.

A diagram of the e-dermis via Science Robotics.

It’s also good news for adding a sense of touch to upper-limb prostheses. Researchers at Johns Hopkins University have spent the last year testing out their concept of an e-dermis—a multi-layer approach to expanding the utility of artificial limbs that can detect the curvature and sharpness of objects.

Like real skin, the e-dermis has an outer, epidermal layer and an inner, dermal layer. Both layers use conductive and piezoresistive textiles to transmit information about tangible objects back to the peripheral nerves in the limb. E-dermis does this non-invasively through the skin using transcutaneous electrical nerve stimulation, better known as TENS. Here’s a link to the full article published in Science Robotics.

First, the researchers made a neuromorphic model of all the nerves and receptors that relay signals to the nervous system. To test the e-dermis, they used 3-D printed objects designed to be grasped between thumb and forefinger, and monitored the subject’s brain activity via EEG.
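
The published neuromorphic model is far more detailed, but the core trick, turning a continuous pressure signal into nerve-like pulse trains whose rate encodes intensity, can be sketched with a simple leaky integrate-and-fire neuron. Here is a minimal Python illustration with made-up parameters; it is an assumption-laden toy, not the researchers' model:

    # Minimal rate-coding sketch (illustrative only, not the researchers' model):
    # a leaky integrate-and-fire neuron turns a pressure reading into spikes,
    # and sharper/harder contact yields a higher spike rate. Every parameter
    # value below is an assumption chosen just to make the demo fire.
    def lif_spike_times(pressure_trace, dt=0.001, tau=0.02, gain=200.0, threshold=1.0):
        v = 0.0                                   # membrane potential
        spikes = []                               # spike times, in seconds
        for i, p in enumerate(pressure_trace):
            v += dt * (-v / tau + gain * p)       # leak toward 0, driven by input
            if v >= threshold:                    # fire and reset
                spikes.append(i * dt)
                v = 0.0
        return spikes

    blunt = lif_spike_times([0.5] * 1000)         # one second of gentle contact
    sharp = lif_spike_times([3.0] * 1000)         # one second of sharp contact
    print(len(blunt), "pulses vs.", len(sharp), "pulses")

A TENS driver could then map that pulse rate onto the frequency of the stimulation delivered to the limb's peripheral nerves.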

For now, the e-dermis is confined to the fingertips. Ideally, it would cover the entire prosthesis and be able to detect temperature as well as curvature. Stay tuned, because it’s next on their list.

Speaking of tunes, here’s a prosthetic arm that uses a neural network to achieve individual finger control and allows its owner to play the piano again.

Thanks for the tip, [Qes].

21 Jun 20:47

229° - Quechua NH150 hiking pullover - light blue (sizes S to 3XL)

€3.99 - Decathlon
» Free in-store pickup
» Rated 4.6/5 from 57 reviews

21 Jun 16:49

The difference between a boss and a leader in 8 creative illustrations

by Thomas R.

In Indonesia, the company Yukbisnis has produced a very successful series of illustrations showing the difference between a leader and a boss when it comes to management style. Rather poetic drawings that always ring true!

Being a boss and being a leader are not quite the same thing. Even though the two terms can seem similar, they imply two completely different notions of management. In the popular imagination, a boss manages directly, sometimes tyrannically, and sees himself in a relationship of domination over his employee, while the leader takes a much more participative approach. He reasons on behalf of the group, and his goal is clear: move the company forward while helping his employees grow.

We bring all this up because an Indonesian company, Yukbisnis, has released a whole series of illustrations that perfectly capture the difference between a boss and a leader. Two completely different approaches to management, which you can discover below through various scenes that are bound to speak to you.

The eight illustrations contrast, in order: taking advantage of someone vs. giving them responsibility; punishing vs. correcting; giving orders vs. encouraging; “I” vs. “we”; intimidating vs. supporting; ordering vs. asking questions; showing that it’s done vs. showing how it’s done; taking the credit vs. congratulating. (Credits: Yukbisnis)


Imagined by: Yukbisnis
Source: boredpanda.com


14 Jun 05:54

Amazon starts shipping its $249 DeepLens AI camera for developers

by Frederic Lardinois

Back at its re:Invent conference in November, AWS announced its $249 DeepLens, a camera that’s specifically geared toward developers who want to build and prototype vision-centric machine learning models. The company started taking pre-orders for DeepLens a few months ago, but now the camera is actually shipping to developers.

Ahead of today’s launch, I had a chance to attend a workshop in Seattle with DeepLens senior product manager Jyothi Nookula and Amazon’s VP for AI Swami Sivasubramanian to get some hands-on time with the hardware and the software services that make it tick.

DeepLens is essentially a small Ubuntu- and Intel Atom-based computer with a built-in camera that’s powerful enough to easily run and evaluate visual machine learning models. In total, DeepLens offers about 106 GFLOPS of performance.

The hardware has all of the usual I/O ports (think Micro HDMI, USB 2.0, audio out, etc.) to let you create prototype applications, no matter whether those are simple toy apps that send you an alert when the camera detects a bear in your backyard or an industrial application that keeps an eye on a conveyor belt in your factory. The 4-megapixel camera isn’t going to win any prizes, but it’s perfectly adequate for most use cases. Unsurprisingly, DeepLens is deeply integrated with the rest of AWS’s services. Those include the AWS IoT service Greengrass, which you use to deploy models to DeepLens, for example, but also SageMaker, Amazon’s newest tool for building machine learning models.

These integrations are also what makes getting started with the camera pretty easy. Indeed, if all you want to do is run one of the pre-built samples that AWS provides, it shouldn’t take you more than 10 minutes to set up your DeepLens and deploy one of these models to the camera. Those project templates include an object detection model that can distinguish between 20 objects (though it had some issues with toy dogs, as you can see in the image above), a style transfer example to render the camera image in the style of van Gogh, a face detection model, a model that can distinguish between cats and dogs, and one that can recognize about 30 different actions (like playing guitar, for example). The DeepLens team is also adding a model for tracking head poses. Oh, and there’s also a hot dog detection model.

But that’s obviously just the beginning. As the DeepLens team stressed during our workshop, even developers who have never worked with machine learning can take the existing templates and easily extend them. In part, that’s due to the fact that a DeepLens project consists of two parts: the model and a Lambda function that runs instances of the model and lets you perform actions based on the model’s output. And with SageMaker, AWS now offers a tool that also makes it easy to build models without having to manage the underlying infrastructure.
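
To make that two-part structure concrete, here is a rough sketch of what the device-side inference Lambda looks like, patterned on AWS's published DeepLens samples. The awscam and greengrasssdk call names are recalled from those samples and should be treated as assumptions, and the model path is a placeholder:

    # Hypothetical DeepLens inference loop, patterned on AWS's sample projects.
    # The awscam/greengrasssdk call names are assumptions from those samples,
    # and the model path is a placeholder.
    import awscam            # device-side module preinstalled on DeepLens
    import greengrasssdk     # publishes results back through AWS IoT

    iot = greengrasssdk.client('iot-data')
    model = awscam.Model('/opt/awscam/artifacts/my_model.xml', {'GPU': 1})

    def infinite_infer_run():
        while True:
            ret, frame = awscam.getLastFrame()    # grab the latest camera frame
            if not ret:
                continue
            result = model.doInference(frame)     # run the deployed model
            # The Lambda half is where you act on the output; here we simply
            # publish the raw result to an MQTT topic.
            iot.publish(topic='deeplens/inference', payload=str(result))

    infinite_infer_run()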

You could do a lot of the development on the DeepLens hardware itself, given that it is essentially a small computer, though you’re probably better off using a more powerful machine and then deploying to DeepLens using the AWS Console. If you really wanted to, you could use DeepLens as a low-powered desktop machine as it comes with Ubuntu 16.04 pre-installed.

For developers who know their way around machine learning frameworks, DeepLens makes it easy to import models from virtually all the popular tools, including Caffe, TensorFlow, MXNet and others. It’s worth noting that the AWS team also built a model optimizer for MXNet models that allows them to run more efficiently on the DeepLens device.

So why did AWS build DeepLens? “The whole rationale behind DeepLens came from a simple question that we asked ourselves: How do we put machine learning in the hands of every developer,” Sivasubramanian said. “To that end, we brainstormed a number of ideas and the most promising idea was actually that developers love to build solutions as hands-on fashion on devices.” And why did AWS decide to build its own hardware instead of simply working with a partner? “We had a specific customer experience in mind and wanted to make sure that the end-to-end experience is really easy,” he said. “So instead of telling somebody to go download this toolkit and then go buy this toolkit from Amazon and then wire all of these together. […] So you have to do like 20 different things, which typically takes two or three days and then you have to put the entire infrastructure together. It takes too long for somebody who’s excited about learning deep learning and building something fun.”

So if you want to get started with deep learning and build some hands-on projects, DeepLens is now available on Amazon. At $249, it’s not cheap, but if you are already using AWS — and maybe even use Lambda already — it’s probably the easiest way to get started with building this kind of machine learning-powered application.

12 Jun 20:17

Scotty Allen’s PCB Fab Tour is Like Willy Wonka’s for Hardware Geeks

by Mike Szczys

The availability of low-cost, insanely high-quality PCBs has really changed how we do electronics. Here at Hackaday we see people ditching home fabrication with increasing frequency, and going to small-run fab for their prototypes and projects. Today you can get a look at the types of factory processes that make that possible. [Scotty Allen] just published a (sponsored) tour of a PCB fab house that shows off the incredible machine tools and chemical baths that are never pondered by the world’s electronics consumers. If you have an appreciation for PCBs, it’s a joy to follow a design through the process, so take your coffee break and let this video roll.

Several parts of this will be very familiar. The photo-resist and etching process for 2-layer boards is more or less the same as it would be in your own workshop. Of course the panels are much larger than you’d ever try at home, and they’re not using a food storage container and homemade etchant. In fact the processes are by and large automated, which makes sense considering the volume a factory like this is churning through. Even moving stacks of boards around the factory is shown being handled by automated trolleys.

Six-headed PCB drilling machine (four heads in use here).

What we find most interesting about this tour is the multi-layer board process, the drilling machines, and the solder mask application. For boards that use more than two layers, the designs are built from the inside out, adding substrate and copper foil layers as they go. It’s neat to watch but we’re still left wondering how the inner layers are aligned with the outer. If you have insight on this please sound off in the comments below.

The drilling process isn’t so much a surprise as it is a marvel to see huge machines with six drill heads working on multiple boards at one time. It sure beats a Dremel drill press. The solder mask process is one that we don’t often see shown off. The ink for the mask is applied to the entire board and baked just to make it tacky. A photo process is then utilized which works in much the same way photoresist works for copper etching. Transparent film with patterns printed on it cures the solder mask that should stay, while the rest is washed away in the next step.

Boards continue through the process to get silk screen, surface treatment, and routing to separate individual boards from panels. Electrical testing is performed, and the candy-making PCB fab process is complete. From start to finish, seeing the consistency and speed of each step is very satisfying.

Looking to do a big run of boards? You may find [Brian Benchoff’s] panelization guide of interest.

08 Jun 20:33

A prototype flexible Arduino announced in a video

by Pierre Lecourt

The Arduino line of microcontroller boards may gain a new model, still in development for now. The flexible Arduino was presented as a prototype developed by a specialist company called NextFlex.

The entire circuit is printed with a silver-based conductive ink on a flexible plastic substrate one millimeter thick. It carries the same electronics as a classic Arduino Mini board, but it can twist and bend to hug various shapes and thus withstand all sorts of mechanical stresses.


The only solid element of the assembly is the microcontroller itself, which is “potted” under a layer of translucent plastic. The microcontroller die is laid bare and is nearly paper-thin, which makes it easy to integrate onto the substrate without stiffening it. That leaves the resistors and capacitors, which remain rigid but which NextFlex thinks it will eventually be able to print as well.


The uses for this kind of flexible Arduino are extremely varied. Just among the projects readers have shown me in the past, I can see several that could benefit from this kind of development. One reader was building a motorcycle jacket that displayed messages with LEDs, with an Arduino board embedded under a protective shell. It could show messages via LEDs on the back of the jacket, and also signal the rider’s actions with turn signals that synced with the bike’s, lighting up the sides of the suit. On that kind of device, a flexible board would do away with the rigid plate and restore a bit more freedom of movement where the chip has to be embedded.

Source: Arduino


04 Jun 18:55

DO NOT Buy This $100 Smart Lock...


03 Jun 20:32

Microsoft has reportedly acquired GitHub

by Tom Warren

Microsoft has reportedly acquired GitHub, and could announce the deal as early as Monday. Bloomberg reports that the software giant has agreed to acquire GitHub, and that the company chose Microsoft partly because of CEO Satya Nadella. Business Insider first reported that Microsoft had been in talks with GitHub recently.

GitHub is a vast code repository that has become popular with developers and companies hosting their projects, documentation, and code. Apple, Amazon, Google, and many other big tech companies use GitHub. Microsoft is the top contributor to the site, and has more than 1,000 employees actively pushing code to repositories on GitHub. Microsoft even hosts its own original Windows File Manager source code on GitHub. The...


26 May 21:08

GDPR

By clicking anywhere, scrolling, or closing this notification, you agree to be legally bound by the witch Sycorax within a cloven pine.
25 May 11:08

She recreates her self-portrait in 50 different cartoon styles

by Mélissa N.

Illustrator Sam Skinner had fun revisiting her own self-portrait in the style of 50 famous cartoons and comics. A now-familiar gimmick, but as entertaining and effective as ever!

Think back to your childhood and there is bound to be one cartoon, or several, that left a mark on you. For Sam Skinner, the artist we’re telling you about today, there are a good fifty of them. At just 24, this young creative has mastered a range of artistic styles. She has become a master in the art of reinterpreting other artists’ styles, and you’re about to understand why.

Sam Skinner had fun revisiting her own self-portrait in the style of 50 different cartoons. The Simpsons, Futurama, Adventure Time, Totally Spies, American Dad… everything gets a turn, and we have to admit she does it rather well. “I decided to take on the challenge because I saw so many people doing it online that I wanted to do it too,” Sam Skinner explains. The artist admits she did the project to improve her drawing skills.


Indeed, her ultimate goal is to draw inspiration from the greats in order to find her own artistic style and create her own comic. While waiting to see that project come to life, we invite you to visit her Instagram account to learn more about the artist and her art. In the same spirit, you can also check out the project by illustrator Kells O’Hickey, who had fun revisiting his couple photos in 10 different cartoon styles.

The styles covered, in order: The Powerpuff Girls, Adventure Time, Tim Burton, Sailor Moon, Archer, Gorillaz, Disney, The Simpsons, SpongeBob SquarePants, Dexter’s Laboratory, Avatar, Futurama, South Park, Scooby-Doo, Danny Phantom, Peanuts, Gravity Falls, The Fairly OddParents, Studio Ghibli, Rick and Morty, Family Guy, Bob’s Burgers, Minecraft, Super Mario Bros, Phineas and Ferb, Kim Possible, Happy Tree Friends, Teen Titans Go, Totally Spies, Winx Club, Hey Arnold, Monster High, Pokémon, Garfield, Sonic, American Dad, Justice League, Codename: Kids Next Door, Total Drama Island, Naruto, Yu-Gi-Oh!, My Life as a Teenage Robot, 6teen, Samurai Jack, Invader Zim, Johnny Test, One Piece, Code Lyoko, and The Life and Times of Juniper Lee. (Credits: Sam Skinner)


Imagined by: Sam Skinner
Source: boredpanda.com


21 May 08:16

Hands-on with the RED Hydrogen One, a wildly ambitious smartphone

by Dieter Bohn

Come for the holograms, stay for the modules


21 May 08:10

Tiny Sideways Tetris on a Business Card

by Donald Papp

Everyone recognizes Tetris, even when it’s tiny Tetris played sideways on a business card. [Michael Teeuw] designed these PCBs and they sport small OLED screens to display contact info. The Tetris game is actually a hidden easter egg; a long press on one of the buttons starts it up.

It turns out that getting a playable Tetris onto the ATtiny85 microcontroller was a challenge. Drawing lines and shapes is easy with resources like TinyOLED or Adafruit’s SSD1306 library, but drawing realtime graphics onto the 128×32 OLED that way requires a full screen buffer, and at one bit per pixel that’s 512 bytes, the ATtiny85’s entire available RAM.

To solve this problem, [Michael] avoids the need for a screen buffer by calculating the data to be written to the OLED on the fly. In addition, making the smallest drawable element a 4×4 pixel square reduces the memory needed to track the screen contents, since the playfield can be stored as a coarse grid of cells instead of individual pixels. [Michael] also detailed the PCB design and board assembly phases for those of you interested in the process of putting together the cards using a combination of hot air reflow and hand soldering.
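
To see why this works, note that each byte written to an SSD1306 covers 8 vertical pixels of one column, so a page’s bytes can be derived on demand from a coarse grid of 4×4 cells. The sketch below illustrates the idea in Python for readability (the real project is C on the ATtiny85), and the exact grid layout is our assumption, not [Michael]’s code:

    # Buffer-free SSD1306 rendering sketch. With 4x4-pixel cells on a 128x32
    # panel, each display byte can be computed on demand from a 32x8 grid of
    # cells, so no 512-byte framebuffer is needed.
    WIDTH, PAGES, CELL = 128, 4, 4    # 4 pages of 8 pixel rows each

    def page_bytes(grid, page):
        """Yield the 128 bytes of one SSD1306 page straight from the cell grid."""
        top = grid[2 * page]          # cells covering the page's upper 4 rows
        bottom = grid[2 * page + 1]   # cells covering its lower 4 rows
        for x in range(WIDTH):
            cx = x // CELL
            # Low nibble = upper 4 pixels of the column, high nibble = lower 4.
            yield (0x0F if top[cx] else 0x00) | (0xF0 if bottom[cx] else 0x00)

    grid = [[False] * (WIDTH // CELL) for _ in range(2 * PAGES)]
    grid[0][0] = grid[1][0] = True    # light a 4x8 block at the top-left
    print(list(page_bytes(grid, 0))[:8])   # -> [255, 255, 255, 255, 0, 0, 0, 0]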

PCB business cards showcase all kinds of cleverness. The Magic 8-Ball Business Card is refreshingly concise, and the project that became the Arduboy had milled cutouts to better fit components, keeping everything super slim.

21 May 08:03

Microsoft acquires conversational AI startup Semantic Machines to help bots sound more lifelike

by Catherine Shu

Microsoft announced today that it has acquired Semantic Machines, a Berkeley-based startup that wants to solve one of the biggest challenges in conversational AI: making chatbots sound more human and less like, well, bots.

In a blog post, Microsoft AI & Research chief technology officer David Ku wrote that “with the acquisition of Semantic Machines, we will establish a conversational AI center of excellence in Berkeley to push forward the boundaries of what is possible in language interfaces.”

According to Crunchbase, Semantic Machines was founded in 2014 and raised about $20.9 million in funding from investors including General Catalyst and Bain Capital Ventures.

In a 2016 profile, co-founder and chief scientist Dan Klein told TechCrunch that “today’s dialog technology is mostly orthogonal. You want a conversational system to be contextual so when you interpret a sentence things don’t stand in isolation.” By focusing on memory, Semantic Machines claims its AI can produce conversations that not only answer or predict questions more accurately, but also flow naturally, something that Siri, Google Assistant, Alexa, Microsoft’s own Cortana and other virtual assistants still struggle to accomplish.
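
Nothing public describes how Semantic Machines implements this, but a toy example shows what “contextual” buys you: the second utterance below cannot be interpreted on its own, only against remembered state.

    # Toy illustration of contextual dialog state (not Semantic Machines' tech):
    # the follow-up utterance is ambiguous in isolation and only resolvable
    # against the state carried over from the first one.
    state = {}

    def interpret(utterance):
        if utterance.startswith("book a flight to "):
            state["destination"] = utterance[len("book a flight to "):]
            state["day"] = "today"
        elif utterance.startswith("actually, make it "):
            # On its own this sentence means nothing; with memory it's an edit.
            state["day"] = utterance[len("actually, make it "):]
        return "flight to {destination} ({day})".format(**state)

    print(interpret("book a flight to Boston"))    # flight to Boston (today)
    print(interpret("actually, make it Tuesday"))  # flight to Boston (Tuesday)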

Instead of building its own consumer products, Semantic Machines focused on enterprise customers. This means it will fit in well with Microsoft’s conversational AI-based products. These include Microsoft Cognitive Services and Azure Bot Service, which the company says are used by one million and 300,000 developers, respectively, and its virtual assistants Cortana and XiaoIce.

15 May 20:16

What Is Electro Swing?

Electro Swing, explained in 2 minutes.

11 May 21:21

No one knows how Google Duplex will work with eavesdropping laws

by Sarah Jeong

In Google’s demonstration of its new AI assistant Duplex this week, the voice assistant calls a hair salon to book an appointment, carrying on a human-seeming conversation, with the receptionist at the other end seemingly unaware that she is speaking to an AI. Robots don’t literally have ears, and in order to “hear” and analyze the audio coming from the other end, the conversation is being recorded. But about a dozen states — including California — require everyone in the phone call to consent before a recording can be made.

It’s not clear how these eavesdropping laws affect Google Duplex. In fact, it’s so unclear that we can’t get a straight answer out of Google.

A Google spokesperson told The Verge during I/O, where Duplex was...


11 May 16:05

We’re DOOMED: Boston Dynamics’ Atlas Robot Can Now Run Outside and Jump Autonomously! [Video]

by Geeks are Sexy

We’re doomed. DOOMED. Watch:

Atlas is the latest in a line of advanced humanoid robots we are developing. Atlas’ control system coordinates motions of the arms, torso and legs to achieve whole-body mobile manipulation, greatly expanding its reach and workspace. Atlas’ ability to balance while performing tasks allows it to work in a large volume while occupying only a small footprint. The Atlas hardware takes advantage of 3D printing to save weight and space, resulting in a remarkably compact robot with high strength-to-weight ratio and a dramatically large workspace. Stereo vision, range sensing and other sensors give Atlas the ability to manipulate objects in its environment and to travel on rough terrain. Atlas keeps its balance when jostled or pushed and can get up if it tips over.

I now want to see the Atlas robot running after SpotMini, Boston Dynamics’ dog robot.

Oh, and speaking of SpotMini, it can now “navigate” autonomously:

[BostonDynamics]


09 May 21:49

Introducing ML Kit

by Google Devs

Posted by Brahim Elbouchikhi, Product Manager

In today's fast-moving world, people have come to expect mobile apps to be intelligent - adapting to users' activity or delighting them with surprising smarts. As a result, we think machine learning will become an essential tool in mobile development. That's why on Tuesday at Google I/O, we introduced ML Kit in beta: a new SDK that brings Google's machine learning expertise to mobile developers in a powerful, yet easy-to-use package on Firebase. We couldn't be more excited!



Machine learning for all skill levels

Getting started with machine learning can be difficult for many developers. Typically, new ML developers spend countless hours learning the intricacies of implementing low-level models, using frameworks, and more. Even for the seasoned expert, adapting and optimizing models to run on mobile devices can be a huge undertaking. Beyond the machine learning complexities, sourcing training data can be an expensive and time-consuming process, especially when considering a global audience.

With ML Kit, you can use machine learning to build compelling features, on Android and iOS, regardless of your machine learning expertise. More details below!

Production-ready for common use cases

If you're a beginner who just wants to get the ball rolling, ML Kit gives you five ready-to-use ("base") APIs that address common mobile use cases:

  • Text recognition
  • Face detection
  • Barcode scanning
  • Image labeling
  • Landmark recognition

With these base APIs, you simply pass in data to ML Kit and get back an intuitive response. For example: Lose It!, one of our early users, used ML Kit to build several features in the latest version of their calorie tracker app. Using our text-recognition-based API and a custom-built model, their app can quickly capture nutrition information from product labels to input a food’s content from an image.

ML Kit gives you both on-device and Cloud APIs, all in a common and simple interface, allowing you to choose the ones that fit your requirements best. The on-device APIs process data quickly and will work even when there's no network connection, while the cloud-based APIs leverage the power of Google Cloud Platform's machine learning technology to give a higher level of accuracy.
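
ML Kit itself is called from Android and iOS apps, so there is no Python surface for it, but since the cloud-based APIs ride on Google Cloud’s vision stack, a standalone Cloud Vision call gives a feel for the same round trip. Treat this sketch as an analogy rather than the ML Kit API; the google-cloud-vision client has also changed shape across releases:

    # Analogy only: ML Kit is an Android/iOS SDK, but its cloud-based text
    # recognition is backed by Google Cloud's vision stack, so a standalone
    # Cloud Vision call shows the same kind of request/response round trip.
    # Requires GOOGLE_APPLICATION_CREDENTIALS to point at a service account.
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()

    with open("nutrition_label.jpg", "rb") as f:   # e.g. a product label photo
        image = vision.Image(content=f.read())

    response = client.text_detection(image=image)
    if response.text_annotations:
        # The first annotation aggregates all text detected in the image.
        print(response.text_annotations[0].description)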

See these APIs in action in your Firebase console.

Heads up: We're planning to release two more APIs in the coming months. First is a smart reply API allowing you to support contextual messaging replies in your app, and the second is a high density face contour addition to the face detection API. Sign up here to give them a try!

Deploy custom models

If you're seasoned in machine learning and you don't find a base API that covers your use case, ML Kit lets you deploy your own TensorFlow Lite models. You simply upload them via the Firebase console, and we'll take care of hosting and serving them to your app's users. This way you can keep your models out of your APK/bundles which reduces your app install size. Also, because ML Kit serves your model dynamically, you can always update your model without having to re-publish your apps.
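
The console upload itself is point-and-click; what you upload is a TensorFlow Lite flatbuffer, produced roughly like the snippet below (the converter API has been renamed across TensorFlow versions, and the model path here is a placeholder):

    # Converting a trained TensorFlow model to the TensorFlow Lite format that
    # ML Kit hosts and serves. The converter API has moved around between TF
    # versions; this uses the tf.lite.TFLiteConverter form.
    import tensorflow as tf

    converter = tf.lite.TFLiteConverter.from_saved_model("export/my_model")
    converter.optimizations = [tf.lite.Optimize.DEFAULT]   # shrink for mobile
    tflite_model = converter.convert()

    with open("my_model.tflite", "wb") as f:
        f.write(tflite_model)   # this file is what you upload in the console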

But there is more. As apps have grown to do more, their size has increased, harming app store install rates, and with the potential to cost users more in data overages. Machine learning can further exacerbate this trend since models can reach tens of megabytes in size. So we decided to invest in model compression. Specifically, we are experimenting with a feature that allows you to upload a full TensorFlow model, along with training data, and receive in return a compressed TensorFlow Lite model. The technology behind this is evolving rapidly and so we are looking for a few developers to try it and give us feedback. If you are interested, please sign up here.

Better together with other Firebase products

Since ML Kit is available through Firebase, it's easy for you to take advantage of the broader Firebase platform. For example, Remote Config and A/B testing lets you experiment with multiple custom models. You can dynamically switch values in your app, making it a great fit to swap the custom models you want your users to use on the fly. You can even create population segments and experiment with several models in parallel.

Other examples include:

Get started!

We can't wait to see what you'll build with ML Kit. We hope you'll love the product like many of our early customers do.

Get started with the ML Kit beta by visiting your Firebase console today. If you have any thoughts or feedback, feel free to let us know - we're always listening!

09 May 21:47

Microsoft’s Snip Insights puts A.I. technology into a screenshot-taking tool

by Sarah Perez

A team of Microsoft interns has thought up a new way to put A.I. technology to work – in a screenshot snipping tool. Microsoft today is launching its project, Snip Insights, a Windows desktop app that lets you retrieve intelligent insights – or even turn a scan of a textbook or report into an editable document – when you take a screenshot on your PC.

The team’s manager challenged the interns to think up a way to integrate A.I. into a widely used tool, one used by millions.

They decided to try a screenshotting tool, like the Windows Snipping Tool or Snip, a previous project from Microsoft’s internal incubator, Microsoft Garage. The team went with the latter because it would be easier to release as an independent app.

Their new tool leverages Cloud AI services in order to do more with screenshots – like convert images to translated text, automatically detect and tag image content, and more.

For example, you could screenshot a photo of a great pair of shoes you saw on a friend’s Facebook page, and the tool could search the web to help you find where to buy them. (This part of its functionality is similar to what’s already offered today by Pinterest). 

The tool can also take a scanned image of a document, and turn a screenshot of that into editable text.
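
Snip Insights ships as a Windows desktop app, but the screenshot-to-text step it delegates to Azure’s Cognitive Services can be exercised directly over REST. In rough Python, and assuming the v2.0 OCR endpoint in the westus region (both assumptions to adapt):

    # Rough sketch of the Cognitive Services OCR round trip a tool like Snip
    # Insights delegates to. The region/endpoint/version here are assumptions;
    # substitute your own subscription key and resource endpoint.
    import requests

    KEY = "your-computer-vision-key"
    URL = "https://westus.api.cognitive.microsoft.com/vision/v2.0/ocr"

    with open("screenshot.png", "rb") as f:
        resp = requests.post(
            URL,
            headers={
                "Ocp-Apim-Subscription-Key": KEY,
                "Content-Type": "application/octet-stream",
            },
            data=f.read(),
        )

    # The JSON response nests words inside lines inside regions.
    for region in resp.json().get("regions", []):
        for line in region["lines"]:
            print(" ".join(word["text"] for word in line["words"]))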

And it can identify famous people, places or landmarks in the images you capture with a screenshot.

Although it’s a relatively narrow use case for A.I., the Snip Insights tool is an interesting example of how A.I. technology can be integrated into everyday productivity tools – and the potential that lies ahead as A.I. becomes a part of even simple pieces of software.

The tool is being released as a Microsoft Garage project, and it’s open source.

The Snip Insights GitHub repository will be maintained by the Cloud AI team going forward.

08 May 06:35

Microsoft launches Project Brainwave, its deep learning acceleration platform

by Frederic Lardinois

Microsoft today announced at its Build conference the preview launch of Project Brainwave, its platform for running deep learning models in its Azure cloud and on the edge in real time.

While some of Microsoft’s competitors, including Google, are betting on custom chips, Microsoft continues to bet on FPGAs to accelerate its models, and Brainwave is no exception. Microsoft argues that FPGAs give it more flexibility than designing custom chips and that the performance it achieves on standard Intel Stratix FPGAs is at least comparable to that of custom chips.

Last August, the company first detailed some aspects of Brainwave, which consists of three distinct layers: a high-performance distributed architecture; a hardware deep neural networking engine that has been synthesized onto the FPGAs; and a compiler and runtime for deploying the pre-trained models.

Microsoft is attaching the FPGAs right to its overall data center network, which allows them to become something akin to hardware microservices. The advantage here is high throughput and a large latency reduction because this architecture allows Microsoft to bypass the CPU of a traditional server to talk directly to the FPGAs.

When Microsoft first announced Brainwave, the software stack supported both the Microsoft Cognitive Toolkit and Google’s TensorFlow frameworks.

Brainwave is now in preview on Azure and Microsoft also promises to bring support for it to Azure Stack and the Azure Data Box appliance.

08 May 06:35

Microsoft Kinect lives on as a new sensor package for Azure

by Frederic Lardinois

Microsoft’s Kinect motion-sensing camera for its Xbox consoles was walking dead for the longest time. Last October, it finally passed away peacefully — or so we thought. At its Build developer conference today, Microsoft announced that it is bringing back the Kinect brand and its standout time-of-flight camera tech, but not for a game console. Instead, the company announced Project Kinect for Azure, a new sensor package that combines the Kinect camera with an onboard computer in a small unit that developers can integrate into their own projects.

The company says that Project Kinect for Azure can handle fully articulated hand tracking and that it can be used for high-fidelity spatial mapping. Based on these capabilities, it’s easy to imagine the use of Project Kinect for many robotics and surveillance applications.

“Project Kinect for Azure unlocks countless new opportunities to take advantage of Machine Learning, Cognitive Services and IoT Edge,” Microsoft technical fellow — and father of the HoloLens — Alex Kipman writes today. “We envision that Project Kinect for Azure will result in new AI solutions from Microsoft and our ecosystem of partners, built on the growing range of sensors integrating with Azure AI services.”

The camera will have a 1024×1024 resolution, the company says, and Microsoft will also use the same camera in the next generation of its HoloLens helmet.

“Project Kinect for Azure brings together this leading hardware technology with Azure AI to empower developers with new scenarios for working with ambient intelligence,” Microsoft explains in today’s announcement. And indeed, it looks like the main idea here is to combine the company’s camera tech with its cloud-based machine learning tools — the pre-build and customized models from the Microsoft Cognitive Services suite and the IoT Edge platform for edge computing workloads.

06 May 20:31

Make Your Very Own Infinity Gauntlet at Home [Video]

by Geeks are Sexy

Here’s a tutorial on how to make your very own light-up foam Infinity Gauntlet! Cosplayer and artist Hendo made hers using a pair of Darth Vader gloves from Amazon. Check it out!

[Hendo Art]

The post Make Your Very Own Infinity Gauntlet at Home [Video] appeared first on Geeks are Sexy Technology News.

06 May 17:04

We Now Have A Working Nuclear Reactor for Other Planets — But No Plan For Its Waste

by Claudia Geib

If the power goes out in your home, you can usually settle in with some candles, a flashlight, and a good book. You wait it out, because the lights will probably be back on soon.

But if you’re on Mars, your electricity isn’t just keeping the lights on — it’s literally keeping you alive. In that case, a power outage becomes a much bigger problem.

NASA scientists think they’ve found a way to avoid that possibility altogether: creating a nuclear reactor. This nuclear reactor, known as Kilopower, is about the size of a refrigerator and can be safely launched into space alongside any celestial voyagers; astronauts can start it up either while they’re still in space, or after landing on an extraterrestrial body.

The Kilopower prototype just aced a series of major tests in Nevada that simulated an actual mission, including failures that could have compromised its safety (but didn’t).

This nuclear reactor would be a “game changer” for explorers on Mars, Lee Mason, NASA Space Technology Mission Directorate (STMD) principal technologist for Power and Energy Storage, said in a November 2017 NASA press release. Just one device could provide enough power to support an extraterrestrial outpost for 10 years, and do so without some of the issues inherent to solar power, namely: being interrupted at night or blocked for weeks or months during Mars’ epic dust storms.

“It solves those issues and provides a constant supply of power regardless of where you are located on Mars,” Mason said in the press release. He also noted that a nuclear-powered habitat could mean that humans could land in a greater number of landing sites on Mars, including high latitudes where there’s not much light but potentially lots of ice for astronauts to use.

Nuclear power is not an unusual feature in space; the Voyager 1 and 2 spacecraft, now whizzing through deep space after departing our solar system, have been running on nuclear energy since they launched in the 1970s. The same is true for the Mars rover Curiosity since it landed on the Red Planet in 2012.

But we’d need a lot more reactors to colonize planets. And that could pose a problem of what to do with the waste.

An artist’s depiction of a series of Kilopower reactors, working in concert to power an extraterrestrial outpost. (Image Credit: NASA)

According to Popular Mechanics, Kilopower reactors create electricity through active nuclear fission — in which atoms are cleaved apart to release energy. You need solid uranium-235 to do it, which is housed in a reactor core about the size of a roll of paper towels. Eventually, that uranium-235 is going to be “spent,” just like fuel rods in Earth-based reactors, and put nearby humans at risk.

When that happens, the uranium core will have to be stored somewhere safe; spent reactor fuel is still dangerously radioactive and releases lots of heat. On Earth, most spent fuel rods are stored in pools of water that keep the rods cool, preventing them from catching fire and blocking the radiation they emit. But on another planet, we’d need any available water to, you know, keep humans alive.

So we’d need another way to cool spent radioactive fuel. It’s possible that the spent fuel could be stored in shielded casks in lava tubes or designated parts of the surface, since the Moon and Mars are so cold, though that introduces the risk that someone might accidentally bump into them.

Right now, all we can do is speculate — as far as we know, NASA doesn’t have any publicly available plan for what to do with spent nuclear fuel on extraterrestrial missions. That could be because the Kilopower prototype just proved itself actually feasible. But not knowing what to do with the waste from it seems like an unusual oversight, since NASA is planning to go back to the Moon, and then to Mars, by the early 2030s.

And in case you were wondering, no, you can’t just shoot the nuclear waste off into deep space or into the sun; NASA studied that way back in the 1970s and determined it was a pretty terrible idea. Back to the drawing board.


02 May 07:09

A Telepresence System That’s Starting To Feel Like A Holodeck

by James Hobson

[Dr. Roel Vertegaal] has led a team of collaborators from [Queen’s University] to build TeleHuman 2 — a telepresence setup that aims to project your actual-size likeness in 3D.

Developed primarily for business videoconferencing, the setup requires a bit of space on both ends of the call. A ring of stereoscopic z-cameras captures the subject from all angles, and the projectors on the other end display what they see. Those projectors are arranged in a similar halo above a human-sized, retro-reflective cylindrical screen which can be walked around, letting you view the image from any angle, in real time, without a VR headset or glasses!

One of the greatest advantages of this method is that the ring of projectors makes your likeness viewable to a group, since the image isn’t locked to the perspective of one individual. Conceivably, this also allows one person to be in many places at once, so to speak. The team argues that since body language is an integral part of communication, this telepresence method will ultimately bring long-distance interactions closer to the feeling of a face-to-face conversation.

It would be awesome to see this technology develop further, where the cameras and projectors could allow the user an area of free movement — for, say, more sweeping gestures or pacing about — on both ends of the call to really sell the illusion of the person being in the same physical space.

OK, that might be a little far-fetched for now. We haven’t advanced to holodeck tech quite yet, but we’re getting there.

[Thanks for the tip, Qes!]

30 Apr 19:38

Reading an article on mobile in 2018

by CommitStrip