Shared posts

15 Aug 12:25

This Lego ISD Tyrant Star Destroyer Has Over 35,000 Pieces [Pics]

by Geeks are Sexy

Imgur user doomhandle built this gigantic ISD Tyrant Star Destroyer using approximately 35,000 Lego bricks! This thing is over 56 inches (1.4 m) long and weighs about 70 lbs (32 kg). You can see a comparison between doomhandle’s model and the official LEGO Imperial Star Destroyer set (75055) above.

Here’s a video walkthrough showing off the ship’s interior levels, crew, etc.:

[Source: doomhandle on Imgur]

The post This Lego ISD Tyrant Star Destroyer Has Over 35,000 Pieces [Pics] appeared first on Geeks are Sexy Technology News.

14 Aug 13:49

Amazon Dash-powered WePlenish smart pantry device orders snacks, coffee, more

by Clayton Moore

A Florida-based startup has started taking pre-orders for its new smart pantry, the WePlenish Java Smart Container, which can automatically order snacks and coffee when supplies start to run low.

The post Amazon Dash-powered WePlenish smart pantry device orders snacks, coffee, more appeared first on Digital Trends.

10 Aug 19:26

Dog Thoughts Translator

by staff

Deepen the bond you have with your canine companion with this dog thoughts translator device. It analyzes your dog’s thought patterns and translates its brain activity into human language. It’s mostly thoughts of eating cat poop and chasing squirrels though.

Check it out

$300.00

10 Aug 16:15

Unexplained phenomenon

by CommitStrip

10 Aug 12:38

Ibuki is the 10-year-old robot child that will haunt your dreams

by John Biggs

Professor Hiroshi Ishiguro makes robots in Osaka. His latest robot, Ibuki, is one for the nightmare catalog: It’s a robotic 10-year-old boy that can move on little tank treads and has a soft rubbery face and hands.

The robot has complete vision routines that can scan for faces, and it has a sort of half-track system for moving around. It has “involuntary” motions like blinking and little head bobs but is little more than a proof of concept right now, especially considering its weird robo-skull is transparent.

“An Intelligent Robot Infrastructure is an interaction-based infrastructure. By interacting with robots, people can establish nonverbal communications with the artificial systems. That is, the purpose of a robot is to exist as a partner and to have valuable interactions with people,” wrote Ishiguro. “Our objective is to develop technologies for the new generation information infrastructures based on Computer Vision, Robotics and Artificial Intelligence.”

Ishiguro is a roboticist who plays on the borders of humanity. He made a literal copy of himself in 2010. His current robots are even more realistic, and Ibuki’s questing face and delicate hands are really very cool. That said, expect those soft rubber hands to one day close around your throat when the robots rise up to take back what is theirs. Good luck, humans!

09 Aug 08:09

This robot uses AI to find Waldo, thereby ruining Where’s Waldo

by Dami Lee

If you’re totally stumped on a page of Where’s Waldo and ready to file a missing persons report, you’re in luck. Now there’s a robot called There’s Waldo that’ll find him for you, complete with a silicone hand that points him out.

Built by creative agency Redpepper, There’s Waldo zeroes in and finds Waldo with sniper-like accuracy. The metal robotic arm is a Raspberry Pi-controlled uArm Swift Pro equipped with a Vision Camera Kit that allows for facial recognition. The camera takes a photo of the page, which is then run through OpenCV to find possible Waldo faces. The faces are then sent for analysis to Google’s AutoML Vision service, which has been trained on photos of Waldo. If the robot determines a match with 95...

Continue reading…
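
Redpepper hasn’t published the robot’s code, but the pipeline described above is easy to sketch. A minimal, hypothetical version using OpenCV’s stock face detector, with classify_waldo() standing in for the Google AutoML Vision call:

    import cv2

    CONFIDENCE_THRESHOLD = 0.95  # the match level the article describes

    def find_waldo(page_image_path, classify_waldo):
        """Detect candidate faces locally, then ask a cloud model which is Waldo.

        classify_waldo(crop) -> float is a stand-in for the AutoML Vision
        request, returning the model's confidence that the crop shows Waldo.
        """
        image = cv2.imread(page_image_path)
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

        # Generic Haar-cascade face detector that ships with OpenCV.
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)

        best = None
        for (x, y, w, h) in faces:
            score = classify_waldo(image[y:y + h, x:x + w])
            if score >= CONFIDENCE_THRESHOLD and (best is None or score > best[0]):
                best = (score, (x + w // 2, y + h // 2))  # point for the arm to tap

        return best  # None if nothing clears the threshold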

03 Aug 20:18

Want an extra arm? A third thumb? Check out these awesome robotic appendages

by Luke Dormehl

Ever wished for a pair of extra arms, a third leg, or a sixth finger on each hand? Check out this list of amazing robo-prosthetics which promise to take your multitasking to the next level.

The post Want an extra arm? A third thumb? Check out these awesome robotic appendages appeared first on Digital Trends.

03 Aug 20:13

It’s over for Musical.ly!

by Morgan Fromentin

You may already have seen videos of people dancing and singing just about anywhere, set to a well-known song. That’s what the Musical.ly service offered. Offered, past tense, because the service has disappeared!

Read more

02 Aug 13:05

Google Home’s too boring? You want Gatebox’s cute virtual character in your life

by Andy Boxall

Do you want to share your life with a virtual character? If so, then your dream has taken one step closer to reality thanks to Gatebox. Like Alexa, it will control your smart home; but the character that lives inside the Gatebox can recognize you, interact with you, and even celebrate with you.

The post Google Home’s too boring? You want Gatebox’s cute virtual character in your life appeared first on Digital Trends.

02 Aug 12:55

Master Replicas Group’s HAL-9000 Bluetooth speaker is now on Indiegogo

by Andrew Liptak

2018 marks the 50th anniversary of Stanley Kubrick’s classic science fiction film 2001: A Space Odyssey. To commemorate the occasion, Master Replicas Group has opened preorders for its replica of the film’s iconic computer HAL-9000.

The company announced earlier this year that it would produce the replica, which would also double as a speaker. At the time, Master Replicas Group CEO Steve Dymszo told The Verge that the prop would use Amazon’s Echo technology to turn it into a functional virtual assistant, and the company would begin taking preorders in April, with shipping occurring this fall. That timeline seems to have slipped: Master Replicas Group says that it redesigned the entire device several months ago. With preorders starting...

Continue reading…

30 Jul 07:09

A rare Magic: The Gathering card just sold for more than a Porsche

by Bruce Brown

An Alpha Black Lotus game card just sold on eBay for $88,000, proving that collectible game cards from Magic: The Gathering continue to appreciate in value. The card's scarcity and Gem Mint condition led to its record-breaking sale price.

The post A rare Magic: The Gathering card just sold for more than a Porsche appeared first on Digital Trends.

27 Jul 17:30

Magic Leap details what its mixed reality OS will look like

by Lucas Matney

Magic Leap just updated its developer documentation, and a host of new details and imagery are being spread around on Reddit and Twitter, sharing more specifics on how the company’s Lumin OS will look on its upcoming Magic Leap One device.

It’s mostly a large heaping of nitty-gritty details, but we also get a clearer view into how Magic Leap sees interactions with its product looking and the directions developers are being encouraged to move in. Worth noting off the bat: these GIFs/images appear to be mock-ups or screenshots rather than images shot directly through Magic Leap tech.

Alright, first, this is what the Magic Leap One home screen will apparently look like. It’s worth noting that Magic Leap appears to include some of its own stock apps on the device, which was completely expected but is something the company hasn’t discussed much.

Also worth noting is that Magic Leap’s operating system by and large looks like most other operating systems. The company seems well aware that flat interfaces are far easier to navigate, so you’re not going to be engaging with 3D assets just for the sake of doing so.

Here’s a look at a media gallery app on Magic Leap One.

Here’s a look at an avatar system.

The company seems to be distinguishing between two basic app types for developers: immersive apps and landscape apps. Landscape apps, like the one in the image above, appear to be Magic Leap’s version of 2D: interfaces are mostly flat but have some depth, and they live inside a box called a prism that fits spatially into your environment. It seems that you’ll be able to have several of these running simultaneously.

Immersive apps, on the other hand, like the Dr. Grordbort’s game title that Magic Leap has been teasing for years, respond to the geometry of the space that you are in, hence the name.

Here’s a video of an immersive experience in action.

Moving beyond apps, the company also had a good deal to share about how you interact with what’s happening in the headset.

We got a look at some hand controls and what that may look like.

When it comes to text input, an area where AR/VR systems have struggled, it looks like you’ll have a reasonable range of options. Magic Leap will have a companion smartphone app that you can type into, you can connect a Bluetooth keyboard, and there will also be an onscreen keyboard with dictation capabilities.

One of the big highlights of Magic Leap tech is that you’ll be able to share perspectives of these apps in a multi-player experience, which we now know is called “casting.” Apps that utilize this feature will simply have a button that you can press to share an experience with a contact. No details on what the setup process looks like beyond that, though.

Those are probably the most interesting insights, although there’s plenty of other stuff in the Creator Portal. Here are a few other images to keep you going.

[Image gallery]

It really seems like the startup is finally getting ready to showcase something. The company says that its device will begin shipping this summer and is already in developer hands. Based on what Magic Leap has shown here, the interface looks like it’ll feel very familiar as opposed to some other AR interfaces that have adopted a pretty heavy-handed futuristic look.

27 Jul 16:23

Animatronic Puppet Takes Cues From Animation Software

by Steven Dufresne

Lip syncing for computer animated characters has long been simplified. You draw a set of lip shapes for vowels and other sounds your character makes and let the computer interpolate how to go from one shape to the next. But with physical, real-world puppets, all those movements have to be done manually, frame by frame. Or do they?

Billy Whiskers: animatronic puppet

Stop motion animator and maker/hacker [James Wilkinson] is working on a project involving a real-world furry cat character called Billy Whiskers and decided that Billy’s lips would be moved one frame at a time using servo motors under computer control while [James] moves the rest of the body manually.

He toyed around with a number of approaches for making the lip mechanism before coming up with one that worked the way he wanted. The lips are shaped using guitar wire soldered to other wires going to servos further back in the head. Altogether there are four servos for the lips and one more for the jaw. There isn’t much sideways movement but it does enough and lets the brain fill in the rest.

On the software side, he borrows heavily from the tools used for lip syncing computer-drawn characters. He created virtual versions of the five servo motors in Adobe Animate and manipulates them to define the different lip shapes. Animate then does the interpolation between the different shapes, producing the servo positions needed for each frame. An AS3 script sends those positions off to an Arduino, and an Arduino sketch uses the Firmata library to receive the positions and move the servos. The result is entirely convincing, as you can see in the trailer below. We’ve also included a video which summarizes the iterations he went through to get to the finished Billy Whiskers, or you can check out his detailed website.
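
[James] used AS3 on the sending side, but the handoff is easy to picture in any language. A minimal sketch of the same idea using pyFirmata instead, with a hypothetical list of per-frame servo angles exported from Animate:

    import time
    from pyfirmata import Arduino

    FRAME_RATE = 12  # assumed; stop motion is often shot at 12 fps

    # Hypothetical Animate export: one row per frame, one angle per servo
    # (four lip servos plus the jaw).
    frames = [
        [90, 90, 90, 90, 100],
        [85, 95, 88, 92, 110],
        # ...
    ]

    board = Arduino("/dev/ttyUSB0")  # Arduino running the standard Firmata sketch
    # Servo pin numbers are assumptions; 'd:9:s' means digital pin 9 in servo mode.
    servos = [board.get_pin(f"d:{pin}:s") for pin in (3, 5, 6, 9, 10)]

    for frame in frames:
        for servo, angle in zip(servos, frame):
            servo.write(angle)  # Firmata servo write takes an angle in degrees
        # For real stop motion you'd pause here and shoot one photo per frame.
        time.sleep(1.0 / FRAME_RATE)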

[James]’s work shows that there are many ways to do stop motion animation, perhaps part of what makes it so much fun. One of those ways is to 3D print a separate object for each character shape. Another is to make paper cutouts and move them around, which is what [Terry Gilliam] did for the Monty Python movies. And then there’s what many of us did when we first got our hands on a camera: move random objects around on our parents’ kitchen table and shoot them one frame at a time.

26 Jul 19:45

Amazon Rekognition Falsely Matched 28 Members of Congress to Mugshots

by Kristin Houser

Amazon's facial recognition software is far from perfect, an ACLU experiment shows. Maybe Amazon should stop marketing it to government agencies.

YOU ARE NOT A MATCH. The American Civil Liberties Union (ACLU) just spent $12.33 to test a system that could quite literally cost people their lives. On Thursday, the nonprofit organization, which focuses on preserving the rights of U.S. citizens, published a blog post detailing its test of Rekognition, a facial identification tool developed and sold by Amazon.

Using Rekognition’s default setting of 80 percent confidence (meaning the system was 80 percent certain it was correct when it signaled a match), the ACLU scanned a database of 25,000 publicly available mugshots, looking to match them to photos of every sitting member of Congress, in both the Senate and the House of Representatives.

Rekognition matched 28 Congresspeople to mugshots. It was wrong, but it found matches anyway.

THE POLICE AND P.O.C. Not only did Rekognition mistakenly believe that those 28 Congresspeople were the same people in the mugshots, but the people it wrongfully matched were disproportionately people of color; while people of color make up just 20 percent of Congress, according to the ACLU, they accounted for 39 percent of the false matches.

“People of color are already disproportionately harmed by police practices, and it’s easy to see how Rekognition could exacerbate that,” wrote the ACLU in its post. “An identification — whether accurate or not — could cost people their freedom or even their lives.”

AMAZON’S RESPONSE. An Amazon spokesperson told The Verge that poor calibration was likely the reason Rekognition falsely matched so many members of Congress. “While 80 percent confidence is an acceptable threshold for photos of hot dogs, chairs, animals, or other social media use cases, it wouldn’t be appropriate for identifying individuals with a reasonable level of certainty,” said the spokesperson. They said Amazon recommends at least a 95 percent confidence threshold for any situation where a match might have significant consequences, such as when used by law enforcement agencies.
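
The confidence threshold the two sides are debating is a literal request parameter. A minimal sketch with boto3 (the ACLU hasn’t published its exact code; the collection name here is hypothetical):

    import boto3

    rekognition = boto3.client("rekognition")

    # Search a mugshot collection for faces matching one member's photo.
    with open("member_photo.jpg", "rb") as f:
        response = rekognition.search_faces_by_image(
            CollectionId="mugshot-collection",  # hypothetical: the 25,000 mugshots
            Image={"Bytes": f.read()},
            FaceMatchThreshold=80,  # the default confidence cutoff the ACLU used
            MaxFaces=5,
        )

    for match in response["FaceMatches"]:
        print(match["Face"]["FaceId"], match["Similarity"])

    # Per Amazon's response, law enforcement uses should instead pass
    # FaceMatchThreshold=95, which suppresses lower-confidence matches.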

HALT! While anyone can use the system relatively cheaply, as the ACLU did (Rekognition, according to its website, “charges you only for the images processed, minutes of video processed, and faces stored”), Amazon is actively marketing Rekognition to government and law enforcement agencies. Several are already using the service.

The ACLU isn’t the only organization actively petitioning against this use; Amazon shareholders and employees have urged the company to stop providing Rekognition to government agencies. So have dozens of civil rights groups and tens of thousands of members of the public.

So far, Amazon has not indicated that it plans to comply with these requests. But perhaps, if members of Congress see just how flawed the system really is, they’ll be compelled to take action, placing a halt on any law enforcement use of facial recognition software, as the ACLU requests in its blog post.

READ MORE: Amazon’s Face Recognition Falsely Matched 28 Members of Congress With Mugshots [ACLU]

More on Rekognition: Police Surveillance Is Getting a Helping Hand From…Amazon!

The post Amazon Rekognition Falsely Matched 28 Members of Congress to Mugshots appeared first on Futurism.

24 Jul 19:11

New Dialogflow features: how to use them to expand your Actions’ customer support capabilities

by Google Devs

Posted by Mary Chen, Product Marketing Manager, and Ralfi Nahmias, Product Manager, Dialogflow

Today at Google Cloud Next '18, Dialogflow is introducing several new beta features to expand conversational capabilities for customer support and contact centers. Let's take a look at how three of these features can be used with the Google Assistant to improve the customer care experience for your Actions.

Create Actions smarter and faster with Knowledge Connectors Beta

Building conversational Actions for content-heavy use cases, such as FAQ or knowledge base answers, is difficult. Such content is often dense and unstructured, making accurate intent modeling time-consuming and prone to error. Dialogflow's Knowledge Connectors feature simplifies the development process by understanding and automatically curating questions and responses from the content you provide. It can add thousands of extracted responses directly to your conversational Action built with Dialogflow, giving you more time for the fun parts – building rich and engaging user experiences.

Try out Knowledge Connectors in this bike shop sample

Understand user texts better with Automatic Spelling Correction

When users interact with the Google Assistant through text, it's common and natural to make spelling and grammar mistakes. When mistypes occur, Actions may not understand the user's intent, resulting in a poor follow-up experience. With Dialogflow's Automatic Spelling Correction, Actions built with Dialogflow can automatically correct spelling mistakes, which significantly improves intent and entity matching. Automatic Spelling Correction uses technology similar to what's used in Google Search and other Google products.

Enable Automatic Spelling Correction to improve intent and entity matching
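
These features are enabled on the agent rather than per request, so the developer-facing surface is still an ordinary detect-intent call. A minimal sketch, assuming the google-cloud-dialogflow Python client (the project and session IDs are placeholders):

    from google.cloud import dialogflow

    def detect_intent(project_id: str, session_id: str, text: str) -> None:
        """Send one user utterance to a Dialogflow agent and print the reply."""
        session_client = dialogflow.SessionsClient()
        session = session_client.session_path(project_id, session_id)

        query_input = dialogflow.QueryInput(
            text=dialogflow.TextInput(text=text, language_code="en-US")
        )
        response = session_client.detect_intent(
            request={"session": session, "query_input": query_input}
        )
        result = response.query_result
        # With Automatic Spelling Correction enabled on the agent, a mistyped
        # utterance like the one below can still match the intended intent.
        print("Matched intent:", result.intent.display_name)
        print("Response:", result.fulfillment_text)

    # A misspelled query that spelling correction can rescue.
    detect_intent("my-gcp-project", "session-123", "wat are yuor store hours")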

Assign a phone number to your Action with Phone Gateway Beta

Your Action can now be used as a virtual phone agent with Dialogflow's new Phone Gateway integration. Assign a working phone number to your Action built with Dialogflow, and it can start taking calls immediately. Phone Gateway allows you to easily implement virtual agents without needing to stitch together multiple services required for building phone applications.

Set up Phone Gateway in 3 easy steps

Dialogflow's Knowledge Connectors, Automatic Spelling Correction, and Phone Gateway are free for Standard Edition agents up to certain limits; for enterprise needs, see here for more options.

We look forward to the Actions you'll build with these new Dialogflow features. Give the features a try with the Cloud Next FAQ Action we made:

  • Download the Github sample
  • Say "Hey Google, talk to Next helper" on your Google Assistant-enabled device
  • Call +1 317-978-0364 (which uses Dialogflow's Phone Gateway)

And if you're new to developing for the Google Assistant, join our Cloud Next talk this Thursday at 9am – see you on the livestream or in person!

24 Jul 18:58

Fujikura Kasei DOTITE, electrically conductive paste

by Charbax

Fujikura Kasei Co., Ltd. is a polymer resin manufacturer. Since its foundation, the company has pursued innovative technological development as a chemical manufacturer and developed high value-added products, including DOTITE, its electrically conductive paste.

24 Jul 18:57

StretchSense Sensors for Smart Clothing

by Charbax

StretchSense creates soft polymer sensing arrays: stretchy sensors for smart clothing. Each is a seven-layer capacitive sensor that measures any movement. Over 130 people work in the factory, which can produce any garment size with high consistency and at low cost.
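
StretchSense doesn’t publish its conversion math, but the principle of a capacitive stretch sensor is simple: stretching the elastomer enlarges the electrode area and thins the dielectric, so capacitance rises roughly in proportion to strain. A hypothetical reading, with invented calibration constants:

    def strain_from_capacitance(c_measured_pf, c_rest_pf=100.0, gain=1.0):
        """Estimate strain from a capacitive stretch sensor reading.

        Assumes capacitance grows roughly linearly with stretch, so the
        relative capacitance change approximates strain. c_rest_pf and
        gain are invented calibration constants, not StretchSense's.
        """
        return gain * (c_measured_pf - c_rest_pf) / c_rest_pf

    # e.g. 125 pF on a sensor that reads 100 pF at rest => ~25% strain
    print(strain_from_capacitance(125.0))  # 0.25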

Interviewed at the IDTechEx Show! USA: http://IDTechEx.com/USA

24 Jul 18:57

First Graphene flame resistant coating

by Charbax

IDTechEx Research Director Dr Khasha Ghaffarzadeh interviews Dr Andy Goodwin, Advanced Materials Advisor, and Warwick Grigor, Non-Executive Chairman of First Graphene, about their fire retardant additive at the IDTechEx Show! Europe 2018.

First Graphene is an advanced materials company seeking to position itself in the lowest-cost decile of global graphene suppliers. It has developed an environmentally sound and safe method of converting its supplies of ultra-high-grade Sri Lankan graphite into the lowest-cost, highest-quality graphene, in bulk quantities. In doing so, it is addressing the three greatest impediments to the commercialisation of graphene: reliable quality, at realistic prices, in sufficient volumes to facilitate the development of applications in modern materials, energy storage devices, coatings and polymers. It aims to use these competitive advantages to access new technologies and processes, and in turn to gain maximum leverage over the entire graphene supply chain, from sourcing the raw material to end use, with development of associated intellectual property for licensing and sales.

For more information about First Graphene, visit

Learn more about the IDTechEx Show! Berlin: http://IDTechEx.com/Europe

Learn more about the IDTechEx Show! USA: http://IDTechEx.com/USA

Printed Electronics World - The source for global news on printed, organic and flexible electronics, interpreted by experts: http://printedelectronicsworld.com

24 Jul 13:24

He recreates the ship explosions from Star Wars… with cotton and LEDs

by Claire L.

A lover of scale models and the Star Wars saga, internet user “Plasticstarwars” has fun recreating, with striking realism, the ship explosions from the films using… cotton and LEDs.

With imagination and a good dose of creativity, you don’t necessarily need much to produce impressive special effects. And the artist we’re presenting today certainly won’t tell you otherwise. Known on the web under the pseudonym Plasticstarwars, this creator is, as you might guess, a true devotee of the science fiction saga imagined by George Lucas.

A genuine lover of Star Wars models and cosplay of all kinds, Plasticstarwars makes his own special effects. He simulates the explosions of Star Wars spaceships to perfection using nothing more than cotton, with LEDs for lighting.

And we have to admit the result is absolutely stunning! A fine demonstration of creativity which proves, once again, that you don’t need big budgets to showcase your talent. Below are a few of Plasticstarwars’s creations; head over to his Instagram profile to discover more of his universe and his passion.

Credits: Plasticstarwars


Created by: Plasticstarwars
Source: rebrn.com

This article, “He recreates the ship explosions from Star Wars… with cotton and LEDs,” originally appeared on the Creapills blog, the go-to media for creative ideas and marketing innovation.

16 Jul 11:15

Citroën launches glasses to fight motion sickness

by Clément
Citroën comes to the aid of people who get carsick with the launch of its Seetroën glasses which, in addition to offering a magnificent pun, eliminate motion sickness.

13 Jul 19:26

A new digital picture frame is nearly indistinguishable from a real canvas

by Luke Dormehl

Digital picture frames are old news. But a new display has our attention, thanks to ambient light sensors and other smart tech that makes it indistinguishable from an actual framed work of art.

The post A new digital picture frame is nearly indistinguishable from a real canvas appeared first on Digital Trends.

03 Jul 19:38

Amazon is opening a second cashier-less Go store in Seattle this fall

by Chaim Gartenberg

Amazon is expanding its Go cashier-less supermarkets, with the company now confirming a second store coming to Seattle this fall, via a report from GeekWire.

The new location will continue the gradual rollout of Amazon’s experimental new retail stores, with locations also in the works for Chicago and San Francisco (although there’s no date yet on when those stores will be opening). The new Seattle store is said to be almost twice as large as the current one, measuring in at 3,000 square feet compared to the existing 1,800-square-foot location.

The first Amazon Go location opened earlier this year, and has customers scan their phones when entering the store. Then, through the use of sensors and cameras, Amazon is able to automatically...

Continue reading…

29 Jun 11:53

A student invents an airbag for smartphones

by Félix Mercadante

For many people, smartphones have become an essential, almost indispensable tool in everyday life. So it’s best to keep them in good condition, especially given prices that only keep rising year after year. If your smartphone screen is often the victim of a fall, a German student may well have found the solution to your problems.

Philip Frenzel, a mechatronics engineering student, experienced this mishap himself: after he threw his jacket down a staircase, his brand-new smartphone slipped out of the pocket and broke. Ever since, he has been looking for a way to keep this kind of incident from happening again. Then he had an idea: an airbag for smartphones.

Your smartphone will never touch the ground again

The principle is fairly simple: thin spring blades are embedded in the four corners of the phone. When they detect a fall, they deploy and keep the phone from hitting the ground. Philip Frenzel has already filed a patent for his invention; all he needs now is the funding to bring it to market.
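
The article doesn’t explain the detection side, but the standard approach is an accelerometer watching for free fall: while a phone is dropping, the net measured acceleration collapses toward zero g. A hypothetical sketch of that trigger logic (all thresholds invented):

    FREE_FALL_G = 0.3  # below ~0.3 g total acceleration suggests free fall
    TRIGGER_MS = 60    # must persist this long to rule out brief jolts

    def should_deploy(samples):
        """samples: list of (timestamp_ms, g_total) accelerometer readings."""
        run_start = None
        for t, g in samples:
            if g < FREE_FALL_G:
                if run_start is None:
                    run_start = t
                if t - run_start >= TRIGGER_MS:
                    return True  # sustained free fall: release the spring blades
            else:
                run_start = None
        return False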

The article “A student invents an airbag for smartphones” first appeared on GOLEM13.FR.

25 Jun 09:11

Do you know your stuff?

by CommitStrip

25 Jun 09:10

AltspaceVR Debuts New Hangout Space, New Games, And Leadership Program

by David Jagneaux

When AltspaceVR was shutting down, I felt a bit emotional about it. I’ve always loved MMOs, the communities they develop, and the people that play them, so when the social VR app, one of VR’s first big breakout pieces of software, was poised to end, it rocked me to my core.

I even edited together a video commemorating it and highlighting the final moments.

But, that end never truly came. Microsoft swooped in and saved the company. As a result, it gets to live on, thrive, and continue pushing out updates.

This week we learned about some ambitious new plans. First up is a brand new “Hangout space” in the game, dubbed Origins. Previously, the default hangout area was the Campfire, but the creators felt it was time for a new “sister” space, as they call it. Both will coexist.

There will also be a new Community Leaders Program, so new AltspaceVR users can easily find experienced people to talk with and ask questions. All of the leaders will be wearing badges with a lightning bolt, so you’ll always know it’s someone you can trust.

Finally, there are two new game shows coming into the regular rotation: Tongue-Tied and Trivia:

Tongue-Tied: Get the right word from your brain to your lips. Better be a quick thinker! Players compete to be the best at naming six items based on a category. The trick? You only have six seconds. Play now and see who can think on their feet.

Trivia: Beat your friends to the answer in this fast-paced trivia game played among an audience of people. If you make it to the finals, you’ll face off in a final game show on stage against the best of the best.

Do you still log in to AltspaceVR? Let us know what you’ve been up to lately down in the comments below!


The post AltspaceVR Debuts New Hangout Space, New Games, And Leadership Program appeared first on UploadVR.

25 Jun 08:58

E-Dermis: Feeling At Your (Prosthetic) Fingertips

by Kristina Panos

When we lose a limb, the brain is really none the wiser. It continues to send signals out, but since they no longer have a destination, the person is stuck with one-way communication and a phantom-limb feeling. The fact that the brain carries on has always been promising as far as prostheses are concerned, because it means the electrical signals could potentially be used to control new limbs and digits the natural way.

A diagram of the e-dermis via Science Robotics.

It’s also good news for adding a sense of touch to upper-limb prostheses. Researchers at Johns Hopkins University have spent the last year testing out their concept of an e-dermis, a multi-layer approach to expanding the utility of artificial limbs that can detect the curvature and sharpness of objects.

Like real skin, the e-dermis has an outer, epidermal layer and an inner, dermal layer. Both layers use conductive and piezoresistive textiles to transmit information about tangible objects back to the peripheral nerves in the limb. E-dermis does this non-invasively through the skin using transcutaneous electrical nerve stimulation, better known as TENS. Here’s a link to the full article published in Science Robotics.
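
The paper’s control loop is more sophisticated, but the basic flow, piezoresistive readings in, nerve stimulation out, can be sketched in a few lines. Everything below (the taxel count, the mapping) is a hypothetical illustration, not taken from the Science Robotics paper:

    def edermis_step(read_taxel, stimulate, n_taxels=4):
        """One hypothetical sensing-to-stimulation cycle.

        read_taxel(i) -> 0..1023 raw reading from piezoresistive element i
        stimulate(freq_hz) -> drive the TENS electrodes at that pulse rate
        """
        readings = [read_taxel(i) for i in range(n_taxels)]
        pressure = sum(readings) / (1023.0 * n_taxels)  # normalized 0..1

        # Sharp objects load one element strongly; curved ones load many gently.
        sharpness = max(readings) / 1023.0 - pressure

        # Map perceived intensity to a TENS pulse frequency (values invented).
        stimulate(10 + 80 * pressure + 40 * sharpness)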

First, the researchers made a neuromorphic model of all the nerves and receptors that relay signals to the nervous system. To test the e-dermis, they used 3-D printed objects designed to be grasped between thumb and forefinger, and monitored the subject’s brain activity via EEG.

For now, the e-dermis is confined to the fingertips. Ideally, it would cover the entire prosthesis and be able to detect temperature as well as curvature. Stay tuned, because it’s next on their list.

Speaking of tunes, here’s a prosthetic arm that uses a neural network to achieve individual finger control and allows its owner to play the piano again.

Thanks for the tip, [Qes].

21 Jun 20:47

229° - Quechua NH150 hiking pullover - light blue (S to 3XL)

€3.99 - Decathlon
» Free in-store pickup
» Rated 4.6 / 5 from 57 reviews

21 Jun 16:49

La différence entre un patron et un leader à travers 8 illustrations créatives

by Thomas R.

In Indonesia, the company Yukbisnis has produced a very well-executed series of illustrations showing the difference between a leader and a boss in management style. Rather poetic drawings, and always spot on!

Being a boss and being a leader are not quite the same thing. Even if the two terms can seem similar, they imply two completely different notions of management. In the popular imagination, a boss manages in a direct, sometimes tyrannical way and sees himself in a relationship of domination with his employees; a leader, on the other hand, takes a much more participative approach. He reasons in the name of the group, and his objective is clear: move the company forward while helping its employees grow.

If we’re telling you all this, it’s because an Indonesian company, Yukbisnis, has released a whole series of illustrations that perfectly capture the difference between a boss and a leader. Two completely different approaches to management, which you can discover below through scenes that are bound to resonate with you.

Taking advantage of someone vs Giving them responsibility

Punishing vs Correcting

Giving orders vs Encouraging

“I” vs “We”

Intimidating vs Supporting

Ordering vs Asking

Showing that it’s done vs Showing how it’s done

Taking the credit vs Congratulating

Credits: Yukbisnis

Created by: Yukbisnis
Source: boredpanda.com

This article, “The difference between a boss and a leader in 8 creative illustrations,” originally appeared on the Creapills blog, the go-to media for creative ideas and marketing innovation.

14 Jun 05:54

Amazon starts shipping its $249 DeepLens AI camera for developers

by Frederic Lardinois

Back at its re:Invent conference in November, AWS announced its $249 DeepLens, a camera that’s specifically geared toward developers who want to build and prototype vision-centric machine learning models. The company started taking pre-orders for DeepLens a few months ago, but now the camera is actually shipping to developers.

Ahead of today’s launch, I had a chance to attend a workshop in Seattle with DeepLens senior product manager Jyothi Nookula and Amazon’s VP for AI Swami Sivasubramanian to get some hands-on time with the hardware and the software services that make it tick.

DeepLens is essentially a small Ubuntu- and Intel Atom-based computer with a built-in camera that’s powerful enough to easily run and evaluate visual machine learning models. In total, DeepLens offers about 106 GFLOPS of performance.

The hardware has all of the usual I/O ports (think Micro HDMI, USB 2.0, Audio out, etc.) to let you create prototype applications, no matter whether those are simple toy apps that send you an alert when the camera detects a bear in your backyard or an industrial application that keeps an eye on a conveyor belt in your factory. The 4 megapixel camera isn’t going to win any prizes, but it’s perfectly adequate for most use cases. Unsurprisingly, DeepLens is deeply integrated with the rest of AWS’s services. Those include the AWS IoT service Greengrass, which you use to deploy models to DeepLens, for example, but also SageMaker, Amazon’s newest tool for building machine learning models.

These integrations are also what make getting started with the camera pretty easy. Indeed, if all you want to do is run one of the pre-built samples that AWS provides, it shouldn’t take you more than 10 minutes to set up your DeepLens and deploy one of these models to the camera. Those project templates include an object detection model that can distinguish between 20 objects (though it had some issues with toy dogs, as you can see in the image above), a style transfer example to render the camera image in the style of van Gogh, a face detection model, a model that can distinguish between cats and dogs, and one that can recognize about 30 different actions (like playing guitar, for example). The DeepLens team is also adding a model for tracking head poses. Oh, and there’s also a hot dog detection model.

But that’s obviously just the beginning. As the DeepLens team stressed during our workshop, even developers who have never worked with machine learning can take the existing templates and easily extend them. In part, that’s due to the fact that a DeepLens project consists of two parts: the model and a Lambda function that runs instances of the model and lets you perform actions based on the model’s output. And with SageMaker, AWS now offers a tool that also makes it easy to build models without having to manage the underlying infrastructure.
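
Roughly, modeled on AWS’s published DeepLens sample projects (the model path, model type, and IoT topic below are placeholders, not from the article), the Lambda half of a project looks like this:

    import json

    import awscam          # preinstalled on the DeepLens device
    import greengrasssdk   # Greengrass SDK for talking back to AWS IoT

    MODEL_PATH = "/opt/awscam/artifacts/my-model.xml"  # placeholder artifact
    THRESHOLD = 0.5

    client = greengrasssdk.client("iot-data")
    model = awscam.Model(MODEL_PATH, {"GPU": 1})  # run on the onboard GPU

    while True:
        ret, frame = awscam.getLastFrame()  # grab the latest camera frame
        if not ret:
            continue
        # doInference runs the deployed model; parseResult decodes SSD output.
        detections = model.parseResult("ssd", model.doInference(frame))["ssd"]
        hits = [d for d in detections if d["prob"] > THRESHOLD]
        if hits:
            # "Perform actions based on the model's output": publish to IoT.
            client.publish(topic="deeplens/detections", payload=json.dumps(hits))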

You could do a lot of the development on the DeepLens hardware itself, given that it is essentially a small computer, though you’re probably better off using a more powerful machine and then deploying to DeepLens using the AWS Console. If you really wanted to, you could use DeepLens as a low-powered desktop machine as it comes with Ubuntu 16.04 pre-installed.

For developers who know their way around machine learning frameworks, DeepLens makes it easy to import models from virtually all the popular tools, including Caffe, TensorFlow, MXNet and others. It’s worth noting that the AWS team also built a model optimizer for MXNet models that allows them to run more efficiently on the DeepLens device.

So why did AWS build DeepLens? “The whole rationale behind DeepLens came from a simple question that we asked ourselves: How do we put machine learning in the hands of every developer,” Sivasubramanian said. “To that end, we brainstormed a number of ideas and the most promising idea was actually that developers love to build solutions in a hands-on fashion on devices.” And why did AWS decide to build its own hardware instead of simply working with a partner? “We had a specific customer experience in mind and wanted to make sure that the end-to-end experience is really easy,” he said. “So instead of telling somebody to go download this toolkit and then go buy this toolkit from Amazon and then wire all of these together. […] So you have to do like 20 different things, which typically takes two or three days and then you have to put the entire infrastructure together. It takes too long for somebody who’s excited about learning deep learning and building something fun.”

So if you want to get started with deep learning and build some hands-on projects, DeepLens is now available on Amazon. At $249, it’s not cheap, but if you are already using AWS — and maybe even use Lambda already — it’s probably the easiest way to get started with building these kinds of machine learning-powered applications.

12 Jun 20:17

Scotty Allen’s PCB Fab Tour is Like Willy Wonka’s for Hardware Geeks

by Mike Szczys

The availability of low-cost, insanely high-quality PCBs has really changed how we do electronics. Here at Hackaday we see people ditching home fabrication with increasing frequency, and going to small-run fab for their prototypes and projects. Today you can get a look at the types of factory processes that make that possible. [Scotty Allen] just published a (sponsored) tour of a PCB fab house that shows off the incredible machine tools and chemical baths that are never pondered by the world’s electronics consumers. If you have an appreciation for PCBs, it’s a joy to follow a design through the process, so take your coffee break and let this video roll.

Several parts of this will be very familiar. The photo-resist and etching process for 2-layer boards is more or less the same as it would be in your own workshop. Of course the panels are much larger than you’d ever try at home, and they’re not using a food storage container and homemade etchant. In fact the processes are by and large automated, which makes sense considering the volume a factory like this is churning through. Even moving stacks of boards around the factory is shown being handled by automated trolleys.

Six-headed PCB drilling machine (four heads in use here).

What we find most interesting about this tour is the multi-layer board process, the drilling machines, and the solder mask application. For boards that use more than two layers, the designs are built from the inside out, adding substrate and copper foil layers as they go. It’s neat to watch but we’re still left wondering how the inner layers are aligned with the outer. If you have insight on this please sound off in the comments below.

The drilling process isn’t so much a surprise as it is a marvel to see huge machines with six drill heads working on multiple boards at one time. It sure beats a Dremel drill press. The solder mask process is one that we don’t often see shown off. The ink for the mask is applied to the entire board and baked just to make it tacky. A photo process is then utilized which works much in the same way photoresist works for copper etching. Transparent film with patterns printed on it cures the solder mask that should stay, while the rest is washed away in the next step.

Boards continue through the process to get silk screen, surface treatment, and routing to separate individual boards from panels. Electrical testing is performed, and the Willy Wonka-like PCB fab process is complete. From start to finish, seeing the consistency and speed of each step is very satisfying.

Looking to do a big run of boards? You may find [Brian Benchoff’s] panelization guide of interest.