Shared posts

24 Sep 20:52

An Etch A Sketch controlled by a Raspberry Pi

by Pierre Lecourt

The principle behind an Etch A Sketch is fairly simple: a magnetic powder clings to a glass plate, making it opaque. By turning two knobs, one for the horizontal axis and one for the vertical axis, you can draw on this surface by scraping that powder away.


Two axes driven by rotary knobs: a perfect interface to be controlled by stepper motors. You just have to give the knobs an origin point, with the stylus that draws on the screen sitting in one corner of the visible area, then measure how many knob turns it takes to reach the diagonally opposite corner.

By then dividing the width and height into units of measure so as to encode a movement scale, a Raspberry Pi board can drive motors to move each axis forward or backward. All that remains is to feed the board a simplified image so that it can reproduce it, transcribing the strokes into coordinates on both axes, and watch the Etch A Sketch come to life and draw the requested image for you.
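The scale-and-coordinate step described here can be sketched in a few lines. Everything below is a hypothetical illustration: the step counts, image resolution, and function names are invented, not taken from the project.

```python
# Hypothetical calibration sketch: map image-pixel strokes to stepper-motor
# steps for a two-axis Etch A Sketch rig. All numbers are made up.

STEPS_X_FULL = 6200      # steps to travel the full visible width (assumed)
STEPS_Y_FULL = 4400      # steps to travel the full visible height (assumed)
IMG_W, IMG_H = 310, 220  # resolution of the simplified input image (assumed)

def pixel_to_steps(x, y):
    """Scale a pixel coordinate into absolute step counts on each axis."""
    return round(x * STEPS_X_FULL / IMG_W), round(y * STEPS_Y_FULL / IMG_H)

def stroke_to_moves(points):
    """Turn a polyline of pixel points into relative (dx, dy) step moves."""
    steps = [pixel_to_steps(x, y) for x, y in points]
    return [(b[0] - a[0], b[1] - a[1]) for a, b in zip(steps, steps[1:])]

# Each (dx, dy) pair would then be fed to the two stepper drivers in turn.
moves = stroke_to_moves([(0, 0), (31, 0), (31, 22)])
```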


This is exactly how CNC machines work, the machines that cut wood, metal, and plastic with a mill or a laser. The Raspberry Pi works out a path connecting the identified points of the drawing so as to reproduce it as faithfully as possible.
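The path-finding step lends itself to a toy illustration. A greedy nearest-neighbor ordering, shown below, is one simple way to connect a drawing's points; it is a sketch of the general idea, not the project's actual planner.

```python
# Greedy nearest-neighbor ordering of drawing points, starting from the
# origin corner. Illustration only; real planners are more sophisticated.
import math

def order_points(points, start=(0, 0)):
    """Visit each point by repeatedly jumping to the closest unvisited one."""
    remaining = list(points)
    path, pos = [], start
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(pos, p))
        remaining.remove(nxt)
        path.append(nxt)
        pos = nxt
    return path

print(order_points([(5, 5), (1, 1), (4, 4)]))  # [(1, 1), (4, 4), (5, 5)]
```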

As usual, all the parts needed for the build, the code, and the full instructions are available on the creator's page for this truly magic Etch A Sketch. While I have doubts about the device's usefulness as such (though you could imagine alternative uses, like displaying messages or other content on the screen), it is a great teaching aid for explaining how many modern machines work: printers, CNC routers, or 3D printers, for example.

An Etch A Sketch controlled by a Raspberry Pi © MiniMachines.net. 2017

19 Sep 20:33

Curve-Fitting

Cauchy-Lorentz: "Something alarmingly mathematical is happening, and you should probably pause to Google my name and check what field I originally worked in."
12 Sep 06:16

Dropbox may be adding an e-signature feature, user survey indicates

by Sarah Perez

A recent user survey sent out by Dropbox confirms the company is considering the addition of an electronic signature feature to its Dropbox Professional product, which it refers to simply as “E-Signature from Dropbox.” The point of the survey is to solicit feedback about how likely users are to use such a product, how often, and if they believe it would add value to the Dropbox experience, among other things.

While a survey alone doesn’t confirm the feature is in the works, it does indicate how Dropbox is thinking about its professional product.

According to the company’s description of E-Signature, the feature would offer “a simple, intuitive electronic signature experience for you and your clients” where documents could be sent to others to sign in “just a few clicks.”

The clients also wouldn’t have to be Dropbox users to sign, the survey notes. And the product would offer updates on every step of the signature workflow, including notifications and alerts about the document being opened, whether the client had questions, and when the document was signed. After the signed document is returned, the user would receive the executed copy saved right in their Dropbox account for easy access, the company says.

In addition to soliciting general feedback about the product, Dropbox also asked survey respondents about their usage of other e-signature brands, like Adobe e-Sign, DocuSign, HelloSign, and PandaDoc, as well as their usage of other, more traditional methods, like in-person signing and documents sent by mail.

Given the numerous choices on the market today, it’s unclear if Dropbox will choose to move forward and launch such a product. However, if it did, the benefit of having its own E-Signature service would be its ability to be more tightly integrated into Dropbox’s overall product experience. It could also push more business users to upgrade from a basic consumer account to the Professional tier.

This kind of direct integration would make sense in the context of Dropbox’s business workflows. If, for instance, a company is working on a contract workflow, being able to move to the signature phase without changing context (or to share with a user who doesn’t use Dropbox) could add tremendous value over and above simply storing the document.

Companies like Dropbox have been looking for ways to move beyond pure storage to give customers the ability to collaborate and share that content, particularly without forcing them to leave the application to complete a job. This ability to do work without task switching is something that Dropbox has been working on with Dropbox Paper.

While it remains to be seen how Dropbox would implement such a solution, it might make more sense to partner with existing vendors or buy a smaller player than to build such functionality from scratch, although it's not clear from a simple survey what the company's ultimate goal would be at this point.

Dropbox has not yet responded to requests for comment.

06 Sep 16:29

BMW launches a personal voice assistant for its cars

by Frederic Lardinois

At TechCrunch Disrupt SF 2018, BMW today premiered its digital personal assistant for its cars, the aptly named BMW Intelligent Personal Assistant. But you won’t have to say “Hey, BMW Intelligent Personal Assistant” to wake it up. You can give it any name you want.

The announcement comes only a few weeks after BMW also launched its integration with Amazon's Alexa, but it's worth stressing that these are complementary technologies. BMW's own assistant is all about your car, while its partnerships with Amazon and Microsoft enable other functions that aren't directly related to your driving experience.

“BMW’s Personal Assistant gets to know you over time with each of your voice commands and by using your car,” said Dieter May, BMW’s senior vice president of Digital Products and Services. “It gets better and better every single day.”

Sticking with the precedents of Microsoft’s, Google’s and Amazon’s assistants, the voice of BMW’s assistant is female (though BMW often uses male names and pronouns in its press materials). Over time, it’ll surely get more voices.

So what can the BMW assistant do? Once you are in a compatible car, you’ll be able to control all of the standard in-car features by voice. Think navigation and climate control (“Hey John, I’m cold”), or check the tire pressure, oil level and other engine settings.

You also can have some more casual conversations (“Hey Charlie, what’s the meaning of life?”), but what’s maybe more important is that the assistant will continuously learn more about you. Right now, the assistant can remember your preferred settings, but over time, it’ll learn more and even proactively suggest changes. “For example, driving outside the city at night, the personal assistant could suggest you the BMW High Beam Assist,” May noted.

In addition, you’ll also be able to use the assistant to learn more about your car’s features, something that’s getting increasingly hard as cars become computers on wheels with ever-increasing complexity.

BMW built the assistant on top of Microsoft’s Azure cloud and conversational technologies. Azure has long been BMW’s preferred public cloud and the two companies have had a close relationship for years now. BMW has, after all, also integrated some support for accessing Office 365 files and using Skype for Business in its cars, with support for Cortana likely coming soon, too.

That all sounds a bit confusing, though. Why have three assistants in the car? All that “Hey Alexa,” “Hey Charlie,” “Hey Cortana” is bound to get confusing. But BMW argues that each one has a specialty: Alexa may be for shopping, Cortana for getting work done, and BMW's own assistant for your car. And if everything else fails, BMW's existing concierge service is still there and lets you talk to a human.

The assistant will be available in a basic version with support for 23 languages and markets starting in March 2019. In the U.S., Germany, the U.K., Italy, France, Spain, Switzerland, Austria, Brazil and Japan, the service will offer additional features, like weather search, point-of-interest search and access to music, also in March 2019. In those markets, the assistant will feature a more natural voice as well. In China, this expanded version will go live a bit later and is currently scheduled for May 2019. It will roll out to cars running BMW Operating System 7.0 as part of the company's Live Cockpit Professional program.

If you order a BMW 3 Series, starting in November, the assistant will be available to you right away and included for the first three years of your ownership. For new X5, Z4 and 8 Series models, BMW Assistant support will arrive in the form of an over-the-air software upgrade starting in March 2019.

04 Sep 21:02

IFA 2018 – Deutsche Telekom unveils its Magenta personal assistant

by Valentine De Brye
 © Deutsche Telekom As the Berlin trade show draws to a close, German carrier Deutsche Telekom has unveiled its first home assistant, dubbed "Magenta," to the general public. It aims to compete with the sector's heavyweights, namely Google and Amazon and their respective Home and Echo assistants. The device looks more or less...
31 Aug 14:10

At IFA, voice assistants aim to make themselves indispensable

by La rédaction

A prime showcase for this artificial intelligence software: more than 100 million smart speakers will have been sold by the end of 2018.

The post At IFA, voice assistants aim to make themselves indispensable appeared first on FrenchWeb.fr.

31 Aug 14:09

Huawei’s new Kirin 980 chip is so fast, it can probably slow down time

by Andy Boxall

Huawei has announced the Kirin 980 mobile processor, which promises a considerable increase in speed and efficiency over not only the old Kirin 970, but the Qualcomm Snapdragon 845 as well.

The post Huawei’s new Kirin 980 chip is so fast, it can probably slow down time appeared first on Digital Trends.

31 Aug 06:10

The Google Assistant is now bilingual 

by Frederic Lardinois

The Google Assistant just got more useful for multilingual families. Starting today, you’ll be able to set up two languages in the Google Home app and the Assistant on your phone and Google Home will then happily react to your commands in both English and Spanish, for example.

Today’s announcement doesn’t exactly come as a surprise, given that Google announced at its I/O developer conference earlier this year that it was working on this feature. It’s nice to see that this year, Google is rolling out its I/O announcements well before next year’s event. That hasn’t always been the case in the past.

Currently, the Assistant is only bilingual and it still has a few languages to learn. For the time being, you'll be able to set up any pair drawn from English, German, French, Spanish, Italian and Japanese. More pairs are coming in the future, and Google also says it is working on trilingual support.

Google tells me this feature will work with all Assistant surfaces that support the languages you have selected. That’s basically all phones and smart speakers with the Assistant, but not the new smart displays, as they only support English right now.

While this may sound like an easy feature to implement, Google notes this was a multi-year effort. To build a system like this, you have to be able to identify multiple languages, understand them and then make sure you present the right experience to the user. And you have to do all of this within a few seconds.

Google says its language identification model (LangID) can now distinguish between 2,000 language pairs. With that in place, the company’s researchers then had to build a system that could turn spoken queries into actionable results in all supported languages. “When the user stops speaking, the model has not only determined what language was being spoken, but also what was said,” Google’s VP Johan Schalkwyk and Google Speech engineer Lopez Moreno write in today’s announcement. “Of course, this process requires a sophisticated architecture that comes with an increased processing cost and the possibility of introducing unnecessary latency.”
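Google's LangID model is proprietary, but the basic idea of language identification can be illustrated with a toy character-trigram classifier. The snippet below is purely illustrative: the tiny "profiles" are placeholder phrases, not real training data, and production systems are vastly more sophisticated.

```python
# Toy character-trigram language identifier, illustrating the general idea
# behind systems like LangID. The training snippets are invented placeholders.
from collections import Counter

def trigrams(text):
    """Count overlapping character trigrams, with space padding at the ends."""
    t = f"  {text.lower()}  "
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

PROFILES = {
    "en": trigrams("the quick brown fox jumps over the lazy dog what is the weather"),
    "es": trigrams("el rapido zorro salta sobre el perro que tiempo hace hoy dime"),
}

def identify(text):
    """Score each language by trigram overlap and return the best match."""
    query = trigrams(text)
    def overlap(profile):
        return sum(min(n, profile[g]) for g, n in query.items())
    return max(PROFILES, key=lambda lang: overlap(PROFILES[lang]))
```

Real multilingual assistants also have to do this mid-utterance and within a strict latency budget, which is where the engineering difficulty Google describes comes in.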

If you are in Germany, France or the U.K., you’ll now also be able to use the bilingual assistant on a Google Home Max. That high-end version of the Google Home family is going on sale in those countries today.

In addition, Google also today announced that a number of new devices will soon support the Assistant, including the tado° thermostats, a number of new security and smart home hubs (though not, of course, Amazon’s own Ring Alarm), smart bulbs and appliances, including the iRobot Roomba 980, 896 and 676 vacuums. Who wants to have to push a button on a vacuum, after all.

27 Aug 06:16

Deepfakes for dancing: you can now use AI to fake those dance moves you always wanted

by James Vincent

Artificial intelligence is proving to be a very capable tool when it comes to manipulating videos of people. Face-swapping deepfakes have been the most visible example, but new applications are being found every day. The latest? Call it deepfakes for dancing. It uses AI to read someone’s dance moves and copy them on to a target body.

The actual science here was done by a quartet of researchers from UC Berkeley. As they describe in a paper posted on arXiv, their system comprises a number of discrete steps. First, a video of the target is recorded, and a sub-program turns their movements into a stick figure. (Quite a lot of video is needed to get a good-quality transfer: around 20 minutes of footage at 120 frames per second.)...

Continue reading…

26 Aug 15:52

When Tim Burton Meets Superheroes [Gallery]

by Geeks are Sexy

We all know Tim Burton for his distinctive dark and gothic style, but apart from his Batman movies (1989 and 1992), I've never seen the man create art that focuses on superheroes. Now, thanks to artist Andrew Tarusov, we know what superheroes would look like drawn in the style of “The Nightmare Before Christmas.” Check it out!


And here’s a video presenting some of the artist’s Disney-themed work:

[Source: Andrew Tarusov]

The post When Tim Burton Meets Superheroes [Gallery] appeared first on Geeks are Sexy Technology News.

25 Aug 21:42

Avengers Marvel Legends Series Infinity Gauntlet Articulated Electronic Fist

by Geeks are Sexy

Ever felt like you needed so much control over the Universe that with a swift snap of your fingers, you could wipe out half its population (dated that guy once…)? Well now you can, thanks to this amazing electronic, articulated Infinity Gauntlet! Let your inner Thanos rage out while you imagine your enemies being vaporized into thin air!

- Articulated fingers with fist-lock display mode
- Movie-inspired sound effects
- Pulsating stone glow light effects
- Premium roleplay articulated electronic fist
- Collector-inspired attention to detail

[Avengers Marvel Legends Series Infinity Gauntlet Articulated Electronic Fist]

The post Avengers Marvel Legends Series Infinity Gauntlet Articulated Electronic Fist appeared first on Geeks are Sexy Technology News.

25 Aug 21:33

L'Oréal et ModiFace s'allient à Facebook dans la réalité augmentée

ModiFace, la société de réalité augmentée et d'intelligence artificielle récemment acquise par le groupe L'Oréal, annonce une collaboration de long terme avec Facebook afin de créer de nouvelles expériences de réalité augmentée intégrées dans "Facebook Camera".
17 Aug 17:17

Tesla’s Investors Make the Company. They May Also Ruin Its CEO.

by Victor Tangermann
In an interview with the New York Times, Elon Musk often appeared to be "choked up." Are short-sellers really getting to Tesla's CEO?

Elon Musk is unraveling before our very eyes. The sleepless nights he’s spent at the Tesla Gigafactory operating woefully behind schedule; his rather unpredictable behavior on Twitter that’s brought him, and his company, under federal scrutiny. It’s all making the CEO of one of the world’s most innovative car companies more of a liability than an asset.

Musk is giving us a better picture of the toll this has all taken on him. In a recent interview with the New York Times, Musk was frequently “choked up” as he told reporters how his personal life has suffered because of his (admittedly ambitious) work. “There were times when I didn’t leave the factory for three or four days — days when I didn’t go outside,” he said on an hour-long call. “This has really come at the expense of seeing my kids. And seeing friends.” And he’s still not getting enough sleep. “It is often a choice of no sleep or Ambien.”

The interview comes after a very rough couple of weeks for both Musk and his electric car company. To recap, Musk tweeted about plans for taking Tesla private at $420 a share (about a 20 percent bump over stock prices at the time), with “funding secured.”

That funding was supposed to come from a Saudi Arabian sovereign wealth fund, but Reuters revealed that the funding was anything but secure — the Saudis have “shown no interest so far in financing Tesla Inc,” despite acquiring a 5 percent stake earlier this year.

To add to Musk’s troubles, his rash tweets — that even surprised the company’s board and made stocks go crazy before they were frozen — landed him in hot water with the U.S. Securities and Exchange Commission, which is formally investigating whether Musk manipulated markets illegally.

Musk, though, pointed to a single source of all his suffering: the short-sellers. You may have heard of “buy low, sell high.” Short-selling reverses that approach: you sell high, then buy low. You borrow shares through a broker and sell them at the current price, then buy them back after the stock price falls and return them, pocketing the difference. It only works when a company’s stock falls, though. So short-sellers only benefit when the company, and Musk himself, fails.
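The mechanics reduce to simple arithmetic. A hypothetical sketch with invented numbers (the $420 figure merely echoes Musk's proposed buyout price):

```python
# Short-sale arithmetic with made-up numbers: borrow shares, sell high,
# buy back lower (or higher), return the shares, keep the difference.
def short_pnl(shares, sell_price, buyback_price):
    """Profit (or loss, if negative) on a short position, ignoring fees."""
    return shares * (sell_price - buyback_price)

print(short_pnl(100, 420.0, 350.0))  # 7000.0: the stock fell, the short wins
print(short_pnl(100, 420.0, 480.0))  # -6000.0: the stock rose, the short loses
```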

In his Times interview, Musk says he’s been suffering “at least a few months of extreme torture from the short-sellers, who are desperately pushing a narrative that will possibly result in Tesla’s destruction.”

Short-sellers have long been a thorn in Musk’s side. It’s not clear why they’re bothering Musk now more than before, but it might be because Tesla’s difficult year (Model 3 production delays and a Model X crash have weighed on the stock since March) drew more short-sellers to Tesla, or because Musk’s own activities (say, a positive quarterly earnings call) have cost the short-sellers huge sums. And his often emotional response to short-sellers hasn’t encouraged them to back off.

Musk isn’t taking their attacks sitting down, though. Taking Tesla private would finally put an end to short-sellers — you can’t freely sell and buy private company stock — and allow Musk to finally get some much needed sleep.

But it’s not clear that Tesla will privatize anytime soon, and it won’t be easy if it does happen — the sheer uncertainty and SEC investigation are bound to scare off investors.

One way to pull it off? Hiring a number-two executive who could take some of that pressure off his shoulders. An unstable Musk who “alternated between laughter and tears” might not be the most attractive to investors. But an even-keeled, level-headed, well-rested second-in-command could help Musk.

While Musk says there’s “no active search right now” according to the interview, Tesla has tried to poach high-ranking executives in the past, including Facebook’s chief operating officer Sheryl Sandberg.

For now, it looks like Musk is staying in his job, despite speculation earlier this year that he’d be ousted (and, apparently, his own desire to leave): “If you have anyone who can do a better job, please let me know. They can have the job. Is there someone who can do the job better? They can have the reins right now,” Musk tells the Times.

But there are other ways Musk could rest easier at night. He could pay less attention to those pesky short-sellers. Or he could heed the Tesla board and stop tweeting.

The post Tesla’s Investors Make the Company. They May Also Ruin Its CEO. appeared first on Futurism.

15 Aug 12:26

This robot maintains tender, unnerving eye contact

by Devin Coldewey

Humans already find it unnerving enough when extremely alien-looking robots are kicked and interfered with, so one can only imagine how much worse it will be when they make unbroken eye contact and mirror your expressions while you heap abuse on them. This is the future we have selected.

The Simulative Emotional Expression Robot, or SEER, was on display at SIGGRAPH here in Vancouver, and it’s definitely an experience. The robot, a creation of Takayuki Todo, is a small humanoid head and neck that responds to the nearest person by making eye contact and imitating their expression.

It doesn’t sound like much, but it’s pretty complex to execute well, which, despite a few glitches, SEER managed to do.

At present it alternates between two modes: imitative and eye contact. Both, of course, rely on a nearby (or, one can imagine, built-in) camera that recognizes and tracks the features of your face in real time.

In imitative mode the positions of the viewer’s eyebrows and eyelids, and the position of their head, are mirrored by SEER. It’s not perfect — it occasionally freaks out or vibrates because of noisy face data — but when it worked it managed rather a good version of what I was giving it. Real humans are more expressive, naturally, but this little face with its creepily realistic eyes plunged deeply into the uncanny valley and nearly climbed the far side.
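That jitter from noisy face data is a classic tracking problem. One common remedy, shown below as a guess at the kind of approach such a robot might use (not SEER's actual code), is to low-pass filter each tracked value before it drives a servo:

```python
# Exponential moving average filter to damp jittery face-tracking values
# before they drive a servo. Illustrative sketch, not SEER's actual code.
class ExponentialSmoother:
    """Higher alpha tracks the input faster but smooths less."""
    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.value = None

    def update(self, sample):
        if self.value is None:
            self.value = sample       # first reading: no history yet
        else:
            self.value += self.alpha * (sample - self.value)
        return self.value

eyebrow = ExponentialSmoother(alpha=0.5)
noisy = [10.0, 30.0, 10.0, 30.0]      # jittery eyebrow-height readings
smooth = [eyebrow.update(s) for s in noisy]
print(smooth)  # [10.0, 20.0, 15.0, 22.5] -- jitter damped before the servo
```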

Eye contact mode has the robot moving on its own while, as you might guess, making uninterrupted eye contact with whoever is nearest. It’s a bit creepy, but not in the way that some robots are — when you’re looked at by inadequately modeled faces, it just feels like bad VFX. In this case it was more the surprising amount of empathy you suddenly feel for this little machine.

That’s largely due to the delicate, childlike, neutral sculpting of the face and highly realistic eyes. If an Amazon Echo had those eyes, you’d never forget it was listening to everything you say. You might even tell it your problems.

This is just an art project for now, but the tech behind it is definitely the kind of thing you can expect to be integrated with virtual assistants and the like in the near future. Whether that’s a good thing or a bad one I guess we’ll find out together.

15 Aug 12:25

This Lego ISD Tyrant Star Destroyer Has Over 35,000 Pieces [Pics]

by Geeks are Sexy

Imgur user doomhandle built this gigantic ISD Tyrant Star Destroyer using approximately 35,000 Lego bricks! The thing is over 56 inches (1.4 m) long and weighs about 70 lbs (32 kg). You can see a comparison between doomhandle’s model and the official LEGO Imperial Star Destroyer set (75055) above.

Here’s a video walkthrough showing off the ship’s interior levels, crew, etc.:

[Source: doomhandle on Imgur]

The post This Lego ISD Tyrant Star Destroyer Has Over 35,000 Pieces [Pics] appeared first on Geeks are Sexy Technology News.

14 Aug 13:49

Amazon Dash-powered WePlenish smart pantry device orders snacks, coffee, more

by Clayton Moore

A Florida-based startup has started taking pre-orders for its new smart pantry, the WePlenish Java Smart Container, which can automatically order snacks and coffee when supplies start to run low.

The post Amazon Dash-powered WePlenish smart pantry device orders snacks, coffee, more appeared first on Digital Trends.

10 Aug 19:26

Dog Thoughts Translator

by staff

Deepen the bond you have with your canine companion with this dog thoughts translator device. It analyzes animal thought patterns to translate their brain activity into human language. It’s mostly thoughts of eating cat poop and chasing squirrels though.

Check it out

$300.00

10 Aug 16:15

An unexplained phenomenon

by CommitStrip

10 Aug 12:38

Ibuki is the 10-year-old robot child that will haunt your dreams

by John Biggs

Professor Hiroshi Ishiguro makes robots in Osaka. His latest robot, Ibuki, is one for the nightmare catalog: It’s a robotic 10-year-old boy that can move on little tank treads and has a soft rubbery face and hands.

The robot has complete vision routines that can scan for faces, and it has a sort of half-track system for moving around. It has “involuntary” motions like blinking and little head bobs, but it is little more than a proof of concept right now, especially considering its weird robo-skull is transparent.

“An Intelligent Robot Infrastructure is an interaction-based infrastructure. By interacting with robots, people can establish nonverbal communications with the artificial systems. That is, the purpose of a robot is to exist as a partner and to have valuable interactions with people,” wrote Ishiguro. “Our objective is to develop technologies for the new generation information infrastructures based on Computer Vision, Robotics and Artificial Intelligence.”

Ishiguro is a roboticist who plays on the borders of humanity. He made a literal copy of himself in 2010. His current robots are even more realistic, and Ibuki’s questing face and delicate hands are really very cool. That said, expect those soft rubber hands to one day close around your throat when the robots rise up to take back what is theirs. Good luck, humans!

09 Aug 08:09

This robot uses AI to find Waldo, thereby ruining Where’s Waldo

by Dami Lee

If you’re totally stumped on a page of Where’s Waldo and ready to file a missing persons report, you’re in luck. Now there’s a robot called There’s Waldo that’ll find him for you, complete with a silicone hand that points him out.

Built by creative agency Redpepper, There’s Waldo zeroes in on Waldo with sniper-like accuracy. The metal robotic arm is a Raspberry Pi-controlled uArm Swift Pro equipped with a Vision Camera Kit that allows for facial recognition. The camera takes a photo of the page, and the system then uses OpenCV to find the possible Waldo faces in the photo. The faces are then sent to be analyzed by Google’s AutoML Vision service, which has been trained on photos of Waldo. If the robot determines a match with 95...
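The final selection step, as described, amounts to keeping candidates whose classifier score clears a confidence threshold and pointing at the best one. A hypothetical sketch (scores and coordinates are invented):

```python
# Keep candidate faces whose classifier score clears a confidence
# threshold, then pick the best one for the arm to point at.
# Scores and bounding-box coordinates here are invented placeholders.
def best_waldo(candidates, threshold=0.95):
    """candidates: list of (score, (x, y)) pairs from the classifier."""
    hits = [c for c in candidates if c[0] >= threshold]
    return max(hits, default=None)  # highest-scoring hit, or None

faces = [(0.12, (40, 80)), (0.97, (310, 55)), (0.96, (120, 200))]
print(best_waldo(faces))  # (0.97, (310, 55)) -- the arm would point here
```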

Continue reading…

03 Aug 20:18

Want an extra arm? A third thumb? Check out these awesome robotic appendages

by Luke Dormehl

Ever wished for a pair of extra arms, a third leg, or a sixth finger on each hand? Check out this list of amazing robo-prosthetics which promise to take your multitasking to the next level.

The post Want an extra arm? A third thumb? Check out these awesome robotic appendages appeared first on Digital Trends.

03 Aug 20:13

Musical.ly is no more!

by Morgan Fromentin

You may have already seen videos of people dancing and singing just about anywhere to a well-known song. That's what the Musical.ly service offered. Offered, past tense, because the service has disappeared!

Read more

02 Aug 13:05

Google Home’s too boring? You want Gatebox’s cute virtual character in your life

by Andy Boxall

Do you want to share your life with a virtual character? If so, then your dream has taken one step closer to reality thanks to Gatebox. Like Alexa, it will control your smart home; but the character that lives inside the Gatebox has the ability to recognize, interact, and even celebrate with you.

The post Google Home’s too boring? You want Gatebox’s cute virtual character in your life appeared first on Digital Trends.

02 Aug 12:55

Master Replicas Group’s HAL-9000 Bluetooth speaker is now on Indiegogo

by Andrew Liptak

2018 marks the 50th anniversary of Stanley Kubrick’s classic science fiction film 2001: A Space Odyssey. To commemorate the occasion, Master Replicas Group has opened preorders for its replica of the film’s iconic computer HAL-9000.

The company announced earlier this year that it would produce the replica, which would also double as a speaker. At the time, Master Replicas Group CEO Steve Dymszo told The Verge that the prop would use Amazon’s Echo technology to turn it into a functional virtual assistant, and the company would begin taking preorders in April, with shipping occurring this fall. That timeline seems to have slipped: Master Replicas Group says that it redesigned the entire device several months ago. With preorders starting...

Continue reading…

30 Jul 07:09

A rare Magic: The Gathering card just sold for more than a Porsche

by Bruce Brown

An Alpha Black Lotus game card just sold on eBay for $88,000, proving that collectible game cards from Magic: The Gathering continue to appreciate in value. The card's scarcity and Gem Mint condition led to its record-breaking sale price.

The post A rare Magic: The Gathering card just sold for more than a Porsche appeared first on Digital Trends.

27 Jul 17:30

Magic Leap details what its mixed reality OS will look like

by Lucas Matney

Magic Leap just updated its developer documentation, and a host of new details and imagery are being spread around on Reddit and Twitter, sharing more specifics on what the company’s Lumin OS will look like on its upcoming Magic Leap One device.

It’s mostly a large heaping of nitty-gritty details, but we also get a clearer view of how Magic Leap sees interactions with its product and the directions developers are being encouraged to move in. Worth noting off the bat that these gifs/images appear to be mock-ups or screenshots rather than images shot directly through Magic Leap tech.

Alright, first, this is what the Magic Leap One home screen will apparently look like. It's worth noting that Magic Leap appears to have some of its own stock apps on the device, which was completely expected but hasn't been discussed much.

Also worth noting: Magic Leap's operating system by and large looks like most other operating systems. The company seems well aware that flat interfaces are far easier to navigate, so you won't be engaging with 3D assets just for the sake of doing so.

Here’s a look at a media gallery app on Magic Leap One.

Here’s a look at an avatar system.

The company seems to be distinguishing between two basic app types for developers: immersive apps and landscape apps. Landscape apps, like the one in the image above, appear to be Magic Leap's version of 2D: interfaces are mostly flat but have some depth, and they live inside a box called a prism that fits spatially into your environment. It seems you'll be able to have several of these running simultaneously.

Immersive apps, on the other hand, like the game title Dr. Grordbort's, which Magic Leap has been teasing for years, respond to the geometry of the space you are in, hence the name.

Here’s a video of an immersive experience in action.

Moving beyond apps, the company also had a good deal to share about how you interact with what’s happening in the headset.

We got a look at some hand controls and what that may look like.

When it comes to text input, an area where AR/VR systems have struggled, it looks like you'll have a reasonable set of options. Magic Leap will have a companion smartphone app you can type into, you can connect a Bluetooth keyboard, and there will also be an on-screen keyboard with dictation capabilities.

One of the big highlights of Magic Leap's tech is that you'll be able to share perspectives of these apps in a multi-player experience, which we now know is called “casting.” Apps that use this feature will have a button you can press to share an experience with a contact. No details yet on what the setup process looks like beyond that.

Those are probably the most interesting insights, though there’s plenty of other stuff in the Creator Portal. Here are a few other images to keep you going.


It really seems like the startup is finally getting ready to showcase something. The company says that its device will begin shipping this summer and is already in developer hands. Based on what Magic Leap has shown here, the interface looks like it’ll feel very familiar as opposed to some other AR interfaces that have adopted a pretty heavy-handed futuristic look.

27 Jul 16:23

Animatronic Puppet Takes Cues From Animation Software

by Steven Dufresne

Lip syncing for computer-animated characters has long been simplified: you draw a set of lip shapes for the vowels and other sounds your character makes and let the computer interpolate how to get from one shape to the next. But with physical, real-world puppets, all those movements have to be done manually, frame by frame. Or do they?

Billy Whiskers: animatronic puppet

Stop motion animator and maker/hacker [James Wilkinson] is working on a project involving a real-world furry cat character called Billy Whiskers and decided that Billy’s lips would be moved one frame at a time using servo motors under computer control while [James] moves the rest of the body manually.

He toyed around with a number of approaches for making the lip mechanism before coming up with one that worked the way he wanted. The lips are shaped using guitar wire soldered to other wires going to servos further back in the head. Altogether there are four servos for the lips and one more for the jaw. There isn’t much sideways movement, but it’s enough, and the brain fills in the rest.

On the software side, he borrows heavily from the tools used for lip syncing computer-drawn characters. He created virtual versions of the five servo motors in Adobe Animate and manipulates them to define the different lip shapes. Animate then does the interpolation between the different shapes, producing the servo positions needed for each frame. He uses an AS3 script to send those positions off to an Arduino. An Arduino sketch then uses the Firmata library to receive the positions and move the servos. The result is entirely convincing as you can see in the trailer below. We’ve also included a video which summarizes the iterations he went through to get to the finished Billy Whiskers or just check out his detailed website.
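The interpolation step that Animate performs can be sketched in a few lines. This is a hedged illustration with made-up servo angles and function names, not [James]’s actual project code:

```python
# Sketch of keyframe interpolation for servo-driven lips, assuming
# simple linear easing between lip shapes. Angles are illustrative.

def interpolate_frames(shape_a, shape_b, n_frames):
    """Linearly interpolate servo angles between two lip shapes.

    shape_a, shape_b: lists of servo angles in degrees, one entry per
    servo (four lip servos plus one jaw servo in Billy's case).
    Returns one list of angles per animation frame.
    """
    frames = []
    for i in range(n_frames):
        t = i / (n_frames - 1)  # 0.0 at shape_a, 1.0 at shape_b
        frames.append([a + (b - a) * t for a, b in zip(shape_a, shape_b)])
    return frames

# A closed "M" shape easing into an open "Ah" shape over 5 frames
closed = [90, 90, 90, 90, 90]
open_ah = [60, 120, 70, 110, 45]
for frame in interpolate_frames(closed, open_ah, 5):
    print(frame)
```

In the real project, Animate computes these in-between positions and the AS3 script streams each frame’s values over serial to the Arduino, where Firmata maps them onto the physical servos.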

[James]’s work shows that there are many ways to do stop motion animation, which is perhaps part of what makes it so much fun. One of those ways is to 3D print a separate object for each character shape. Another is to make paper cutouts and move them around, which is what [Terry Gilliam] did for the Monty Python movies. And then there’s what many of us did when we first got our hands on a camera: move random objects around on our parents’ kitchen table and shoot them one frame at a time.

26 Jul 19:45

Amazon Rekognition Falsely Matched 28 Members of Congress to Mugshots

by Kristin Houser
Amazon's facial recognition software is far from perfect, an ACLU experiment shows. Maybe Amazon should stop marketing it to government agencies.

YOU ARE NOT A MATCH. The American Civil Liberties Union (ACLU) just spent $12.33 to test a system that could quite literally cost people their lives. On Thursday, the nonprofit organization, which focuses on preserving the rights of U.S. citizens, published a blog post detailing its test of Rekognition, a facial identification tool developed and sold by Amazon.

Using Rekognition’s default setting of 80 percent confidence (meaning the system was 80 percent certain it was correct when it signaled a match), the ACLU scanned a database of 25,000 publicly available mugshots looking to match them to photos of every sitting representative in Congress, in both the Senate and the House of Representatives.

Rekognition matched 28 Congresspeople to mugshots. It was wrong, but it found matches anyway.

THE POLICE AND P.O.C. Not only did Rekognition mistakenly believe that those 28 Congresspeople were the same people in the mugshots, but the people it wrongfully matched were disproportionately people of color: while people of color make up just 20 percent of Congress, according to the ACLU, they accounted for 39 percent of the false matches.

“People of color are already disproportionately harmed by police practices, and it’s easy to see how Rekognition could exacerbate that,” wrote the ACLU in its post. “An identification — whether accurate or not — could cost people their freedom or even their lives.”

AMAZON’S RESPONSE. An Amazon spokesperson told The Verge that poor calibration was likely the reason Rekognition falsely matched so many members of Congress. “While 80 percent confidence is an acceptable threshold for photos of hot dogs, chairs, animals, or other social media use cases, it wouldn’t be appropriate for identifying individuals with a reasonable level of certainty,” said the spokesperson. They said Amazon recommends at least a 95 percent confidence threshold for any situation where a match might have significant consequences, such as when used by law enforcement agencies.
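The gap between the 80 percent default and the 95 percent bar Amazon recommends comes down to a simple filter over similarity scores. A minimal sketch of that thresholding logic (the scores and mugshot names are made up, and this is not the Rekognition API itself):

```python
# Illustrative sketch of how a confidence threshold gates face matches.
# The scores below are invented; only the filtering logic is the point.

def matches_above(scores, threshold):
    """Return (name, confidence) pairs at or above the threshold."""
    return [(name, conf) for name, conf in scores if conf >= threshold]

# Hypothetical similarity scores between one portrait and some mugshots
scores = [("mugshot_017", 0.83), ("mugshot_102", 0.91), ("mugshot_244", 0.78)]

print(matches_above(scores, 0.80))  # the 80% default lets two matches through
print(matches_above(scores, 0.95))  # the recommended 95% bar rejects all three
```

Nothing about the underlying scores changes between the two calls; the threshold alone decides whether a borderline score is reported as a match, which is why the default setting matters so much in practice.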

HALT! While anyone can use the system relatively cheaply, as the ACLU did (Rekognition, according to its website, “charges you only for the images processed, minutes of video processed, and faces stored”), Amazon is actively marketing Rekognition to government and law enforcement agencies. Several are already using the service.

The ACLU isn’t the only organization actively petitioning against this use; Amazon shareholders and employees have urged the company to stop providing Rekognition to government agencies. So have dozens of civil rights groups and tens of thousands of members of the public.

So far, Amazon has not indicated that it plans to comply with these requests. But perhaps, if members of Congress see just how flawed the system really is, they’ll be compelled to take action, placing a halt on any law enforcement use of facial recognition software, as the ACLU requests in its blog post.

READ MORE: Amazon’s Face Recognition Falsely Matched 28 Members of Congress With Mugshots [ACLU]

More on Rekognition: Police Surveillance Is Getting a Helping Hand From…Amazon!

The post Amazon Rekognition Falsely Matched 28 Members of Congress to Mugshots appeared first on Futurism.

24 Jul 18:57

StretchSense Sensors for Smart Clothing

by Charbax

StretchSense creates soft polymer sensing arrays that serve as stretchy sensors for smart clothing. Each is a seven-layer capacitive sensor that measures any movement. There are over 130 people in the factory, which can produce any garment size with high consistency and low cost.
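A capacitive stretch sensor reads movement as a change in capacitance. A hedged sketch of the conversion, assuming an ideal incompressible elastomer under uniaxial stretch, for which capacitance scales linearly with the stretch ratio; the baseline capacitance and readings below are made up, not StretchSense specifications:

```python
# For an ideal incompressible elastomer capacitor stretched along one
# axis, capacitance grows linearly with the stretch ratio (C = C0 * lambda),
# so engineering strain can be read straight off the capacitance.

def strain_from_capacitance(c_reading_pf, c_rest_pf):
    """Return engineering strain (0.0 = rest length) from a capacitance reading."""
    return c_reading_pf / c_rest_pf - 1.0

c_rest = 200.0  # pF at rest (illustrative baseline)
for c in (200.0, 250.0, 300.0):
    print(f"{c:.0f} pF -> {strain_from_capacitance(c, c_rest):.0%} strain")
```

Real sensors need per-unit calibration and temperature compensation, but the linear capacitance-to-strain relationship is what makes this class of sensor attractive for garments.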

Interviewed at the IDTechEx Show! USA: http://IDTechEx.com/USA

24 Jul 18:57

First Graphene flame resistant coating

by Charbax

IDTechEx Research Director Dr Khasha Ghaffarzadeh interviews Dr Andy Goodwin, Advanced Materials Advisor, and Warwick Grigor, Non-Executive Chairman of First Graphene, about their fire retardant additive at the IDTechEx Show! Europe 2018.

First Graphene is an advanced materials company seeking to position itself in the lowest-cost decile of global graphene suppliers. It has developed an environmentally sound and safe method of converting its supplies of ultra-high-grade Sri Lankan graphite into the lowest-cost, highest-quality graphene, in bulk quantities. In doing so it is addressing the three greatest impediments to the commercialisation of graphene: reliable quality, at realistic prices, in sufficient volumes to facilitate the development of applications in modern materials, energy storage devices, coatings and polymers. It aims to use these competitive advantages to access new technologies and processes and in turn gain maximum leverage across the entire graphene supply chain, from sourcing the raw material to end use, with development of associated intellectual property for licencing and sales.

For more information about First Graphene, visit

Learn more about the IDTechEx Show! Berlin: http://IDTechEx.com/Europe

Learn more about the IDTechEx Show! USA: http://IDTechEx.com/USA

Printed Electronics World - The source for global news on printed, organic and flexible electronics, interpreted by experts: http://printedelectronicsworld.com