Shared posts

17 Dec 21:26

Dized: a solution for dictating board game rules

by Andy
What's more fun than playing board games with family or friends for an afternoon or an evening? There's always one less enjoyable part, though: the moment when you have to explain the rules of the game by reading through a long manual. Your fellow players are not always […]
16 Dec 12:56

Google kills its Tango augmented reality platform, shifting focus to ARCore

by Lucas Matney
 Google announced today that it’s shutting down its high-end smartphone augmented reality platform, Tango, in order to focus on the more mass market ARCore product. The company had already confirmed this much to us when they announced ARCore in August, but now we have an official timeline for Tango’s demise. Read More
12 Dec 15:49

Descript gets $5M to make sound editing like a word document

by Matthew Lynley
 Right before jumping on the phone Friday afternoon, Andrew Mason, who at the time ran a walking-tour startup called Detour and previously ran Groupon, was hand-correcting a transcription of a speech by John F. Kennedy, produced by new software he and his team had built in-house. But Descript, Mason's new startup spun out from Detour, isn't designed to just… Read More
11 Dec 22:46

A stretchable battery, powered by sweat, could revolutionize wearables

by Luke Dormehl

Scientists at Binghamton University in New York have developed a breakthrough stretchy, textile-based, bacteria-powered bio-battery that could one day power wearable devices.

The post A stretchable battery, powered by sweat, could revolutionize wearables appeared first on Digital Trends.

10 Dec 14:30

Spice up your dice with Bluetooth

by Sven Gregori

There’s no shortage of projects that replace your regular board game dice with an electronic version of them, bringing digital features into the real world. [Jean] however goes the other way around and brings the real world into the digital one with his Bluetooth equipped electronic dice.

These dice are built around a Simblee module that houses the Bluetooth LE stack and antenna along with an ARM Cortex-M0 on a single chip. Adding an accelerometer for side detection and a bunch of LEDs to indicate the detected side, [Jean] put it all on a flex PCB wrapped around the battery, and into a 3D printed case that is just slightly bigger than your standard die.
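
Side detection from an accelerometer generally works by finding which axis gravity dominates: the axis with the largest reading points down, and its sign tells you which of the two opposing faces is up. A minimal Python sketch of the idea (the face numbering here is an arbitrary assumption, not [Jean]'s firmware):

```python
# Toy side detection for a cubic die. On a standard die, opposite faces
# sum to 7, so each axis maps to one face per sign.
FACE_MAP = {
    ("x", 1): 2, ("x", -1): 5,
    ("y", 1): 3, ("y", -1): 4,
    ("z", 1): 1, ("z", -1): 6,
}

def face_up(ax, ay, az):
    """Return the face pointing up from accelerometer readings in g."""
    # Gravity (~1 g) dominates the axis aligned with "down".
    axis, value = max(zip("xyz", (ax, ay, az)), key=lambda p: abs(p[1]))
    return FACE_MAP[(axis, 1 if value > 0 else -1)]

print(face_up(0.02, -0.03, 0.98))  # die resting flat, +z up -> 1
```

The real die would also need to wait for the readings to settle before declaring a throw finished, but the dominant-axis test is the core of it.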

While they'll work as-is as simple LED-lit replacements for your regular dice, their biggest value is obviously the added Bluetooth functionality. In his project introduction video, placed after the break, [Jean] shows a proof-of-concept game of Yahtzee that displays the thrown dice values on his mobile phone. Taking it further, he also demonstrates how special purposes and custom behavior can be mapped to selected dice, and talks about his ideas for the future.

After seeing the inside of the die, it seems evident that a Bluetooth-powered D20 will unfortunately remain a dream for a while longer, unless, of course, you take this giant one as inspiration for the dimensions.

Filed under: Wireless Hacks
08 Dec 22:46

Apple is reportedly buying Shazam

by Micah Singleton

Apple is finalizing a deal to acquire Shazam, the app that lets you identify songs, movies, and TV shows from an audio clip, according to TechCrunch. The deal is reportedly for $400 million, according to Recode, which also confirmed the news.

For Apple, the obvious benefit of acquiring Shazam is the company’s music and sound recognition technologies. It will also save some money on the commissions Apple pays Shazam for sending users to its iTunes Store to buy content, which made up the majority of Shazam’s revenue in 2016, and drove 10 percent of all digital download sales, according to The Wall Street Journal.

A side benefit: if Apple decides to shut down the app, it would hurt competing streaming services like Spotify and Google Play...

Continue reading…

07 Dec 20:59

Kinecting to a Post-Kinect World

by Greg Duncan

Forever Friend of the Gallery, Vangos Pterneas, has some great advice on what to do in a Post-Kinect world.

Kinect is dead! Now what?


BREAKING NEWS! Microsoft has officially killed the Kinect.

Today, Alex Kipman (creator of the Kinect) and Matthew Lapsen (XBOX Marketing) announced that Microsoft will stop manufacturing the Kinect sensor. Source: Co.Design


What about your existing customers?

Stopping the production of the sensor means that Kinect will be alive for at least one year. If you have already developed Kinect applications, your customers will be able to use them as-is without any compatibility issues. In terms of software, no changes are required.


Should you dump your current Kinect projects?

No! Kinect for XBOX ONE is not going to end right away. Hardware does not just disappear. Even Kinect for XBOX 360 is still available, 4 years after it was replaced by Kinect v2 and 1 year after it was discontinued.

There are tons of different Kinect projects in a variety of industries:


Kinect alternatives

Thankfully, the developer community is very active. New companies have emerged and we already have a lot of alternatives to the Kinect. Today, I’m going to present my top choices. Keep in mind that I am only presenting the sensors I have used professionally. If you have another suggestion, feel free to write it in the comments below!


My choice: Orbbec

As a business owner and Software Engineer, I have to make a choice that covers the business needs of my clients and customers. Even though OpenPose seems to be the future, Orbbec is, by far, the most reliable option right now. Their team has the know-how to deliver exceptional products and services. Orbbec has both the hardware and the software to replace Kinect.

Disclaimer: Josh Blake, co-founder of Orbbec, was the man who nominated me as a Microsoft MVP. I know he’s been doing great work with Orbbec and I would like to see Orbbec taking on Kinect’s market share.

‘Til the next time… keep Kinecting


06 Dec 15:29

Google Released an AI That Analyzes Your Genome

by Chelsea Gohd

Genome Analysis

In the 15 years since the human genome was first sequenced in a historic scientific achievement, genomic sequencing has become relatively routine, with huge genomes being sequenced at incredible speeds. However, sorting through nucleotides and making educated guesses about their use can only get us so far. On December 4, Google released a tool that may help: DeepVariant, which utilizes artificial intelligence (AI) techniques and machine learning to more accurately build a picture of a person’s genome from sequencing data.

Machine learning is an application of AI that allows systems to improve without external programming or interference. By automatically identifying small insertion and deletion mutations and single-base-pair mutations in data produced by high-throughput sequencing, a rapid method of genetic analysis, Google's new AI can reportedly create an accurate picture of a full genome with little effort.

Brad Chapman, a research scientist at Harvard’s School of Public Health who tested an early version of DeepVariant, told MIT Technology Review that one of the difficulties in other sequencing programs lies “in difficult parts of the genome, where each of the [tools] has strengths and weaknesses. These difficult regions are increasingly important for clinical sequencing, and it’s important to have multiple methods.”

Applying Knowledge

In the early 2000s, when genome sequencing became widely available for the first time, scientists lacked the ability to interpret the data being collected. DNA could be sequenced, but analysis of these large datasets led to inaccurate and incomplete genome pictures.

Since then, technologies and techniques have continued to improve, and Google's analysis capability reportedly goes beyond what was previously possible. Existing sequence-interpreting tools typically identify mutations by ruling out read errors, but DeepVariant's method is said to paint a more accurate picture.

To avoid the errors produced by other methods of interpreting high-throughput sequencing data, the Google Brain team that developed DeepVariant fed their deep-learning system data from millions of high-throughput sequencing reads as well as fully sequenced genomes. They then adjusted their model until the system could interpret sequenced data with high accuracy.
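
The underlying problem, deciding at each position whether the aligned reads disagree with the reference because of a real mutation or a read error, can be illustrated with a toy, non-ML sketch. This is a naive majority-vote caller assuming perfectly aligned reads; DeepVariant's actual approach is to classify pileup images with a deep neural network:

```python
from collections import Counter

def call_variants(reference, reads):
    """Toy variant caller: majority vote over a pileup of aligned reads.

    Assumes every read starts at position 0; real callers handle offsets,
    indels, and quality scores, and DeepVariant replaces the vote with a CNN.
    """
    variants = []
    for pos, ref_base in enumerate(reference):
        pileup = [r[pos] for r in reads if pos < len(r)]
        if not pileup:
            continue
        consensus, count = Counter(pileup).most_common(1)[0]
        # Call a variant only when a clear majority disagrees with the reference.
        if consensus != ref_base and count / len(pileup) > 0.5:
            variants.append((pos, ref_base, consensus))
    return variants

reference = "GATTACA"
reads = ["GATCACA", "GATCACA", "GATTACA", "GATCAGA"]
print(call_variants(reference, reads))  # [(3, 'T', 'C')]
```

Note how the lone G at position 5 is outvoted and treated as a read error, while the repeated C at position 3 is called as a mutation: that error-versus-mutation judgment is exactly what DeepVariant learns to make.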

Brendan Frey, CEO of AI health software company Deep Genomics, told Tech Review that, “The success of DeepVariant is important because it demonstrates that in genomics, deep learning can be used to automatically train systems that perform better than complicated hand-engineered systems.” 


An even greater value of such a tool may lie in its applications. A variety of diseases, ranging from cancers to diabetes to heart disease, are known to be genetically linked.

Medical professionals already take family history into account when diagnosing a condition; if they one day had access to your sequenced genome, analyzed by an AI capable of running through it quickly and accurately, they might be able to more accurately provide you with information about yourself and what you are at risk of.

A doctor could also more accurately prescribe treatment for the diseases that you already have — which is especially relevant in diseases like cancer.

This development is yet another step towards a future in which medicine is truly personal, and each patient is treated with such variations in mind.

The post Google Released an AI That Analyzes Your Genome appeared first on Futurism.

01 Dec 08:55

She customizes a Funko Pop figure into a creative CV

by Claire L.

A student at Sup de Pub, Marion Roby had the idea of building a creative CV in the form of a Funko Pop figure in her own likeness.

If pop culture holds no secrets for you, then you surely know Funko Pops, those figures that pay tribute to protagonists from series, films, video games, comics, and manga. For those who don't, Funko Pop figures sport a distinctive design, with an oversized head relative to their body and bulging eyes. And if we're telling you all this, it's because we received a creative CV that is seriously worth a look…

Marion Roby is a freelance art director and a student at Sup de Pub. Looking for a new professional experience, this young creative had the idea of building a CV in the form of a Funko Pop figure. She customized a figure that reproduces her appearance and crafted packaging with her name, her age, the position she's seeking and, of course, her experience on the back!

An original, creative idea that proves, once again, that not everything has been tried and that there is still room to innovate with creative CVs. If the subject interests you, we invite you to (re)discover the idea of these two Belgian students who created a rap track to serve as a CV.

Credits: Marion Roby


Created by: Marion Roby

This article, "She customizes a Funko Pop figure into a creative CV," originally appeared on the Creapills blog, the go-to media outlet for creative ideas and marketing innovation.

30 Nov 21:04

Google is making a computer vision kit for Raspberry Pi

by Adi Robertson

Google is offering a new way for Raspberry Pi tinkerers to use its AI tools. It just announced the AIY Vision Kit, which includes a new circuit board and computer vision software that buyers can pair with their own Raspberry Pi computer and camera. (There’s also a cute cardboard box included, along with some supplementary accessories.) The kit costs $44.99 and will ship through Micro Center on December 31st.

The AIY Vision Kit's software includes three neural network models: one that recognizes a thousand common objects; one that recognizes faces and expressions; and a "person, cat and dog detector." Users can train their own models with Google's TensorFlow machine learning software.

Google touts this as a cheap and simple computer...

Continue reading…

30 Nov 13:42

Amazon DeepLens: a camera with built-in artificial intelligence for businesses

by Vincent Bouvier
Amazon DeepLens

At its AWS re:Invent conference, Amazon presented its solutions for bringing artificial intelligence and machine learning to businesses. At the heart of the project are a camera called DeepLens, a machine learning platform called SageMaker, and AI-powered translation and transcription services. The general public knows […]

Source: Amazon DeepLens: a camera with built-in artificial intelligence for businesses

29 Nov 09:55

High-Speed Drones Use AI to Spoil the Fun

by Dan Maloney

Some people look forward to the day when robots have taken over all our jobs and given us an economy where we can while our days away on leisure activities. But if your idea of play is drone racing, you may be out of luck if this AI pilot for high-speed racing drones has anything to say about it.

NASA’s Jet Propulsion Lab has been working for the past two years to develop the algorithms needed to let high-performance UAVs navigate typical drone racing obstacles, and from the look of the tests in the video below, they’ve made a lot of progress. The system is vision based, with the AI drones equipped with wide-field cameras looking both forward and down. The indoor test course has seemingly random floor tiles scattered around, which we guess provide some kind of waypoints for the drones. A previous video details a little about the architecture, and it seems the drones are doing the computer vision on-board, which we find pretty impressive.

Despite the program being bankrolled by Google, we’re sure no evil will come of this, and that we’ll be in no danger of being chased down by swarms of high-speed flying killbots anytime soon. For now we can take solace in the fact that JPL’s algorithms still can’t beat an elite human pilot like [Ken Loo], who bested the bots overall. But alarmingly, the human did no better than the bots on his first lap, which suggests that once the AI gets a little creativity and intuition like that needed to best a Go champion, [Ken] might need to find another line of work.

Thanks for the heads up, [Caroline].

Filed under: drone hacks
28 Nov 22:36

Alexa developers can now use notifications, soon personalize apps based on users’ voices

by Sarah Perez
 Amazon says it will allow Alexa skill developers to alert customers using notifications starting today, and soon, it will allow them to recognize users’ individual voices as part of their skill-building process. These changes, along with other developer enhancements, are being announced this morning at Amazon’s re:Invent conference in Las Vegas, where the company delved into the… Read More
27 Nov 08:49

A robotic pair of hands you control with two fingers

Federico Ciccarese's €1,800, 3D-printed Doublehand from Youbionic is a glove with two robotic hands that can be controlled with two fingers each. (Read…)

26 Nov 09:35

Mission Impossible: Infiltrating Furby

by Al Williams

Long before things "went viral" there were always a few "must have" toys each year that were in high demand. Cabbage Patch Kids, Transformers, or Teddy Ruxpin would cause virtual hysteria among parents trying to score a toy for a holiday gift. In 1998, that toy was the Furby, a sort of talking robot pet. You can still buy a Furby, and as you might expect, a modern one (the Furby Connect) is Internet-enabled and much smarter than previous versions. While the Furby has always been a target for good hacking, anything Internet-enabled can be a target for malicious hacking as well. [Context Information Security] decided to see if they could take control of your kid's robotic pet.

The Furby Connect's path to the Internet is via BLE to a companion phone app. The phone, in turn, talks back to Hasbro's (the toy's maker) Amazon Web Services servers. The company sends out new songs, games, and dances. Because BLE is slow, the transfers occur in the background during normal toy operation.

Looking at BLE services, there was a common DFU service for uploading firmware and an interface for sending proprietary DLC files. They found an existing project that could send existing DLC files to the device and even replace audio in those files. However, the format of the DLC files appeared to be unknown outside of Hasbro.

Attacking the DLC files with a hex editor, some of it seemed pretty obvious. However, some of it was quite elusive. The post has a great blow-by-blow detail of the investigation and, as you can see in the video below, they were successful.
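
A typical first step in that kind of hex-editor investigation is to dump the printable ASCII runs from the binary and use them as landmarks for guessing chunk boundaries. A generic sketch of that "strings" pass (the sample blob below is made up; the real DLC layout remains Hasbro's and is not reflected here):

```python
import re

def ascii_strings(blob, min_len=4):
    """Extract printable-ASCII runs of at least min_len bytes,
    the classic first pass when eyeballing an unknown binary format."""
    return [m.group().decode("ascii")
            for m in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, blob)]

# A made-up blob standing in for an unknown container file.
blob = b"\x00\x01FURB\x00\x00audio/track01\xff\xfepalette\x00"
print(ascii_strings(blob))  # ['FURB', 'audio/track01', 'palette']
```

Magic numbers, path-like strings, and repeated tags recovered this way are usually what make "some of it seemed pretty obvious" true.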

Hasbro didn’t seem too concerned about the security ramifications because an attacker would have to have proximity to the toy. It isn’t hard to think of cases where that’s not a great excuse, but we suppose it does cover the most common things you’d worry about.

We talked about the partial exploit of the Furby Connect earlier. If you have an older Furby in the attic, you can always turn it into your next robot.


Filed under: Toy Hacks
24 Nov 19:44

Black Friday: Boulanger, Darty, and Fnac are giving away a Google Home Mini

by Vincent Bouvier
Google Home Mini review

If you are taking a close interest in voice assistants, Black Friday puts the Google Home Mini front and center at several retailers. Spend more than €300 at Darty, Boulanger, or Fnac and they will throw in a Home Mini (a €59 value). The Google Home Mini is a miniature version of the Home. To learn everything […]

Source: Black Friday: Boulanger, Darty, and Fnac are giving away a Google Home Mini

24 Nov 19:41

The Booze is Strong in This Stormtrooper Decanter

by Geeks are Sexy

The booze is strong in this one! This Stormtrooper glass decanter is modelled on the iconic helmet from the 1977 movie, Star Wars: A New Hope.

When they’re not being drummed on by a bunch of scruffy little Ewoks, Stormtrooper helmets make pretty classy drinks decanters.

Based on the iconic helmet moulded by Andrew Ainsworth at Shepperton Design Studios for the original 1977 film – this galactic carafe is made from premium ‘Super Flint Glass’, holds 750ml of delicious booze and is set to stun your party guests.

Find a comfy chair, whack on the Cantina band music, and pour yourself a nice shot of absinthe, or some cheap Imperial vodka.

[The Original Stormtrooper Decanter]

The post The Booze is Strong in This Stormtrooper Decanter appeared first on Geeks are Sexy Technology News.

23 Nov 13:59

Differantly: they draw with a single line, and at this level it's genius

by Victor M.

The French artist duo Differantly (DFT) practices a surprising art: drawing all sorts of animals and objects with one single line. Pure genius!

If, like us, you're in awe of people who can draw anything and everything, you'll love the talent of this artist duo called Differantly (or DFT). Their art? Drawing with one single, continuous line! Without ever lifting the pencil from the paper, they manage to depict all sorts of animals and objects.

The most amusing thing about this unusual art is that at first you don't necessarily see where the artists are going as they skip around and twist the pen in every direction. Then the shape takes form, and your brain makes the connections that give the work its full meaning. As genuine fans of their creative signature, we made a short video on our Facebook page:

Living between Paris and Berlin, Emma and Stéphane are the two artists behind these fascinating drawings. They have managed to find their difference (which suggests the name Differantly was not chosen at random) and now have a strong community of fans on social media, notably on their Instagram account, followed by more than 200,000 people.

The artists collaborate with major brands such as Adobe, Verizon, Adidas, and Nissan, notably on artistic marketing campaigns. To learn more about this talented duo, visit their website.

Differantly: they draw with a single line, and at this level it's genius

Credits: DFT


Created by: DFT

This article, "Differantly: they draw with a single line, and at this level it's genius," originally appeared on the Creapills blog, the go-to media outlet for creative ideas and marketing innovation.

16 Nov 23:30

Meural raises $5 million and brings Canvas to over 100 stores

by Darrell Etherington
 Digital art technology company Meural has raised a $5 million Series A round of funding, led by Corigin Ventures and with participation from Netgear, Resolute Venture Partners and assorted angels. The $5 million in fresh funds accompanies the news that Meural will now be distributing its Canvas in retail stores across the U.S., Canada, the U.K., Germany, France and the Netherlands. Read More
15 Nov 20:26

Google updates Assistant with new features and languages

by Chaim Gartenberg

Google has announced a bunch of new updates coming to Assistant today that should make it possible for developers to make more functional applications that better integrate with your Google Assistant devices.

One of the biggest additions is support for new languages. Developers will now be able to write apps in Spanish, Italian, Portuguese, and Indian English.

Another major update is the ability for developers to create applications that take advantage of having both a Google Home and a phone with Assistant, allowing Home devices to hand off requests to smartphones for completion of actions (like, say, paying for a sandwich you ordered on your Home). Google will also allow apps to recognize implicit requests, so that you...

Continue reading…

15 Nov 20:24

Rebuilding Tamriel: How Bethesda Reimagined Skyrim for VR

by David Jagneaux
Rebuilding Tamriel: How Bethesda Reimagined Skyrim for VR

The Elder Scrolls V: Skyrim VR is right around the corner, with a release date of November 17th, 2017, only days away. With its launch this Friday, it will easily rank as the largest, most detailed, and most elaborate game available in VR to date. With hundreds of hours of gameplay across the core campaign, side content, and three expansions, players will be able to lose themselves once again in the frosted wastelands of Skyrim.

When Skyrim originally released all the way back in 2011, it was lauded as a landmark achievement for role-playing games, and its impact on the industry is still being felt to this day. Bethesda teamed up with Escalation Studios to port the title to VR and rebuild many of the controls and interface options from the ground up. On PSVR you can play either with a DualShock 4 gamepad or with two PS Move controllers, with full head tracking and a host of movement options. Casting spells means controlling each hand individually, swinging the sword with your hand, and blocking attacks with a shield strapped to your arm. It's Skyrim like you've never experienced it before.

We also got the chance to send over a handful of questions to the Lead Producer at Bethesda Game Studios, Andrew Scharf, before the game’s launch to learn more about its development and what it took to bring the vast world of Tamriel to VR for the very first time.

UploadVR: All of Skyrim is in VR, which is quite ambitious. What were the biggest challenges with porting the game to a new format like VR?

Andrew Scharf: PlayStation VR games need to be running at 60 fps at all times, otherwise it can be an uncomfortable experience for the player.  We’re working with a great team at Escalation Studios, who are among the best VR developers in the industry and with their help, we were able to not only get the game running smoothly, but redesign and shape Skyrim’s mechanics to feel good in VR.

It’s definitely a challenge figuring out the best way to display important information in VR. Take the World Map and the Skills menu for example — we wanted to give the player an immersive 360 degree view when choosing where to travel, or viewing the constellations and deciding which perk to enable.  For the HUD, we needed to make sure important information was in an area where the player could quickly refer to it, while also preventing the player from feeling claustrophobic by being completely surrounded by user interface elements.

UploadVR: What are the major differences with developing for PSVR and HTC Vive?

Andrew Scharf: The major difference that took iteration is the single camera that the PSVR uses, versus the HTC Vive base stations.  In order to ensure an optimal VR experience, the PS Move Controllers need to be in view of the camera which means players need to always be facing in that direction.  Early on, we found that this was a bit of a challenge – players would put on the headset and then turn all the way around and start going in a random direction.  One solution to help keep players facing the right way was to anchor important UI elements so if you can see the compass in front of you, you’re facing in the right direction.
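
The "anchor the compass" trick works because facing the single camera reduces to a simple vector test: is the player's forward direction within some cone of the direction toward the camera? A hedged sketch of that yaw-only check (an illustration of the idea, not Bethesda's code):

```python
import math

def facing_camera(player_forward, to_camera, max_angle_deg=60):
    """True if the player's forward vector is within max_angle_deg of the
    direction toward the tracking camera (2D, yaw-only check)."""
    fx, fz = player_forward
    cx, cz = to_camera
    dot = fx * cx + fz * cz
    norm = math.hypot(fx, fz) * math.hypot(cx, cz)
    # Clamp for floating-point safety before acos.
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return angle <= max_angle_deg

print(facing_camera((0, 1), (0, 1)))  # facing straight at the camera
print(facing_camera((1, 0), (0, 1)))  # turned 90 degrees away
```

Anchoring UI like the compass to the camera-facing direction gives the player a visual cue that keeps this test passing without ever showing them the math.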

The PlayStation Move controllers also have several buttons while the HTC Vive controllers primarily have a multi-function trackpad, so figuring out input and control schemes that felt natural to each platform was tricky, but we feel really good about where we've landed with controls and locomotion for both Skyrim and Fallout VR.

UploadVR: What are some of the ways that you think VR adds to the game? For example, in my last demo deflecting arrows with my shield felt really, really satisfying. How has VR redefined how you enjoy Skyrim?

Andrew Scharf: It’s a huge perspective shift which completely changes how you approach playing the game.  Part of the fun of making combat feel natural in VR is now you have some tricks up your sleeve that you didn’t have before.  You can fire the bow and arrow as fast and you’re able to nock and release, lean around corners to peek at targets, shield bash with your left hand while swinging a weapon with your right, and my favorite is being able to attack two targets at the same time with weapons or spells equipped in each hand.

You can also precisely control where you pick up and drop objects, and in general be able to interact with the world a little more how you’d expect – so like me, you’re probably going to take this skill and use it to obsessively organize your house.

UploadVR: Were there ever considerations to add voice recognition for talking with NPCs? Selecting floating words as dialog choices breaks the immersion a little bit.

Andrew Scharf: We thought about a lot of different options for new features and interfaces during development. In the end we chose to prioritize making the game feel great in VR. At this time, voice recognition for dialogue is not included in the game.

UploadVR: Was smooth locomotion with PS Move something that was always planned for the game, or was it added after feedback from fans at E3?

Andrew Scharf: There were a bunch of options we were considering from the very beginning of development.  We wanted to ensure that people who were susceptible to VR motion sickness could still experience the world of Skyrim comfortably, so we focused on new systems we would have to add (like our teleportation movement scheme) to help alleviate any tolerance issues first.

For smooth locomotion, there’s a good number of us here who spend a lot of time playing VR games and see what works well and what doesn’t, but ultimately it came down to figuring out the best approach for us.  There were unique challenges with Skyrim that we had to iterate on, from having both main and offhand weapons, the design of the PlayStation Move controllers, long-term play comfort, and ultimately, making sure you can still play Skyrim in whichever playstyle you prefer.

UploadVR: Mods won’t be in the game at launch, but what about the future? Do you want to bring other Elder Scrolls games into VR? And what about Fallout 4 VR on PSVR?

Andrew Scharf: We’ve definitely learned a lot, but as far as what future features or titles we will or won’t bring to VR, that remains to be seen. For now our focus is on launching and supporting the VR versions of Skyrim and Fallout 4.

Our goal with all our VR titles is to bring it to as many platforms as possible.  When and if we have more information to share we will let everyone know.

During a short “Making Of” video, Bethesda revealed more details about Skyrim VR’s development and some of the updates they made to bring it to life once again:

We’ll have more details on Skyrim VR very soon — including a full review and livestream tomorrow. You can also read more about Bethesda’s approach to VR in this interview with the company’s VP of Marketing, Pete Hines. Skyrim VR releases for PSVR this Friday, November 17th. For more impressions you can read about our latest hands-on demo and watch actual gameplay footage. Let us know what you think — and any questions you might have — down in the comments below!

Tagged with: Bethesda, Skyrim VR, The Elder Scrolls

14 Nov 23:19

Announcing TensorFlow Lite

by Google Devs
Posted by the TensorFlow team
Today, we're happy to announce the developer preview of TensorFlow Lite, TensorFlow’s lightweight solution for mobile and embedded devices! TensorFlow has always run on many platforms, from racks of servers to tiny IoT devices, but as the adoption of machine learning models has grown exponentially over the last few years, so has the need to deploy them on mobile and embedded devices. TensorFlow Lite enables low-latency inference of on-device machine learning models.

It is designed from scratch to be:
  • Lightweight: Enables inference of on-device machine learning models with a small binary size and fast initialization/startup
  • Cross-platform: A runtime designed to run on many different platforms, starting with Android and iOS
  • Fast: Optimized for mobile devices, including dramatically improved model loading times and support for hardware acceleration
More and more mobile devices today incorporate purpose-built custom hardware to process ML workloads more efficiently. TensorFlow Lite supports the Android Neural Networks API to take advantage of these new accelerators as they become available.
TensorFlow Lite falls back to optimized CPU execution when accelerator hardware is not available, which ensures your models can still run fast on a large set of devices.


The following diagram shows the architectural design of TensorFlow Lite:
The individual components are:
  • TensorFlow Model: A trained TensorFlow model saved on disk.
  • TensorFlow Lite Converter: A program that converts the model to the TensorFlow Lite file format.
  • TensorFlow Lite Model File: A model file format based on FlatBuffers, which has been optimized for maximum speed and minimum size.
The TensorFlow Lite Model File is then deployed within a Mobile App, where:
  • Java API: A convenience wrapper around the C++ API on Android
  • C++ API: Loads the TensorFlow Lite Model File and invokes the Interpreter. The same library is available on both Android and iOS
  • Interpreter: Executes the model using a set of operators. The interpreter supports selective operator loading; with no operators it is only 70KB, and 300KB with all the operators loaded. This is a significant reduction from the 1.5MB required by TensorFlow Mobile (with a normal set of operators).
  • On select Android devices, the Interpreter will use the Android Neural Networks API for hardware acceleration, or default to CPU execution if none are available.
Developers can also implement custom kernels using the C++ API and register them for use by the Interpreter.
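The two interpreter features above — selective operator loading and developer-registered custom kernels — can be illustrated with a small registry pattern. This is a conceptual Python sketch, not the real C++ API; the names (`Interpreter.register_op`, `invoke`) and the graph representation are invented for the example.

```python
# Illustrative sketch: an interpreter that only "links in" the operators
# a model actually needs (keeping the binary small), and lets developers
# register custom kernels alongside the built-in ones.

class Interpreter:
    def __init__(self):
        self._ops = {}  # only registered operators are available

    def register_op(self, name, fn):
        """Link an operator (built-in or custom kernel) into this build."""
        self._ops[name] = fn

    def invoke(self, graph, value):
        """Run a model, represented here as a list of op names in order."""
        for op_name in graph:
            if op_name not in self._ops:
                raise ValueError(f"operator {op_name!r} not linked into this build")
            value = self._ops[op_name](value)
        return value

interp = Interpreter()
interp.register_op("relu", lambda x: max(x, 0.0))   # a "built-in" op
interp.register_op("scale2", lambda x: 2.0 * x)     # a developer's custom kernel
```

Because an unregistered operator simply isn't present, a build only pays the binary-size cost of the kernels its models actually use — the idea behind the 70KB-to-300KB range quoted above.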


TensorFlow Lite already has support for a number of models that have been trained and optimized for mobile:
  • MobileNet: A class of vision models able to identify across 1000 different object classes, specifically designed for efficient execution on mobile and embedded devices
  • Inception v3: An image recognition model, similar in functionality to MobileNet, that offers higher accuracy but also has a larger size
  • Smart Reply: An on-device conversational model that provides one-touch replies to incoming conversational chat messages. First-party and third-party messaging apps use this feature on Android Wear.
Inception v3 and MobileNets have been trained on the ImageNet dataset. You can easily retrain these on your own image datasets through transfer learning.
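Transfer learning, as mentioned above, means keeping a pretrained feature extractor frozen and fitting only a new final layer on your own data. The following is a toy, TensorFlow-free sketch of that idea — the backbone here is a stand-in function, and all names (`pretrained_features`, `train_head`) are invented for illustration.

```python
# Conceptual sketch of transfer learning in plain Python: freeze the
# pretrained backbone, retrain only the final linear layer (the "head").

def pretrained_features(x):
    # Stand-in for a frozen backbone such as MobileNet or Inception v3.
    return [x, x * x]

def predict(w, b, x):
    f = pretrained_features(x)
    return sum(wi * fi for wi, fi in zip(w, f)) + b

def train_head(examples, lr=0.05, epochs=500):
    """Fit only the head weights (w, b) with SGD on squared error."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in examples:
            f = pretrained_features(x)
            err = predict(w, b, x) - y
            for i in range(len(w)):
                w[i] -= lr * err * f[i]  # update head weights only
            b -= lr * err                # the backbone is never touched
    return w, b

# Retrain the head on a tiny "dataset" whose true rule is y = 3x + 1.
examples = [(-1.0, -2.0), (0.0, 1.0), (1.0, 4.0), (2.0, 7.0)]
w, b = train_head(examples)
```

The appeal on mobile is the same as in this toy: the expensive part (the backbone) is reused as-is, and only a small head needs training data and compute.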

What About TensorFlow Mobile?

As you may know, TensorFlow already supports mobile and embedded deployment of models through the TensorFlow Mobile API. Going forward, TensorFlow Lite should be seen as the evolution of TensorFlow Mobile, and as it matures it will become the recommended solution for deploying models on mobile and embedded devices. With this announcement, TensorFlow Lite is made available as a developer preview, and TensorFlow Mobile remains available to support production apps.
The scope of TensorFlow Lite is large and still under active development. With this developer preview, we have intentionally started with a constrained platform to ensure performance on some of the most important common models. We plan to prioritize future functional expansion based on the needs of our users. The goals for our continued development are to simplify the developer experience, and enable model deployment for a range of mobile and embedded devices.
We are excited that developers are getting their hands on TensorFlow Lite. We plan to support and address our external community with the same intensity as the rest of the TensorFlow project. We can't wait to see what you can do with TensorFlow Lite.
For more information, check out the TensorFlow Lite documentation pages.
Stay tuned for more updates.
Happy TensorFlow Lite coding!
13 Nov 21:32

Boston Dynamics’ latest robot dog is slightly less terrifying

by Nick Statt

Robot maker Boston Dynamics, now owned by Japanese telecom and tech giant SoftBank, just published a short YouTube clip featuring a new, more advanced version of its SpotMini robot. SpotMini, first unveiled in June 2016, started out as a giraffe-looking chore bot that was pretty terrible at performing tasks around the house, and, in one short clip, hilariously ate it on a cluster of banana peels like a character straight out of a slapstick cartoon.

The new SpotMini looks much more polished and less grotesque, like a real-life cross between a Pixar animation and a robot out of a Neill Blomkamp vision of the future, thanks in part to a series of bright yellow plates covering its legs and body. The new bot’s movement also looks incredibly...

Continue reading…

13 Nov 14:20

Yummy Japanese Animation Meals In Real Life

by Pierre

Japanese Instagrammer En recreates in real life all the dishes and sweets featured in the world of the Studio Ghibli films. Followed by more than 80,000 people, she is a very real cook who draws her inspiration from the famous, globally successful Japanese animated films, from Castle in the Sky to My Neighbor Totoro. A playful and mouth-watering way of bringing the classics to life.


12 Nov 19:53

Simon's Cat's Guide to Birthdays

Watch Simon's Cat's Guide to Birthdays!

08 Nov 17:52

Microsoft integrates LinkedIn with Word to help you write a resume

by Tom Warren

Microsoft acquired LinkedIn for $26 billion last year, promising to closely link the service with its Office suite of applications. While we’ve seen a new Windows 10 app for LinkedIn, Microsoft is unveiling an even more useful addition for its service: Resume Assistant. Office 365 subscribers will now get direct LinkedIn integration when they’re building a resume in Word.

The assistant works by picking out job descriptions in an existing resume and finding similar public examples on LinkedIn to help job seekers curate a better description. While you could copy the descriptions outright, Microsoft only surfaces them in a side section in Word and doesn’t allow users to simply drag and drop them into documents.


Continue reading…

07 Nov 22:15

WANT: Back to the Future Mr. Fusion Car Charger

by Geeks are Sexy

An officially licensed Back to the Future Mr. Fusion Car Charger by the folks from Thinkgeek. This thing does flip open like in the movie, but please, do not throw garbage inside!

The Back to the Future Mr. Fusion Car Charger generates enough power to charge your devices via your 12V vehicle power adapter (cigarette lighter). You won’t be required to load it up with garbage, luckily. The trade-off, though, is that you can’t time travel. Maybe in the next product run we can add that in.

[Back to the Future Mr. Fusion Car Charger]

The post WANT: Back to the Future Mr. Fusion Car Charger appeared first on Geeks are Sexy Technology News.

04 Nov 08:36

Meural’s upgraded Canvas is a surprisingly awesome showcase for art

by Darrell Etherington
 Meural’s second-generation Canvas digital art display is now available, and I’ve been testing one out for the past couple of weeks to see how it stacks up. This is my first experience with any kind of digital canvas product, and I have to admit I had very low expectations going into it – but the Meural is actually an outstanding gadget, provided you have the means to commit… Read More
01 Nov 18:48

Having trouble sleeping? Take a power nap anywhere with Silentmode audio mask

by Lulu Chang

Can't fall asleep anywhere? With the Silentmode, you'll be able to sleep just about anywhere. The new audio mask promises "100 percent blackout" so you can completely tune out the world, both visually and aurally.

The post Having trouble sleeping? Take a power nap anywhere with Silentmode audio mask appeared first on Digital Trends.

01 Nov 18:13

Concealed Compartment Shelf

by staff

Protect your valuables by hiding them in plain sight inside of this concealed compartment shelf. It features an ample 23.5 x 3 x 7.5 inch compartment that easily locks and unlocks using the included RFID key and token – no combinations necessary.

Check it out