Shared posts

15 Nov 20:24

Rebuilding Tamriel: How Bethesda Reimagined Skyrim for VR

by David Jagneaux

The Elder Scrolls V: Skyrim VR is right around the corner, with a release date of November 17th, 2017, mere days away. When it launches this Friday it will easily rank as the largest, most detailed, and most elaborate game available in VR to date. With hundreds of hours of gameplay between the core campaign, side content, and three expansions, players will be able to lose themselves once again in the frosted wastelands of Skyrim.

When Skyrim originally released all the way back in 2011, it was lauded as a landmark achievement for roleplaying games, and its impact on the industry is still being felt to this day. Bethesda teamed up with Escalation Studios to port the title to VR and rebuild many of the controls and interfacing options from the ground up. On PSVR you can play either with a DualShock 4 gamepad or two PS Move controllers, with full head-tracking and a host of movement options. Casting spells means controlling each hand individually, swinging a sword with your hand, and blocking attacks with a shield strapped to your arm. It’s Skyrim like you’ve never experienced it before.

We also got the chance to send over a handful of questions to the Lead Producer at Bethesda Game Studios, Andrew Scharf, before the game’s launch to learn more about its development and what it took to bring the vast world of Tamriel to VR for the very first time.

UploadVR: All of Skyrim is in VR, which is quite ambitious. What were the biggest challenges with porting the game to a new format like VR?

Andrew Scharf: PlayStation VR games need to be running at 60 fps at all times; otherwise it can be an uncomfortable experience for the player. We’re working with a great team at Escalation Studios, who are among the best VR developers in the industry, and with their help we were able not only to get the game running smoothly, but to redesign and shape Skyrim’s mechanics to feel good in VR.

It’s definitely a challenge figuring out the best way to display important information in VR. Take the World Map and the Skills menu for example — we wanted to give the player an immersive 360 degree view when choosing where to travel, or viewing the constellations and deciding which perk to enable.  For the HUD, we needed to make sure important information was in an area where the player could quickly refer to it, while also preventing the player from feeling claustrophobic by being completely surrounded by user interface elements.

UploadVR: What are the major differences with developing for PSVR and HTC Vive?

Andrew Scharf: The major difference that took iteration is the single camera that the PSVR uses, versus the HTC Vive base stations.  In order to ensure an optimal VR experience, the PS Move Controllers need to be in view of the camera which means players need to always be facing in that direction.  Early on, we found that this was a bit of a challenge – players would put on the headset and then turn all the way around and start going in a random direction.  One solution to help keep players facing the right way was to anchor important UI elements so if you can see the compass in front of you, you’re facing in the right direction.

The PlayStation Move Controllers also have several buttons while the HTC Vive Controllers primarily have a multi-function trackpad, so figuring out input and control schemes that felt natural to the platform was tricky, but we feel really good about where we’ve landed with controls and locomotion for both Skyrim and Fallout VR.

UploadVR: What are some of the ways that you think VR adds to the game? For example, in my last demo deflecting arrows with my shield felt really, really satisfying. How has VR redefined how you enjoy Skyrim?

Andrew Scharf: It’s a huge perspective shift which completely changes how you approach playing the game. Part of the fun of making combat feel natural in VR is that now you have some tricks up your sleeve that you didn’t have before. You can fire the bow and arrow as fast as you’re able to nock and release, lean around corners to peek at targets, shield bash with your left hand while swinging a weapon with your right, and my favorite is being able to attack two targets at the same time with weapons or spells equipped in each hand.

You can also precisely control where you pick up and drop objects, and in general be able to interact with the world a little more how you’d expect – so like me, you’re probably going to take this skill and use it to obsessively organize your house.

UploadVR: Were there ever considerations to add voice recognition for talking with NPCs? Selecting floating words as dialog choices breaks the immersion a little bit.

Andrew Scharf: We thought about a lot of different options for new features and interfaces during development. In the end we chose to prioritize making the game feel great in VR. At this time, voice recognition for dialogue is not included in the game.

UploadVR: Was smooth locomotion with PS Move something that was always planned for the game, or was it added after feedback from fans at E3?

Andrew Scharf: There were a bunch of options we were considering from the very beginning of development.  We wanted to ensure that people who were susceptible to VR motion sickness could still experience the world of Skyrim comfortably, so we focused on new systems we would have to add (like our teleportation movement scheme) to help alleviate any tolerance issues first.

For smooth locomotion, there’s a good number of us here who spend a lot of time playing VR games and seeing what works well and what doesn’t, but ultimately it came down to figuring out the best approach for us. There were unique challenges with Skyrim that we had to iterate on, from having both main and offhand weapons and the design of the PlayStation Move controllers to long-term play comfort and, ultimately, making sure you can still play Skyrim in whichever playstyle you prefer.

UploadVR: Mods won’t be in the game at launch, but what about the future? Do you want to bring other Elder Scrolls games into VR? And what about Fallout 4 VR on PSVR?

Andrew Scharf: We’ve definitely learned a lot, but as far as what future features or titles we will or won’t bring to VR, that remains to be seen. For now our focus is on launching and supporting the VR versions of Skyrim and Fallout 4.

Our goal with all our VR titles is to bring them to as many platforms as possible. When and if we have more information to share, we will let everyone know.


During a short “Making Of” video, Bethesda revealed more details about Skyrim VR’s development and some of the updates they made to bring it to life once again:

We’ll have more details on Skyrim VR very soon — including a full review and livestream tomorrow. You can also read more about Bethesda’s approach to VR in this interview with the company’s VP of Marketing, Pete Hines. Skyrim VR releases for PSVR this Friday, November 17th. For more impressions you can read about our latest hands-on demo and watch actual gameplay footage. Let us know what you think — and any questions you might have — down in the comments below!

Tagged with: Bethesda, Skyrim VR, The Elder Scrolls

14 Nov 23:19

Announcing TensorFlow Lite

by Google Devs
Posted by the TensorFlow team
Today, we're happy to announce the developer preview of TensorFlow Lite, TensorFlow’s lightweight solution for mobile and embedded devices! TensorFlow has always run on many platforms, from racks of servers to tiny IoT devices, but as the adoption of machine learning models has grown exponentially over the last few years, so has the need to deploy them on mobile and embedded devices. TensorFlow Lite enables low-latency inference of on-device machine learning models.

It is designed from scratch to be:
  • Lightweight: Enables inference of on-device machine learning models with a small binary size and fast initialization/startup
  • Cross-platform: A runtime designed to run on many different platforms, starting with Android and iOS
  • Fast: Optimized for mobile devices, including dramatically improved model loading times and support for hardware acceleration
More and more mobile devices today incorporate purpose-built custom hardware to process ML workloads more efficiently. TensorFlow Lite supports the Android Neural Networks API to take advantage of these new accelerators as they become available.
TensorFlow Lite falls back to optimized CPU execution when accelerator hardware is not available, which ensures your models can still run fast on a large set of devices.

Architecture

The following diagram shows the architectural design of TensorFlow Lite:
The individual components are:
  • TensorFlow Model: A trained TensorFlow model saved on disk.
  • TensorFlow Lite Converter: A program that converts the model to the TensorFlow Lite file format.
  • TensorFlow Lite Model File: A model file format based on FlatBuffers that has been optimized for maximum speed and minimum size.
The TensorFlow Lite Model File is then deployed within a Mobile App, where:
  • Java API: A convenience wrapper around the C++ API on Android
  • C++ API: Loads the TensorFlow Lite Model File and invokes the Interpreter. The same library is available on both Android and iOS
  • Interpreter: Executes the model using a set of operators. The interpreter supports selective operator loading; without operators it is only 70KB, and 300KB with all the operators loaded. This is a significant reduction from the 1.5M required by TensorFlow Mobile (with a normal set of operators).
  • On select Android devices, the Interpreter will use the Android Neural Networks API for hardware acceleration, or default to CPU execution if none are available.
Developers can also implement custom kernels using the C++ API that can then be used by the Interpreter.
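The Interpreter's selective operator loading can be pictured with a toy analogue: register only the operators a model declares, so unused kernels never ship. This is an illustrative Python sketch, not the real TensorFlow Lite API or file format:

```python
# Toy sketch of selective operator loading, in the spirit of the
# TensorFlow Lite Interpreter described above. Illustrative only.

OP_LIBRARY = {  # the "all operators" set (the larger build)
    "ADD": lambda a, b: a + b,
    "MUL": lambda a, b: a * b,
    "RELU": lambda a: max(a, 0),
}

class TinyInterpreter:
    def __init__(self, ops_needed):
        # Register only the operators the model file declares,
        # analogous to TFLite's small core build vs. the full build.
        self.ops = {name: OP_LIBRARY[name] for name in ops_needed}

    def run(self, program, *inputs):
        # program: list of (op_name, input_indices); each result is
        # appended to a tape of intermediate tensors.
        tape = list(inputs)
        for op_name, arg_idx in program:
            tape.append(self.ops[op_name](*(tape[i] for i in arg_idx)))
        return tape[-1]

# A "model" computing y = x * w + b only ever needs MUL and ADD,
# so RELU is never registered.
program = [("MUL", (0, 1)), ("ADD", (3, 2))]
interp = TinyInterpreter(["MUL", "ADD"])
print(interp.run(program, 4, 3, 1))  # 4 * 3 + 1 = 13
```

The real Interpreter does the same kind of bookkeeping over a FlatBuffers-encoded graph, which is how the core can stay as small as 70KB.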

Models

TensorFlow Lite already has support for a number of models that have been trained and optimized for mobile:
  • MobileNet: A class of vision models able to identify across 1000 different object classes, specifically designed for efficient execution on mobile and embedded devices
  • Inception v3: An image recognition model, similar in functionality to MobileNet, that offers higher accuracy but also has a larger size
  • Smart Reply: An on-device conversational model that provides one-touch replies to incoming conversational chat messages. First-party and third-party messaging apps use this feature on Android Wear.
Inception v3 and MobileNets have been trained on the ImageNet dataset. You can easily retrain these on your own image datasets through transfer learning.
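Transfer learning of the sort described (keep the pretrained network frozen and train only a new classifier on its features) can be sketched in a few lines. This is a toy numpy illustration with a made-up feature extractor and synthetic data, not actual MobileNet or TFLite code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen, pretrained feature extractor (in real
# transfer learning this would be e.g. MobileNet minus its head).
def frozen_feature_extractor(images):
    return images.mean(axis=1, keepdims=True)

# Tiny synthetic dataset: class 1 "images" are brighter than class 0.
images = rng.normal(size=(200, 64)) + np.repeat([0.0, 1.0], 100)[:, None]
labels = np.repeat([0, 1], 100)

feats = frozen_feature_extractor(images)  # (200, 1), never updated
w, b = 0.0, 0.0                           # only the new head trains
for _ in range(500):                      # plain gradient descent
    p = 1 / (1 + np.exp(-(feats[:, 0] * w + b)))  # sigmoid head
    w -= 1.0 * ((p - labels) * feats[:, 0]).mean()
    b -= 1.0 * (p - labels).mean()

preds = 1 / (1 + np.exp(-(feats[:, 0] * w + b))) > 0.5
acc = (preds == labels).mean()
print(f"head-only accuracy: {acc:.2f}")
```

Only `w` and `b` are ever updated; the pretrained features are reused as-is, which is why retraining on a new image dataset is cheap.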

What About TensorFlow Mobile?

As you may know, TensorFlow already supports mobile and embedded deployment of models through the TensorFlow Mobile API. Going forward, TensorFlow Lite should be seen as the evolution of TensorFlow Mobile, and as it matures it will become the recommended solution for deploying models on mobile and embedded devices. With this announcement, TensorFlow Lite is made available as a developer preview, and TensorFlow Mobile is still there to support production apps.
The scope of TensorFlow Lite is large and still under active development. With this developer preview, we have intentionally started with a constrained platform to ensure performance on some of the most important common models. We plan to prioritize future functional expansion based on the needs of our users. The goals for our continued development are to simplify the developer experience, and enable model deployment for a range of mobile and embedded devices.
We are excited that developers are getting their hands on TensorFlow Lite. We plan to support and address our external community with the same intensity as the rest of the TensorFlow project. We can't wait to see what you can do with TensorFlow Lite.
For more information, check out the TensorFlow Lite documentation pages.
Stay tuned for more updates.
Happy TensorFlow Lite coding!
13 Nov 21:32

Boston Dynamics’ latest robot dog is slightly less terrifying

by Nick Statt

Robot maker Boston Dynamics, now owned by Japanese telecom and tech giant SoftBank, just published a short YouTube clip featuring a new, more advanced version of its SpotMini robot. SpotMini, first unveiled in June 2016, started out as a giraffe-looking chore bot that was pretty terrible at performing tasks around the house, and, in one short clip, hilariously ate it on a cluster of banana peels like a character straight out of a slapstick cartoon.

The new SpotMini looks much more polished and less grotesque, like a real-life cross between a Pixar animation and a robot out of a Neill Blomkamp vision of the future, thanks in part to a series of bright yellow plates covering its legs and body. The new bot’s movement also looks incredibly...


13 Nov 14:20

Yummy Japanese Animation Meals In Real Life

by Pierre

Japanese Instagrammer En recreates in real life every dish and sweet featured in the world of the Studio Ghibli films. With more than 80,000 followers, she is a very real cook who draws her inspiration from the famous, globally beloved Japanese animated films, from Castle in the Sky to My Neighbor Totoro. A playful and mouth-watering way of bringing the classics to life.

 

12 Nov 19:53

Simon's Cat's Guide to Birthdays

Watch Simon's Cat's Guide To Birthdays!

08 Nov 17:52

Microsoft integrates LinkedIn with Word to help you write a resume

by Tom Warren

Microsoft acquired LinkedIn for $26 billion last year, promising to closely link the service with its Office suite of applications. While we’ve seen a new Windows 10 app for LinkedIn, Microsoft is unveiling an even more useful addition for its service: Resume Assistant. Office 365 subscribers will now get direct LinkedIn integration when they’re building a resume in Word.

The assistant works by picking out job descriptions in an existing resume and finding similar public examples on LinkedIn to help job seekers curate a better description. While you could simply copy the descriptions, Microsoft is only surfacing them in a side section in Word and not allowing users to simply drag and drop them into documents.


07 Nov 22:15

WANT: Back to the Future Mr. Fusion Car Charger

by Geeks are Sexy

An officially licensed Back to the Future Mr. Fusion Car Charger by the folks from Thinkgeek. This thing does flip open like in the movie, but please, do not throw garbage inside!

The Back to the Future Mr. Fusion Car Charger generates enough power to charge your devices via your 12V vehicle power adapter (cigarette lighter). Luckily, you won’t be required to load it up with garbage. The trade-off for that convenience is that you can’t time travel, though. Maybe in the next product run we can add that in.

[Back to the Future Mr. Fusion Car Charger]

The post WANT: Back to the Future Mr. Fusion Car Charger appeared first on Geeks are Sexy Technology News.

04 Nov 08:36

Meural’s upgraded Canvas is a surprisingly awesome showcase for art

by Darrell Etherington
Meural’s second-generation Canvas digital art display is now available, and I’ve been testing one out for the past couple of weeks to see how it stacks up. This is my first experience with any kind of digital canvas product, and I have to admit I had very low expectations going into it, but the Meural is actually an outstanding gadget, provided you have the means to commit…
01 Nov 20:30

Just Another Halloween

by CommitStrip

01 Nov 18:55

Sony’s new Aibo robot dog is much smarter than before and ‘loves anything pink’

by Trevor Mogg

Remember Aibo, Sony's robot dog? After years in the workshop, the Japanese company has just unveiled an all-new design, one that's much cuter and far smarter than its predecessor. Oh, and it "loves anything pink."

The post Sony’s new Aibo robot dog is much smarter than before and ‘loves anything pink’ appeared first on Digital Trends.

01 Nov 18:48

Having trouble sleeping? Take a power nap anywhere with Silentmode audio mask

by Lulu Chang

Can't fall asleep anywhere? With the Silentmode, you'll be able to sleep just about everywhere. The new audio mask promises "100 percent blackout" so you can completely tune out the world, both visually and aurally.

The post Having trouble sleeping? Take a power nap anywhere with Silentmode audio mask appeared first on Digital Trends.

01 Nov 18:14

Ceres Imaging scores $2.5M to bring machine learning-powered insights to farmers

by Lucas Matney
Agtech has largely seemed underserved by emerging startups, though farmers have proven more receptive to adopting new tech than most might assume. Ceres Imaging has a fairly straightforward pitch: pay for a low-flying plane to snap shots of your farm with spectral cameras and proprietary sensors, and soon after get delivered insights that can help farmers determine water and…
01 Nov 18:13

Concealed Compartment Shelf

by staff

Protect your valuables by hiding them in plain sight inside of this concealed compartment shelf. It features an ample 23.5 x 3 x 7.5 inch compartment that easily locks and unlocks using the included RFID key and token – no combinations necessary.


$234.99

30 Oct 22:14

This is What Apple Thought Today’s Computers Would Look Like

by Futurism

Here’s the vision Apple had for the “computer of the future.”

The post This is What Apple Thought Today’s Computers Would Look Like appeared first on Futurism.

29 Oct 19:29

Sophia the lifelike robot is now a citizen – does she still want to kill us all?

by Mark Austin

In a bizarre publicity stunt, a lifelike android from Hanson Robotics named Sophia was just granted citizenship in Saudi Arabia, despite her past claims that she wants to exterminate the human race.

The post Sophia the lifelike robot is now a citizen – does she still want to kill us all? appeared first on Digital Trends.

28 Oct 13:33

Jibo the robot has finally gone on sale for $900

by Drew Prindle

Jibo, "the world's first family robot" is a sensor-studded household bot designed to help you out with everyday tasks and help keep your life organized. It's also a bit on the creepy side.

The post Jibo the robot has finally gone on sale for $900 appeared first on Digital Trends.

28 Oct 08:00

Walmart is using shelf-scanning robots to audit its stores

by James Vincent

Robots are already a common sight in warehouses (Amazon alone uses more than 45,000), but now they’re moving into stores too. Walmart has announced it’s deploying shelf-scanning bots in 50 locations around the US, using the machines to check things like inventory, prices, and misplaced items. The retailing giant says the robots’ introduction won’t lead to job losses, and that the company wants to save employees from carrying out tasks that are “repeatable, predictable, and manual.”

The robots themselves are produced by California-based Bossa Nova Robotics, and are about two feet tall with an extendable tower containing lights and sensors for scanning shelves. They sit in recharging stations in the store until a human employee gives them a...


25 Oct 21:47

Microsoft kills off Kinect, stops manufacturing it

by Tom Warren

Microsoft is finally admitting Kinect is truly dead. After years of debate over whether the accessory had a future, the software giant has now stopped manufacturing it. Fast Co Design reports that the depth camera and microphone accessory has sold around 35 million units since its debut in November 2010. Microsoft’s Kinect for Xbox 360 even became the fastest-selling consumer device back in 2011, winning recognition from Guinness World Records at the time.

In the years since its debut on Xbox 360, a community built up around Microsoft’s Kinect. It was popular among hackers looking to create experiences that tracked body movement and sensed depth. Microsoft tried to bring Kinect even more mainstream with the Xbox One,...


25 Oct 21:33

Microsoft details its plan to migrate Skype for Business to Teams

Since the last Ignite conference, there has been no doubt that Teams (Microsoft's in-house competitor to Slack) will eventually be the one and only messaging solution for businesses. It is still missing many features, however...
24 Oct 19:11

Google launches native add-ons for Gmail

by Micah Singleton

Earlier this year, Google announced it would allow add-ons for Gmail, and the time has finally come. Launching today, add-ons will allow third-party developers to integrate their services with Gmail directly. The first partners include Asana, Trello, DialPad, Intuit QuickBooks, and Wrike.

The addition will help enterprise users save a bit of time by not requiring them to switch apps constantly. A DocuSign add-on is coming soon, which should be pretty helpful. Right now, Google has only added business-facing add-ons, and it’s unclear when more consumer-facing companies will be able to take advantage of the new Gmail capabilities.

Gmail add-ons are currently supported on the web and on Android, but Google hasn’t said when iOS...


20 Oct 14:25

Artificial Intelligence at the Top of a Professional Sport

by Lauren Faris

The lights dim and the music swells as an elite competitor in a silk robe passes through a cheering crowd to take the ring. It’s a blueprint familiar from boxing, only this pugilist won’t be throwing punches.

OpenAI created an AI bot that has beaten the best players in the world at this year’s International championship. The International is an esports competition held annually for Dota 2, one of the most competitive multiplayer online battle arena (MOBA) games.

Each match of the International consists of two 5-player teams competing against each other for 35-45 minutes. In layman’s terms, it is an online version of capture the flag. While the premise may sound simple, it is actually one of the most complicated and detailed competitive games out there. The top teams are required to practice together daily, but this level of play is nothing new to them. To reach a professional level, individual players would practice obscenely late, go to sleep, and then repeat the process. For years. So how long did the AI bot have to prepare for this competition compared to these seasoned pros? A couple of months.

So, What Were the Results?

Normally, a professional Dota 2 game is played on a stage with 5v5 teams. This was the bot’s first competition, and the AI only had a couple of months to learn to play Dota 2 from the ground up, so it seemed fairer to start simple with 1v1 matches. Those first matches were against [Dendi], one of the top players in the world, who lost the first match to the bot within about ten minutes, resigned in the second, and then declined to play a third.

The OpenAI team didn’t use imitation learning to train the bot. Instead, it was put up against an exact copy of itself starting with the very first match it played. This continued, nonstop, for months. The bot was constantly improving against itself, and in turn it would have to try that much harder to win. This vigorous training clearly paid off.
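The self-play loop described above, an agent improving only by playing an exact copy of itself, can be sketched with a much smaller game. Here a hypothetical regret-matching agent plays rock-paper-scissors against its own copy; the mechanism is illustrative only, since OpenAI's bot trained with large-scale reinforcement learning on Dota 2 itself:

```python
import random

random.seed(0)

# Toy self-play: the agent's opponent is always an exact copy of
# itself, and both improve together via regret matching.
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # row player: R, P, S

def strategy(regrets):
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1 / 3] * 3

def sample(probs):
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

regrets = [0.0] * 3
avg = [0.0] * 3
ROUNDS = 20000
for _ in range(ROUNDS):
    probs = strategy(regrets)
    mine, opp = sample(probs), sample(probs)  # opponent is a copy
    for a in range(3):  # regret = payoff missed by not playing a
        regrets[a] += PAYOFF[a][opp] - PAYOFF[mine][opp]
    for a in range(3):
        avg[a] += probs[a] / ROUNDS

print(avg)  # the average strategy drifts toward the 1/3 equilibrium
```

As in the Dota 2 training, any exploitable habit the agent develops is immediately punished by its copy, which pushes the average strategy toward an unexploitable one.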

While the 1v1 results are stellar, the bot has not had enough time to learn how to work in a cohesive manner with 4 other copies of itself to make a true Dota 2 team. After the roaring success of the International, the next step for OpenAI is to form an ultimate 5 bot team. We think it will be possible to beat the top players next year and we’re eager to see how long that takes.

What Does OpenAI Like to do When it is Not Busy Crushing Video Game Competition?

OpenAI has worked on a number of projects before the Dota 2 effort. They explored the effect of adding parameter noise to learning algorithms, which has proven to be advantageous across the board. During the exploratory behavior used in reinforcement learning, parameter noise increases the efficiency with which agents learn.

The left diagram represents action space noise, traditionally used to change the likelihood of each action step by step. The right diagram represents the newly implemented parameter space noise:

“Parameter space noise injects randomness directly into the parameters of the agent, altering the types of decisions it makes such that they always fully depend on what the agent currently senses.”

Adding this noise directly to the parameters has been shown to teach agents tasks far faster than before. It’s part of a wider effort focusing on new ways to optimize learning algorithms to make the training process not only faster, but also more effective.
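The difference between the two kinds of noise can be made concrete with a toy linear policy (a stand-in, not OpenAI's implementation): action space noise perturbs the output anew at every step, while parameter space noise perturbs the weights once and then acts deterministically, so decisions stay consistent with what the agent senses:

```python
import numpy as np

rng = np.random.default_rng(42)
theta = np.array([0.5, -0.3])  # weights of a toy linear policy

def policy(obs, params):
    return float(params @ obs)

obs = np.array([1.0, 2.0])  # one fixed observation

# Action space noise: a fresh perturbation of the *action* each step,
# so the same observation maps to scattered, inconsistent actions.
action_noise = [policy(obs, theta) + rng.normal(0, 0.1) for _ in range(5)]

# Parameter space noise: perturb the *weights* once, then act
# deterministically with them, so decisions "fully depend on what
# the agent currently senses" for the whole episode.
theta_noisy = theta + rng.normal(0, 0.1, size=theta.shape)
param_noise = [policy(obs, theta_noisy) for _ in range(5)]

print(action_noise)  # five different actions for one observation
print(param_noise)   # one consistent action, repeated five times
```

The consistency of the parameter-noise rollout is the point: exploration happens between episodes, not by jittering every single decision.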

They are not done with Dota 2 either. When they come back with their five bot team next year, it will undoubtedly require a level of teamwork never before seen in artificial intelligence. Think of the possibilities. Will this take the shape of a collective hive mind? Will team dynamics among AI look anything like those of their human counterparts? This really is the stuff of science fiction being developed and tested right before our eyes.

Now, Why Might a Billion Dollar AI Startup Be Meddling in a Video Game Competition?

OpenAI is an open source company dedicated to creating safe artificial intelligence, working from a $1 billion endowment established in 2015. On their website, they state that the Dota 2 experiment was “a step towards building AI systems which accomplish well-defined goals in messy, complicated situations involving real humans.” The International was proof of concept that they could in fact implement AI that handled random situations successfully, even better than humans. It leaves us wondering whether the next field AI dominates will be something less trivial than a video game competition. It is notable that OpenAI’s chairman, Elon Musk, gave a warning statement directly after the victory:

“If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea.”

This is not the first time that Musk has voiced concerns about the coming dangers posed by our superior Dota 2 players. In fact, he has a history as a leading doomsayer:

“With artificial intelligence, we are summoning the demon. You know all those stories where there’s the guy with the pentagram and the holy water and he’s like, yeah, he’s sure he can control the demon? Doesn’t work out.”

It is clear why someone so worried about the future of AI has devoted his time and resources to a company dedicated to ensuring this advancing technology remains safe for humanity. But it almost seems paradoxical: teaching an AI to compete better than humans appears to march that dreaded outcome one step closer. At the same time, you can’t temper the advancement of technology by refusing to take part in it. The company’s approach is to make sure everyone can study and use the advancements they are making (the “Open” in OpenAI), thereby preventing the imbalance of power that would arise if the best AIs of the future were privately controlled by a small number of companies, individuals, and state actors.

Earlier this year, Hackaday’s own Cameron Coward wrote an in-depth article about the potential future of artificial intelligence. He delves into one of the most hotly debated topics on the subject: the ethics of strong AI. Will they be malevolent? What rights should they have? These questions will be answered in the coming years, whether we want them to be or not, and it is our job to make sure those answers are not decided for us. OpenAI is debugging AI before it debugs us.


Filed under: Current Events, Featured, Interest, news, Original Art, Software Development
17 Oct 19:26

Here's how 2 vertical bars can give GIFs a surprising 3D effect

by Victor M.

For some time now, a new trend has been emerging on the web: animated GIFs with a 3D effect. An illusion that relies on just 2 small vertical bars for a surprisingly striking result!

Over the past few years, GIFs have enjoyed a resurgence of popularity on the web. They are used for everything, and some have become true icons of popular culture. The format has also seen a few small innovations, notably GIFs that simulate a 3D effect. How? Simply by adding 2 vertical bars, placed at strategic spots, which create a rather impressive sense of depth.

The characters, who initially appear behind these white bars, cross that boundary and so create the famous 3D effect we love so much. These dividing strips actually serve as visual markers for the background and the foreground.

Once an element of the image passes in front of them (an animal, a projectile, a limb...), your brain automatically reads the scene as three-dimensional. Below is a series of GIFs that put this very clever technique to brilliant use!

  


Source: mymodernmet.com

This article, "Voici comment 2 barres verticales peuvent donner un surprenant effet 3D aux GIFs", originally appeared on Creapills, the leading outlet for creative ideas and marketing innovation.

16 Oct 07:38

A Grenade Shaped Ice Mold

This is "the original" $10 grenade-shaped silicone ice mold. It produces 4.5-inch x 3-inch x 2.5-inch Mk 2 'pineapple' style ice grenades. Cool! [ Amazon link ]

16 Oct 06:13

This 3D Printed Stargate Can Actually Dial! [Video]

by Geeks are Sexy

Unfortunately the alarm and mechanical sound of the Stargate are missing, but this is still awesome.

[Carasibana]

The post This 3D Printed Stargate Can Actually Dial! [Video] appeared first on Geeks are Sexy Technology News.

14 Oct 08:44

Sony plans to resurrect its Aibo robot dog with AI

by Johann Breton
Aibo ERS-7. The Aibo team is getting back together. At least, that's what the Nikkei reports, and it seems quite sure of its facts. According to the Japanese daily's sources, Sony recently gave up on its ambitions in industrial robotics but has turned its attention to the consumer market. Management believes that with...
12 Oct 19:47

Strike Back season 6: the new team arrives as early as October

by Antoine Roche
The return of the series Strike Back, with a new cast, is coming much sooner than expected.




© Written by Antoine Roche for Begeek.fr on Thursday, 12 Oct 2017 at 2:30 p.m.
12 Oct 19:44

Google Home now lets you shop at Target with just your voice

by Jacob Kastrenakes

Next time you need an emergency order of LaCroix, you can just ask a Google Home to order a case from Target. Google announced today that Target will begin supporting its Google Express shopping service in the contiguous US. That also means Target will be supported through the Google Assistant’s voice ordering feature, which is so far only live on the Google Home and Android TV but is coming “soon” to iOS and Android as well.

Target has only been available through Google Express in New York City and California before now, so today’s announcement marks a major expansion and helps to fill out Google’s service in a big way. It also follows Walmart, which is available nationwide through Google Express and added voice ordering support...


08 Oct 19:07

Hovering Questions About Magnetic Levitation

by Brian McEvoy

Who doesn’t love magnets? They’re functional, mysterious, and at the heart of nearly every electric motor. They can make objects appear to defy gravity or move on their own. If you’re like us, when you first started grappling with refrigerator magnets, you tried to make one hover motionlessly over another by pitting their repulsive forces against each other. [K&J Magnetics] explains why this will never work and how levitation can be done with electromagnets. (YouTube, embedded below.)

In the video, there is a quick demonstration of their levitation rig and a brief explanation, with some handy oscilloscope readings to show what’s happening on the control side. The most valuable part is the explanation in the article, which walks us through the process, starting with why permanent magnets alone can’t be used (a consequence of Earnshaw’s theorem) and leading into why electromagnets, driven by active control, can succeed.
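The active control the article describes can be sketched in a few lines. This is a hypothetical simulation, not the firmware from [K&J Magnetics]' rig: the masses, gains, and coil constant are made-up values, and the coil force is crudely modeled as linear in current.

```python
# Hypothetical 1-D levitation loop: a sensor reads the gap below the
# electromagnet, and a PD controller (plus a gravity feedforward term)
# trims the coil current to hold the magnet at a setpoint.
# All constants are illustrative, not measured from any real rig.

def simulate_levitation(setpoint=0.010, steps=5000, dt=1e-4,
                        kp=2000.0, kd=80.0):
    m, g = 0.05, 9.81             # 50 g magnet, gravity (m/s^2)
    force_per_amp = 1.0           # coil force per ampere (assumed linear)
    bias = m * g / force_per_amp  # current that exactly cancels gravity
    pos, vel = 0.012, 0.0         # start 2 mm below the setpoint

    for _ in range(steps):
        error = pos - setpoint                   # positive: magnet too low
        current = bias + kp * error + kd * vel   # PD + feedforward
        current = max(0.0, min(current, 5.0))    # respect coil limits
        accel = g - force_per_amp * current / m  # net downward acceleration
        vel += accel * dt                        # semi-implicit Euler step
        pos += vel * dt
    return pos

print(simulate_levitation())  # settles very close to the 0.010 m setpoint
```

Without feedback the magnet either snaps up to the coil or falls away — the very instability that rules out permanent magnets — so a real loop has to sample and correct fast enough to catch deviations before they grow.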

[K&J Magnetics]’s posts about magnets are informative and well-written. They have a rich mix of high-level subjects without diluting them by glossing over the important parts. Of course, as a retailer, they want to sell their magnets but the knowledge they share can be used anywhere, possibly even the magnets you have in your home.

Simpler levitators can be built with a single electromagnet to get you on the fast track to building your own levitation rig. Remember in the first paragraph when we said ‘nearly’ every electric motor uses magnets? Piezoelectric motors spin without them.


Filed under: toy hacks
06 Oct 19:40

Google Just Revealed How They’ll Build Quantum Computers

by Karla Lant

Next Stop: Quantum Supremacy

Quantum computing: it’s the brass ring in the computing world, giving the ability to exponentially outperform and out-calculate conventional computers. A quantum computer with a mere 50 qubits would outclass the most powerful supercomputers in the world today. Surpassing the limits set by conventional computing, known as achieving quantum supremacy, has been a difficult road. Now, a team of physicists at the University of California Santa Barbara (UCSB) and Google have demonstrated a proof-of-principle for a quantum computer that may mean quantum supremacy is only months away.

Quantum states are difficult to isolate and sustain, so the practical task of shielding quantum processing machinery from outside interference has been the sticking point keeping quantum supremacy out of reach. However, to demonstrate quantum supremacy, a system doesn’t need to be an all-purpose quantum dynamo; it just needs to show one quantum capability that is beyond the capacity of conventional systems.

A diagram of the mockup nine-qubit system the Google team says demonstrates proof of principle for achieving quantum supremacy.
Image Credit: Google

To do that, the Google and UCSB team’s strategy comes down to qubits. Qubits are different from ordinary bits (the smallest unit of data in a computer) because they can exist in superposition. Each ordinary bit can be either a 1 or a 0 at any given time, but a qubit can be both at once. Two ordinary bits have 2² possible states, but can occupy only one at a time; two qubits can hold all of those states simultaneously. Adding qubits expands this potential exponentially, so 50 qubits represent 2⁵⁰ — roughly 1.1 quadrillion — numbers, an amount a traditional computer would need petabyte-scale memory to store.
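The exponential blow-up is easy to check: simulating n qubits classically means storing 2ⁿ complex amplitudes. A quick back-of-the-envelope script, assuming 16 bytes per double-precision complex amplitude:

```python
# Memory needed to hold a full n-qubit state vector on a classical
# machine, assuming one double-precision complex amplitude (16 bytes)
# per basis state.

def state_vector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

print(state_vector_bytes(9))            # 512 amplitudes -> 8,192 bytes
print(state_vector_bytes(50) / 2**50)   # 16.0 pebibytes for 50 qubits
```

The team’s nine-qubit rig corresponds to 2⁹ = 512 amplitudes, which fits in a few kilobytes; at 50 qubits the same vector needs about 16 pebibytes, which is why roughly 50 qubits is where classical simulation stops being practical.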

Quantum Supremacy In Action

The team’s plan, then, isn’t to create a fully functional quantum computer, but to instead create a system that can support 49 qubits in superposition reliably. If it can do that, so the theory goes, the rest is relatively easy.

Their system is a series of nine superconducting qubits, consisting of nine metal loops cooled to a low temperature with current flowing through them in both directions simultaneously. They were able to show that the supported qubits represented 512 numbers at once, and that the results were reliable, without an accompanying exponential increase in errors.

This is much lower than the number of qubits needed to declare supremacy, but it’s a promising result. The next step will be to create a 50-qubit chip and test if its errors increase at the manageable pace seen in the nine-qubit experiment. 


If the team is right, they may achieve quantum supremacy in a matter of months. If they do, the applications will be staggering: we can expect machine learning to run exponentially faster and artificial intelligence to progress much more rapidly. If so, we may see the singularity approaching long before most predicted.

Quantum computers will make personalized medicine a reality, parsing out the function of every protein in the human genome and modeling their interactions with all possible complex molecules very quickly. We will see simulation-based climate change solutions come to light, and find new chemistry-driven solutions to carbon capture. We are likely to see huge leaps in material science and engineering that allow us to create better magnets, better superconductors, and much higher energy density batteries. And we are almost certainly going to see more technological advances through biomimetics as we find ourselves achieving more insights into natural processes, such as photosynthesis.

In other words, the idea that quantum supremacy will change everything isn’t just hype.

The post Google Just Revealed How They’ll Build Quantum Computers appeared first on Futurism.

04 Oct 21:45

Google’s bag of tricks includes a surprise — the lifelogging Google Clips camera

by Gannon Burgett

Putting further emphasis on its use of artificial intelligence, Google has launched Google Clips, a new lifelogging camera that uses A.I. to capture and collect life as it happens.

The post Google’s bag of tricks includes a surprise — the lifelogging Google Clips camera appeared first on Digital Trends.