Scientists at Binghamton University in New York have developed a breakthrough: a stretchy, textile-based, bacteria-powered bio-battery that could one day be used to power wearable devices.
There’s no shortage of projects that replace your regular board game dice with an electronic version of them, bringing digital features into the real world. [Jean] however goes the other way around and brings the real world into the digital one with his Bluetooth equipped electronic dice.
These dice are built around a Simblee module that houses the Bluetooth LE stack and antenna along with an ARM Cortex-M0 on a single chip. Adding an accelerometer for side detection and a bunch of LEDs to indicate the detected side, [Jean] put it all on a flex PCB wrapped around the battery, and into a 3D printed case that is just slightly bigger than your standard die.
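The side-detection trick is simple in principle: when the die is at rest, the accelerometer axis most aligned with gravity tells you which face is up. Here is a minimal sketch of that idea; the thresholds and the axis-to-face mapping are invented for illustration and depend entirely on how the sensor is mounted inside the die.

```python
import math

# Hypothetical sketch: infer which face of a cubic die is up from a
# 3-axis accelerometer reading (in g), by finding the axis most aligned
# with gravity. The face mapping below is an assumption, not [Jean]'s.

def face_up(ax, ay, az):
    g = math.sqrt(ax * ax + ay * ay + az * az)
    if g < 0.5:  # tumbling or in free fall: no stable face yet
        return None
    # Score each signed axis by how strongly it points up.
    axes = {"+x": ax, "-x": -ax, "+y": ay, "-y": -ay, "+z": az, "-z": -az}
    dominant = max(axes, key=axes.get)
    # Opposite faces of a standard die sum to 7.
    face_map = {"+z": 1, "-z": 6, "+x": 2, "-x": 5, "+y": 3, "-y": 4}
    return face_map[dominant]

print(face_up(0.02, -0.01, 1.0))  # resting flat, +z up: face 1
```

A real firmware version would also debounce the reading (wait until several consecutive samples agree) before lighting the LEDs or reporting the value over BLE.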
While they work as simple LED-lit replacements for your regular dice as-is, their biggest value is obviously the added Bluetooth functionality. In his project introduction video, placed after the break, [Jean] shows a proof-of-concept game of Yahtzee, displaying the thrown dice values on his mobile phone. Taking it further, he also demonstrates how to map special purposes and custom behavior to selected dice, and talks about his additional ideas for the future.
After seeing the inside of the die, it seems evident that a Bluetooth-powered D20 will unfortunately remain a dream for a while longer — unless, of course, you take this giant one as inspiration for the dimensions.
Apple is finalizing a deal to acquire Shazam, the app that lets you identify songs, movies, and TV shows from an audio clip, according to TechCrunch. The deal is reportedly for $400 million, according to Recode, which also confirmed the news.
For Apple, the obvious benefit of acquiring Shazam is the company’s music and sound recognition technologies. It will also save some money on the commissions Apple pays Shazam for sending users to its iTunes Store to buy content, which made up the majority of Shazam’s revenue in 2016, and drove 10 percent of all digital download sales, according to The Wall Street Journal.
A side benefit: if Apple ever decides to shut down the app, doing so would hurt competing streaming services like Spotify and Google Play...
Forever Friend of the Gallery, Vangos Pterneas, has some great advice on what to do in a Post-Kinect world.
BREAKING NEWS! Microsoft has officially killed the Kinect.
Today, Alex Kipman (creator of the Kinect) and Matthew Lapsen (XBOX Marketing) announced that Microsoft will stop manufacturing the Kinect sensor. Source: Co.Design
What about your existing customers?
Stopping production of the sensor does not mean existing units stop working; Kinect will be alive for at least another year. If you have already developed Kinect applications, your customers will be able to use them as-is, without any compatibility issues. In terms of software, no changes are required.
Should you dump your current Kinect projects?
No! Kinect for XBOX ONE is not going to end right away. Hardware does not just disappear. Even Kinect for XBOX 360 is still available, 4 years after it was replaced by Kinect v2 and 1 year after it was discontinued.
There are tons of different Kinect projects in a variety of industries.
Thankfully, the developer community is very active. New companies have emerged and we already have a lot of alternatives to the Kinect. Today, I’m going to present my top choices. Keep in mind that I am only presenting the sensors I have used professionally. If you have another suggestion, feel free to write it in the comments below!
My choice: Orbbec
As a business owner and Software Engineer, I have to make a choice that covers the business needs of my clients and customers. Even though OpenPose seems to be the future, Orbbec is, by far, the most reliable option right now. Their team has the know-how to deliver exceptional products and services. Orbbec has both the hardware and the software to replace Kinect.
Disclaimer: Josh Blake, co-founder of Orbbec, was the man who nominated me as a Microsoft MVP. I know he’s been doing great work with Orbbec and I would like to see Orbbec taking on Kinect’s market share.
‘Til the next time… keep Kinecting
Project Information URL: https://pterneas.com/2017/10/25/kinect-dead/
In the 15 years since the human genome was first sequenced in a historic scientific achievement, genomic sequencing has become relatively routine, with huge genomes being sequenced at incredible speeds. However, sorting through nucleotides and making educated guesses about their use can only get us so far. On December 4, Google released a tool that may help: DeepVariant, which utilizes artificial intelligence (AI) techniques and machine learning to more accurately build a picture of a person’s genome from sequencing data.
Machine learning is an application of AI that allows systems to learn from data and improve without being explicitly reprogrammed. By automatically identifying small insertion and deletion mutations and single-base-pair mutations in data produced by high-throughput sequencing (a rapid method of genetic analysis), Google’s new AI can reportedly create an accurate picture of a full genome with little effort.
Brad Chapman, a research scientist at Harvard’s School of Public Health who tested an early version of DeepVariant, told MIT Technology Review that one of the difficulties in other sequencing programs lies “in difficult parts of the genome, where each of the [tools] has strengths and weaknesses. These difficult regions are increasingly important for clinical sequencing, and it’s important to have multiple methods.”
In the early 2000s, when genome sequencing became widely available for the first time, scientists lacked the ability to interpret the data being collected. DNA could be sequenced, but analysis of these large datasets led to inaccurate and incomplete genome pictures.
Since then, technologies and techniques have continued to improve. Google’s advanced analysis capability reportedly goes beyond anything previously possible. Existing sequence-interpreting tools typically identify mutations by ruling out read errors, but DeepVariant’s method is said to paint a more accurate picture.
To avoid the errors produced by other methods of high-throughput sequencing, the Google Brain team that developed DeepVariant fed their deep-learning system data from millions of high-throughput sequences as well as fully sequenced genomes. They then continued to adjust their model until the system could interpret sequenced data with high accuracy.
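To see what DeepVariant improves on, it helps to sketch the classical, hand-engineered approach it replaces: at each genome position, count the bases reported by the aligned reads and call a variant when enough of them disagree with the reference. This toy version (thresholds invented for illustration, not Google's method, which instead classifies read "pileups" with a deep neural network) shows the idea:

```python
from collections import Counter

# Toy variant caller: majority-vote over the bases that aligned reads
# report at one reference position. min_depth and min_fraction are
# illustrative thresholds, not values from any real tool.

def call_variant(ref_base, pileup, min_depth=4, min_fraction=0.3):
    """pileup: list of bases observed at one position across all reads."""
    if len(pileup) < min_depth:
        return None  # not enough coverage to make a confident call
    counts = Counter(pileup)
    # Most common base that differs from the reference, if any.
    alt, alt_count = max(
        ((b, c) for b, c in counts.items() if b != ref_base),
        key=lambda bc: bc[1],
        default=(None, 0),
    )
    if alt_count / len(pileup) >= min_fraction:
        return alt
    return None

# Eight reads cover the site; five show 'T' where the reference has 'C'.
print(call_variant("C", list("TTCTTCCT")))  # 'T'
```

The hard part, as Chapman notes, is the difficult regions where read errors and true mutations look alike; that is exactly where a learned classifier can outperform fixed thresholds like these.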
Brendan Frey, CEO of AI health software company Deep Genomics, told Tech Review that, “The success of DeepVariant is important because it demonstrates that in genomics, deep learning can be used to automatically train systems that perform better than complicated hand-engineered systems.”
Even greater importance of such a tool may lie in its applications. A variety of diseases, ranging from cancers to diabetes to heart disease, are known to be genetically linked.
Medical professionals already take family history into account when diagnosing a condition; if they one day had access to your sequenced genome, analyzed by an AI capable of running through it quickly and accurately, they might be able to more accurately provide you with information about yourself and what you are at risk of.
A doctor could also more accurately prescribe treatment for the diseases that you already have — which is especially relevant in diseases like cancer.
This development is yet another step towards a future in which medicine is truly personal, and each patient is treated with such variations in mind.
A student at Sup de Pub, Marion Roby had the idea of building a creative résumé in the form of a Funko Pop figurine in her own likeness.
If pop culture holds no secrets for you, then you surely know Funko Pops, those figures that pay tribute to characters from series, films, video games, comics, and manga. For those who don’t, Funko Pop figurines sport an unusual design, with an oversized head relative to the body and bulging eyes. And if we’re telling you all this, it’s because we received a creative résumé that is seriously worth a look…
Marion Roby is a freelance art director and a student at Sup de Pub. Looking for a new professional experience, this young creative had the idea of building a résumé in the form of a Funko Pop figurine. She customized a figure to reproduce her own appearance and crafted packaging with her name, her age, the position she’s seeking and, of course, her experience on the back!
An original, creative idea that proves, once again, that not everything has been tried and that there is still room to innovate in creative résumés. And if the subject interests you, we invite you to (re)discover the idea of the two Belgian students who created a rap track to serve as their résumé.
Credits: Marion Roby
Concept: Marion Roby
Google is offering a new way for Raspberry Pi tinkerers to use its AI tools. It just announced the AIY Vision Kit, which includes a new circuit board and computer vision software that buyers can pair with their own Raspberry Pi computer and camera. (There’s also a cute cardboard box included, along with some supplementary accessories.) The kit costs $44.99 and will ship through Micro Center on December 31st.
The AIY Vision Kit’s software includes three neural network models: one that recognizes a thousand common objects; one that recognizes faces and expressions; and a “person, cat and dog detector.” Users can train their own models with Google’s TensorFlow machine learning software.
Google touts this as a cheap and simple computer...
At its AWS re:Invent conference, Amazon presented its solutions for democratizing artificial intelligence and machine learning for businesses. At the heart of the project: a camera called DeepLens, a machine learning platform called SageMaker, and services for AI-powered translation and transcription. The general public knows […]
Some people look forward to the day when robots have taken over all our jobs and given us an economy where we can while our days away on leisure activities. But if your idea of play is drone racing, you may be out of luck if this AI pilot for high-speed racing drones has anything to say about it.
NASA’s Jet Propulsion Lab has been working for the past two years to develop the algorithms needed to let high-performance UAVs navigate typical drone racing obstacles, and from the look of the tests in the video below, they’ve made a lot of progress. The system is vision based, with the AI drones equipped with wide-field cameras looking both forward and down. The indoor test course has seemingly random floor tiles scattered around, which we guess provide some kind of waypoints for the drones. A previous video details a little about the architecture, and it seems the drones are doing the computer vision on-board, which we find pretty impressive.
Despite the program being bankrolled by Google, we’re sure no evil will come of this, and that we’ll be in no danger of being chased down by swarms of high-speed flying killbots anytime soon. For now we can take solace in the fact that JPL’s algorithms still can’t beat an elite human pilot like [Ken Loo], who bested the bots overall. But alarmingly, the human did no better than the bots on his first lap, which suggests that once the AI gets a little creativity and intuition like that needed to best a Go champion, [Ken] might need to find another line of work.
Thanks for the heads up, [Caroline].
Filed under: drone hacks
Federico Ciccarese’s €1,800, 3D-printed Doublehand from Youbionic is a glove with two robotic hands, each controlled by two fingers.
Long before things “went viral” there were always a few “must have” toys each year that were in high demand. Cabbage Patch Kids, Transformers, or Teddy Ruxpin would cause virtual hysteria in parents trying to score a toy for a holiday gift. In 1998, that toy was the Furby — a sort of talking robot pet. You can still buy a Furby, and as you might expect a modern one — the Furby Connect — is Internet-enabled and much smarter than previous versions. While the Furby has always been a target for good hacking, anything Internet-enabled can be a target for malicious hacking as well. [Context Information Security] decided to see if they could take control of your kid’s robotic pet.
The Furby Connect’s path to the Internet is via BLE to a companion phone. The phone, in turn, talks back to Hasbro’s (the toy’s maker) Amazon Web Services servers. The company sends out new songs, games, and dances. Because BLE is slow, the transfers occur in the background during normal toy operation.
Looking at the BLE services, there was a common DFU service for uploading firmware and an interface for sending proprietary DLC files. They found an existing project that could send DLC files to the device and even replace audio in those files. However, the format of the DLC files appeared to be unknown outside of Hasbro.
Attacking the DLC files with a hex editor, some of it seemed pretty obvious. However, some of it was quite elusive. The post has a great blow-by-blow detail of the investigation and, as you can see in the video below, they were successful.
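A hex editor is one way in; scripting the same probing is another. The sketch below walks a simple tag/length/payload chunk layout, the kind of structure that often falls out of this sort of binary spelunking. To be clear, the chunk names and layout here are invented for illustration; the real DLC format is whatever Context Information Security reverse-engineered, not this.

```python
import struct

# Hypothetical sketch of probing an unknown binary container: assume a
# [4-byte ASCII tag][4-byte little-endian length][payload] chunk layout
# and walk the file. The format here is invented, not the real Furby DLC.

def read_chunks(blob):
    chunks, offset = [], 0
    while offset + 8 <= len(blob):
        tag, length = struct.unpack_from("<4sI", blob, offset)
        payload = blob[offset + 8 : offset + 8 + length]
        chunks.append((tag.decode("ascii", "replace"), payload))
        offset += 8 + length
    return chunks

# A fabricated two-chunk blob standing in for a real DLC file.
blob = b"AUDI\x05\x00\x00\x00hello" + b"PALT\x03\x00\x00\x00\x01\x02\x03"
for tag, payload in read_chunks(blob):
    print(tag, payload)
```

Once a layout like this is confirmed, swapping a payload (say, the audio) and re-sending the file over the BLE DLC interface is exactly the kind of attack the researchers demonstrated.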
Hasbro didn’t seem too concerned about the security ramifications because an attacker would have to have proximity to the toy. It isn’t hard to think of cases where that’s not a great excuse, but we suppose it does cover the most common things you’d worry about.
If you are keeping a close eye on voice assistants, Black Friday has the Google Home Mini on offer at several retailers. If you spend more than €300 at Darty, Boulanger, or Fnac, they will throw in a Home Mini (a €59 value). The Google Home Mini is a miniature version of the Home. To learn everything […]
The booze is strong in this one! This Stormtrooper glass decanter is modelled on the iconic helmet from the 1977 movie, Star Wars: A New Hope.
When they’re not being drummed on by a bunch of scruffy little Ewoks, Stormtrooper helmets make pretty classy drinks decanters.
Based on the iconic helmet moulded by Andrew Ainsworth at Shepperton Design Studios for the original 1977 film, this galactic carafe is made from premium ‘Super Flint Glass’, holds 750ml of delicious booze, and is set to stun your party guests.
Find a comfy chair, whack on the Cantina band music and pour yourself a nice shot of absinthe, or some cheap Imperial vodka.
The French artist duo Differantly (DFT) practices a surprising art: drawing all sorts of animals and objects with a single line. Pure genius!
If, like us, you are in awe of people who can draw anything and everything, then you will love the talent of this artist duo called Differantly (or DFT). Their art? Drawing with one single, continuous line! Without ever lifting the pen from the paper, they manage to depict all sorts of animals and objects.
The most amusing part of this unusual art is that, at first, you don’t necessarily see where the artists are going as the pen jumps and twists in every direction. Then the shape emerges and your brain makes the connections, sparking the moment when the work suddenly makes sense. Genuine fans of their creative signature, we made a short video on our Facebook page:
Living between Paris and Berlin, Emma and Stéphane are the two artists behind these fascinating drawings. They have managed to find their difference (which suggests the name Differantly wasn’t chosen at random) and now have a sizable community of fans on social networks, notably on their Instagram account, followed by more than 200,000 people.
The artists collaborate with major brands such as Adobe, Verizon, Adidas, and Nissan, notably on artistic marketing campaigns. To learn more about this talented duo, visit their website.
Credits: DFT
Google has announced a bunch of new updates coming to Assistant today that should make it possible for developers to make more functional applications that better integrate with your Google Assistant devices.
One of the biggest additions is support for new languages. Developers will now be able to write apps in Spanish, Italian, Portuguese, and Indian English.
Another major update is the ability for developers to create applications that take advantage of having both a Google Home and a phone with Assistant, allowing Home devices to hand off requests to smartphones for completion of actions (like, say, paying for a sandwich you ordered on your Home). Google will also allow apps to recognize implicit requests, so that you...
The Elder Scrolls V: Skyrim VR is right around the corner with a release date of November 17th, 2017 — that’s mere days away. With its launch this Friday it will easily rank as the largest, most detailed, and most elaborate game available in VR to date. With hundreds of hours of gameplay across the core campaign, side content, and three expansions, players will be able to lose themselves once again in the frosted wastelands of Skyrim.
When Skyrim originally released all the way back in 2011 it was lauded as a landmark achievement for roleplaying games and its impact on the industry is still being felt to this day. Bethesda teamed up with Escalation Studios to port the title over to VR and rebuild many of the controls and interfacing options from the ground up. On PSVR you can play either with a Dualshock 4 gamepad or two PS Move controllers with full head-tracking and a host of movement options. Casting spells means controlling each hand individually, swinging the sword with your hand, and blocking attacks with a shield strapped to your arm. It’s Skyrim like you’ve never experienced it before.
We also got the chance to send over a handful of questions to the Lead Producer at Bethesda Game Studios, Andrew Scharf, before the game’s launch to learn more about its development and what it took to bring the vast world of Tamriel to VR for the very first time.
UploadVR: All of Skyrim is in VR, which is quite ambitious. What were the biggest challenges with porting the game to a new format like VR?
Andrew Scharf: PlayStation VR games need to be running at 60 fps at all times, otherwise it can be an uncomfortable experience for the player. We’re working with a great team at Escalation Studios, who are among the best VR developers in the industry and with their help, we were able to not only get the game running smoothly, but redesign and shape Skyrim’s mechanics to feel good in VR.
It’s definitely a challenge figuring out the best way to display important information in VR. Take the World Map and the Skills menu for example — we wanted to give the player an immersive 360 degree view when choosing where to travel, or viewing the constellations and deciding which perk to enable. For the HUD, we needed to make sure important information was in an area where the player could quickly refer to it, while also preventing the player from feeling claustrophobic by being completely surrounded by user interface elements.
UploadVR: What are the major differences with developing for PSVR and HTC Vive?
Andrew Scharf: The major difference that took iteration is the single camera that the PSVR uses, versus the HTC Vive base stations. In order to ensure an optimal VR experience, the PS Move Controllers need to be in view of the camera which means players need to always be facing in that direction. Early on, we found that this was a bit of a challenge – players would put on the headset and then turn all the way around and start going in a random direction. One solution to help keep players facing the right way was to anchor important UI elements so if you can see the compass in front of you, you’re facing in the right direction.
The PlayStation Move Controllers also have several buttons while the HTC Vive Controllers primarily have a multi-function trackpad, so figuring out input and control schemes that felt natural to the platform was tricky, but we feel really good about where we’ve landed with controls and locomotion for both Skyrim and Fallout VR.
UploadVR: What are some of the ways that you think VR adds to the game? For example, in my last demo deflecting arrows with my shield felt really, really satisfying. How has VR redefined how you enjoy Skyrim?
Andrew Scharf: It’s a huge perspective shift which completely changes how you approach playing the game. Part of the fun of making combat feel natural in VR is that you now have some tricks up your sleeve that you didn’t have before. You can fire the bow and arrow as fast as you’re able to nock and release, lean around corners to peek at targets, shield bash with your left hand while swinging a weapon with your right, and my favorite is being able to attack two targets at the same time with weapons or spells equipped in each hand.
You can also precisely control where you pick up and drop objects, and in general be able to interact with the world a little more how you’d expect – so like me, you’re probably going to take this skill and use it to obsessively organize your house.
UploadVR: Were there ever considerations to add voice recognition for talking with NPCs? Selecting floating words as dialog choices breaks the immersion a little bit.
Andrew Scharf: We thought about a lot of different options for new features and interfaces during development. In the end we chose to prioritize making the game feel great in VR. At this time, voice recognition for dialogue is not included in the game.
UploadVR: Was smooth locomotion with PS Move something that was always planned for the game, or was it added after feedback from fans at E3?
Andrew Scharf: There were a bunch of options we were considering from the very beginning of development. We wanted to ensure that people who were susceptible to VR motion sickness could still experience the world of Skyrim comfortably, so we focused on new systems we would have to add (like our teleportation movement scheme) to help alleviate any tolerance issues first.
For smooth locomotion, there’s a good number of us here who spend a lot of time playing VR games and see what works well and what doesn’t, but ultimately it came down to figuring out the best approach for us. There were unique challenges with Skyrim that we had to iterate on, from having both main and offhand weapons, the design of the PlayStation Move controllers, long-term play comfort, and ultimately, making sure you can still play Skyrim in whichever playstyle you prefer.
UploadVR: Mods won’t be in the game at launch, but what about the future? Do you want to bring other Elder Scrolls games into VR? And what about Fallout 4 VR on PSVR?
Andrew Scharf: We’ve definitely learned a lot, but as far as what future features or titles we will or won’t bring to VR, that remains to be seen. For now our focus is on launching and supporting the VR versions of Skyrim and Fallout 4.
Our goal with all our VR titles is to bring it to as many platforms as possible. When and if we have more information to share we will let everyone know.
During a short “Making Of” video, Bethesda revealed more details about Skyrim VR’s development and some of the updates they made to bring it to life once again:
We’ll have more details on Skyrim VR very soon — including a full review and livestream tomorrow. You can also read more about Bethesda’s approach to VR in this interview with the company’s VP of Marketing, Pete Hines. Skyrim VR releases for PSVR this Friday, November 17th. For more impressions you can read about our latest hands-on demo and watch actual gameplay footage. Let us know what you think — and any questions you might have — down in the comments below!
TensorFlow Lite is designed from scratch to be:
- Lightweight: Enables inference of on-device machine learning models with a small binary size and fast initialization/startup.
- Cross-platform: A runtime designed to run on many different platforms, starting with Android and iOS.
- Fast: Optimized for mobile devices, including dramatically improved model loading times and support for hardware acceleration.
TensorFlow Lite falls back to optimized CPU execution when accelerator hardware is not available, which ensures your models can still run fast on a large set of devices.
Architecture

The following diagram shows the architectural design of TensorFlow Lite:
- TensorFlow Model: A trained TensorFlow model saved on disk.
- TensorFlow Lite Converter: A program that converts the model to the TensorFlow Lite file format.
- TensorFlow Lite Model File: A model file format based on FlatBuffers, optimized for maximum speed and minimum size.
- Java API: A convenience wrapper around the C++ API on Android.
- C++ API: Loads the TensorFlow Lite Model File and invokes the Interpreter. The same library is available on both Android and iOS.
- Interpreter: Executes the model using a set of operators. The interpreter supports selective operator loading; without operators it is only 70KB, and 300KB with all the operators loaded. This is a significant reduction from the 1.5M required by TensorFlow Mobile (with a normal set of operators).
- On select Android devices, the Interpreter will use the Android Neural Networks API for hardware acceleration, or default to CPU execution if none are available.
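The interpreter-plus-operators design is what makes the selective loading above possible: the core only needs to know how to dispatch, and each model pulls in just the operator implementations it uses. Here is a toy illustration of that architecture in Python; it is emphatically not the TensorFlow Lite API, just a sketch of the registry/dispatch idea.

```python
# Toy sketch of an interpreter with selective operator registration,
# illustrating (not reproducing) the TensorFlow Lite architecture:
# registering only the operators a model needs is what keeps the real
# interpreter's core down to roughly 70KB.

class Interpreter:
    def __init__(self):
        self.ops = {}  # selectively registered operator implementations

    def register(self, name, fn):
        self.ops[name] = fn

    def invoke(self, model, tensor):
        # A "model" here is just an ordered list of (op_name, params) pairs,
        # standing in for the ops stored in a real FlatBuffers model file.
        for op_name, params in model:
            if op_name not in self.ops:
                raise KeyError(f"operator {op_name!r} not registered")
            tensor = self.ops[op_name](tensor, params)
        return tensor

interp = Interpreter()
interp.register("scale", lambda t, k: [x * k for x in t])
interp.register("relu", lambda t, _: [max(0.0, x) for x in t])

model = [("scale", 2.0), ("relu", None)]
print(interp.invoke(model, [-1.0, 0.5, 3.0]))  # [0.0, 1.0, 6.0]
```

In the real stack, the hardware-acceleration story slots in at the same point: on supported Android devices the interpreter hands whole subgraphs to the Neural Networks API instead of running the registered CPU operators itself.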
Models

TensorFlow Lite already has support for a number of models that have been trained and optimized for mobile:
- MobileNet: A class of vision models able to identify across 1000 different object classes, specifically designed for efficient execution on mobile and embedded devices
- Inception v3: An image recognition model, similar in functionality to MobileNet, that offers higher accuracy but also has a larger size
- Smart Reply: An on-device conversational model that provides one-touch replies to incoming conversational chat messages. First-party and third-party messaging apps use this feature on Android Wear.
What About TensorFlow Mobile?

As you may know, TensorFlow already supports mobile and embedded deployment of models through the TensorFlow Mobile API. Going forward, TensorFlow Lite should be seen as the evolution of TensorFlow Mobile; as it matures, it will become the recommended solution for deploying models on mobile and embedded devices. With this announcement, TensorFlow Lite is made available as a developer preview, and TensorFlow Mobile remains available to support production apps.
The scope of TensorFlow Lite is large and still under active development. With this developer preview, we have intentionally started with a constrained platform to ensure performance on some of the most important common models. We plan to prioritize future functional expansion based on the needs of our users. The goals for our continued development are to simplify the developer experience, and enable model deployment for a range of mobile and embedded devices.
We are excited that developers are getting their hands on TensorFlow Lite. We plan to support and address our external community with the same intensity as the rest of the TensorFlow project. We can't wait to see what you can do with TensorFlow Lite.
For more information, check out the TensorFlow Lite documentation pages.
Stay tuned for more updates.
Happy TensorFlow Lite coding!
Robot maker Boston Dynamics, now owned by Japanese telecom and tech giant SoftBank, just published a short YouTube clip featuring a new, more advanced version of its SpotMini robot. SpotMini, first unveiled in June 2016, started out as a giraffe-looking chore bot that was pretty terrible at performing tasks around the house, and, in one short clip, hilariously ate it on a cluster of banana peels like a character straight out of a slapstick cartoon.
The new SpotMini looks much more polished and less grotesque, like a real-life cross between a Pixar animation and a robot out of a Neill Blomkamp vision of the future, thanks in part to a series of bright yellow plates covering its legs and body. The new bot’s movement also looks incredibly...
Japanese Instagrammer En recreates in real life all the dishes and sweets depicted in the world of Studio Ghibli. Followed by more than 80,000 people, this very real cook draws her inspiration from the famous, globally beloved Japanese animated films, from Castle in the Sky to My Neighbor Totoro. A way of bringing the classics to life that is as playful as it is mouth-watering.
Watch Simon’s Cat’s Guide to Birthdays!
Microsoft acquired LinkedIn for $26 billion last year, promising to closely link the service with its Office suite of applications. While we’ve seen a new Windows 10 app for LinkedIn, Microsoft is unveiling an even more useful addition for its service: Resume Assistant. Office 365 subscribers will now get direct LinkedIn integration when they’re building a resume in Word.
The assistant works by picking out job descriptions in an existing resume and finding similar public examples on LinkedIn to help job seekers curate a better description. While you could simply copy the descriptions, Microsoft is only surfacing them in a side section in Word and not allowing users to simply drag and drop them into documents.
An officially licensed Back to the Future Mr. Fusion Car Charger by the folks from Thinkgeek. This thing does flip open like in the movie, but please, do not throw garbage inside!
The Back to the Future Mr. Fusion Car Charger generates enough power to charge your devices via your 12V vehicle power adapter (cigarette lighter). You won’t be required to load it up with garbage, luckily. The trade-off for that convenience is that you can’t time travel, though. Maybe in the next product run we can add that in.
Can't fall asleep anywhere? With the Silentmode, you'll be able to sleep just about anywhere. The new audio mask promises "100 percent blackout" so you can completely tune out the world, both visually and aurally.
Protect your valuables by hiding them in plain sight inside of this concealed compartment shelf. It features an ample 23.5 x 3 x 7.5 inch compartment that easily locks and unlocks using the included RFID key and token – no combinations necessary.