Shared posts

24 Mar 13:33

Finley AI, the voice-based financial assistant

by Patrice BERNARD
Because many of the tools that help consumers manage their money are designed solely to address short-term needs (rainy-day savings, debt reduction…), and because their financial literacy is often lacking, Finley AI has built an interactive educational assistant.

Indeed, we are currently seeing a proliferation, including at traditional banks, of services designed to facilitate and encourage saving, whether to cover unexpected expenses or to fund a project (a purchase, a trip, a gift…). Conversely, solutions aimed at longer horizons, such as robo-advisors, tend to neglect the need to educate their customers about their products and to offer informed advice on the options they make available.

Unfortunately, the vast majority of people (90% in the UK, according to the FinTech Future article, and probably even more in France) have never received any financial guidance in their lives. As a result, many are entirely unaware of the opportunities within their reach, or choose approaches that are not necessarily suited to their needs and circumstances but are pushed on them by a salesperson chiefly interested in commissions.

To remedy this drift, Finley AI offers a friendly, easily accessible platform, integrated into the Google voice assistant ecosystem (and therefore available via mobile, web, smart speakers…), that answers all the questions a well-informed person would normally put to a dedicated advisor, if they had one to consult: how pensions work, how to plan for the future, what the best way to prepare for retirement is, who can help…

Finley AI in action

Naturally, Finley AI's impact on people's relationship with money and financial products is likely to be limited. Its logic rests exclusively on delivering generic knowledge and recommendations, without taking each user's specific context into account and without trying to make the advice directly actionable (apart from finding a professional, which is also its only prospective business model), which limits its potential effectiveness.

In any case, Finley AI's initiative has the great merit of putting its finger on the critical problem of the quality of support consumers receive in their financial lives: when providers, incumbents and startups alike, focus on their offerings at the expense of exploring their customers' precise needs and transparently demonstrating the value delivered in line with them, mistrust wins out, to the detriment of all stakeholders.
14 Mar 07:32

“Patients casse-couilles”: the book that collects the gems overheard in the ER

by Victor M.

While a hospital's emergency room is generally thought of as a frightening, anxiety-inducing place, today you'll see that it can also be a source of serious laughter. Sonia Camay, an emergency physician for 12 years now, has seen countless patients come through. And while an ER doctor's job is to care for patients and protect their health, it also involves listening and talking to them… which sometimes leads to rather unusual conversations.

Sonia Camay therefore had the idea of writing a book cataloguing her best hilarious anecdotes and the famous gems she has heard from the patients who have crossed her path. Titled “Patients casse-couilles”, this collection of odd phrases and anecdotes overheard in hospital corridors shows us a completely different side of the ER.

Credits: Sonia Camay / Les Éditions de l'Opportun

Between patients who mix up their words, those who clearly don't know their own bodies very well, tyrannical children, and parents a little too preoccupied with sex, the various themes, all treated with a light touch, show that an ER doctor's daily routine can produce truly improbable situations. One can only imagine how Sonia Camay and her colleagues reacted as they answered, with seriousness and professionalism, these hilarious patient gems.

Order the book

The Creapills team has selected 15 unusual patient gems from the book for you. The collection “Patients casse-couilles” will be available from March 21 for €9.90, but you can already pre-order it here. A good idea that tackles the anxiety-inducing subject of the ER by defusing it through the lived experiences of an emergency physician whose days are clearly anything but restful!

Credits: Patients casse-couilles

Created by: Sonia Camay for Les Éditions de l'Opportun
Source: franceinter.fr

The article “Patients casse-couilles”: the book that collects the gems overheard in the ER appeared first on Creapills.

14 Mar 07:28

Like a good wine

by CommitStrip

14 Mar 07:23

Industrial E-Stop Button Becomes Novel Computer Interface

by Jeremy S. Cook

When typing away at your computer, there are likely times when a few keystrokes just don’t cut it and a big red emergency button would be more appropriate. This, you might assume, is the type of thing that only appears in a cartoon, perhaps featuring an elaborate trap to catch a very fast bird.

This assumption, however, would now be wrong, as Glen Atkins has integrated an HID-capable PIC16F1459 chip inside an industrial mushroom push button switch. Instead of stopping a heavy assembly cell until reset, his board and software now either insert a poop emoji into a Microsoft document or execute a sequence that locks his Linux box at work. Both make sense in their own context: firing off an angry email, or clocking out for the day with a final button press.

Electronics-wise, the device is nearly identical to his Single ESC Key USB Keyboard from May 2018, but this build takes things to a whole new level. Rather than merely wiring an input from the button’s internals and leaving things loose (or glued) inside, he designed a 3D-printed holder, based on the manufacturer’s 3D model, to neatly mount his custom circuit board. The result is a secure assembly whose internals look almost as if they were meant to be there. Notably, given its new USB connection, the unit is no longer appropriate for actual safety-critical applications.
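Atkins hasn’t published his firmware here, but the device’s trick is ordinary USB HID: the PIC enumerates as a keyboard and emits 8-byte boot-protocol reports when pressed. A minimal sketch of that report format follows; the Super+L “lock screen” chord is an assumption for illustration, not necessarily what his board actually sends.

```python
# USB HID boot-keyboard report layout: [modifiers, reserved, key1..key6].
# Usage IDs and modifier bits come from the USB HID usage tables.
MOD_LGUI = 0x08   # left GUI ("Super") modifier bit
KEY_L    = 0x0F   # usage ID for the 'l' key ('a' is 0x04, so 'l' = 0x04 + 11)

def keyboard_report(modifiers=0, keys=()):
    """Build the 8-byte report a HID keyboard sends per interrupt transfer."""
    if len(keys) > 6:
        raise ValueError("boot protocol carries at most 6 simultaneous keys")
    return bytes([modifiers, 0, *keys]) + bytes(6 - len(keys))

# Press Super+L (a common "lock screen" chord), then release all keys.
press   = keyboard_report(MOD_LGUI, (KEY_L,))
release = keyboard_report()

print(press.hex())  # 08000f0000000000
```

Firmware like Atkins’ simply queues a sequence of such press/release reports when the mushroom head goes down.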


Industrial E-Stop Button Becomes Novel Computer Interface was originally published in Hackster Blog on Medium, where people are continuing the conversation by highlighting and responding to this story.

06 Mar 07:04

How Augmented Reality Soothes Kids' Scrapes and Cuts

by Derek E. Baird

Japanese toymaker Bandai has developed a digital spin on mom or dad kissing your boo-boo to make it all better: augmented reality band-aids.

Targeted at kids aged 3–6, these digital band-aids are designed to soothe a child with some consoling words from their favorite animated characters.

Right now, this augmented reality technology is limited to the Japanese kids’ market and characters, but if it catches on, don’t be surprised if American characters like Mickey Mouse, SpongeBob or My Little Pony end up on your kid's knee telling them that everything is going to be okay.

Still, it’s interesting to see the many ways that smartphones and AR technologies are beginning to integrate themselves into our daily, and most basic, tasks.

And while it would be pretty cool to see Luke Skywalker pop up on your hand, nothing is as satisfying to a child as a kiss on the forehead and squeeze from their parent.


How Augmented Reality Soothes Kids' Scrapes and Cuts was originally published in Virtual Reality Pop on Medium, where people are continuing the conversation by highlighting and responding to this story.

05 Mar 21:59

In China, Starbucks launched a quirky cat-paw-shaped coffee cup

by Claire L.

How far will the craze and passion for cats go? Cats are hugely popular, especially online, and brands have taken note. Many of them play on this appeal to offer their customers quirky products that cater to their love of these adorable furballs.

The latest brand to pull on the heartstrings is coffee giant Starbucks, which is offering its customers in China a cup shaped like a cat's paw. As part of its cherry-blossom-themed “Spring 2019” collection, the brand offered customers this famous “Cat Paw” cup: a classic-looking cup with a paw sculpted in glass inside it.

When you pour your drink into the cup, it isn't the whole vessel that fills up but the paw itself, which is thereby highlighted. What Starbucks clearly hadn't anticipated, however, was the Chinese public's appetite for its new “cat paw” cup.

In a video shared on YouTube by CGTN (viewable above), customers can be seen jostling and literally fighting to get hold of one of these limited-edition cups. 😓

Initially sold for 199 yuan (about 26 euros), the cup is now offered on third-party sites for up to 10 times that amount. While the initiative was fun and original for the brand, the consequences and the fights triggered by the product's “exclusive” status have unfortunately tended to tarnish its image…

Credits: Starbucks

Credits: __happy_day__

Credits: koaphan

Created by: Starbucks China
Source: designtaxi.com

The article In China, Starbucks launched a quirky cat-paw-shaped coffee cup appeared first on Creapills.

27 Feb 20:51

How Amazon took 50% of the e-commerce market and what it means for the rest of us

by Jonathan Shieber
Jun-Sheng (Jun) Li, contributor, is an Executive in Residence at Canvas Ventures and formerly served as senior vice president of Walmart's global e-commerce supply chain.

As SVP of Walmart’s global e-commerce supply chain for five years until 2018, I had a front-row seat to how brick-and-mortar retailers were responding to Amazon’s dominance in e-commerce. Most of us were alarmed. And who could blame us? Today, Amazon has nearly 50% of all e-commerce trade.

The way I see it, if you are a brick-and-mortar retailer, you either embrace a digital strategy and become omnichannel, or do nothing and become irrelevant. To fully appreciate the gravity of the situation, let’s step back to understand how we got here. And, importantly, start with what I believe is the single biggest challenge for retailers today.

Holy Grail: Become Truly Omnichannel

Omnichannel retailing has become the goal every retailer is aiming for — but few know how to achieve it. In a nutshell, omnichannel simply means providing customers a seamless, continuous experience wherever they would like to shop, across any device or store location, with a unified brand experience.

For example, I can buy a pair of shoes from Nordstrom using my smartphone and choose to pick up my purchase at a store or have it delivered to my home. If I want to return the shoes for any reason, I can do so by mail or return them at a store. My interaction with Nordstrom consistently flows from one channel to another.

But from a brick-and-mortar retailer’s perspective, that’s easier said than done.

A Lot More Moving Parts

They say the “devil’s in the details” and I would add the “details are in the supply chain.” And today’s supply chain is more complex than ever, especially if you’re a traditional brick-and-mortar retailer striving to transform into an omnichannel business. To start, you have to get your head around doing things very differently. You will be:

  • Distributing products to millions of homes instead of hundreds of stores
  • Managing millions of SKUs (stock keeping units) instead of thousands
  • Shipping to homes in parcels (including last-mile delivery) instead of truckloads to stores
  • Running fulfillment centers (FCs) in addition to distribution centers (DCs). FCs ship goods directly to customers. DCs ship goods to stores.

Want to Go Omnichannel?

Be ready to add fulfillment centers to your existing mix of distribution centers. The level of complexity will increase by orders of magnitude.

Three Key Challenges to Omnichannel Excellence

These are the three most intractable challenges you will face in your omnichannel quest:

Organizational and Management Constraints

  • People can be resistant to change. Many find it hard to think in new paradigms.
  • Different business units have different processes, KPIs (key performance indicators) and incentives.

Sharing assets across all channels can be difficult. For example, how should you allocate warehouse space and balance the availability of products (i.e. inventory) between online and in-store sales?

Process and Systems Challenges

  • First, you need to plan: aggregate demand forecasting and planning for both physical and online sales, by channel.
  • Then figure out what you have: determine product assortment across all channels: DCs, FCs, your own stores and even third-party locations like a marketplace vendor.
  • Lastly, know where to ship your products from: you must instantaneously track what was sold against a global inventory spread across a myriad of locations.

Continuous Innovation

As a brick-and-mortar retailer, you will need to continually learn new processes and technologies that affect your supply chain. For example:

  • Learn new processes when integrating FCs into your supply chain network. This includes new ways to receive, sort, store, pick, pack, and ship goods, and to house products in lockers or stores for drive-through and pickup. These processes are completely different from those used at traditional DCs or stores.
  • Keep abreast of packaging technology, both the method of packing (optimizing how much you can fit into a package) and the materials (consider what’s best for long distance, the environment, costs, and the protection of the product, especially if it involves home delivery of groceries with thermal foam or totes.)
  • Meet the demands of home grocery shopping and “last mile” deliveries. In addition to delivering goods in full truckloads from DCs to stores, you must learn how to operate so-called “milk runs” from stores to customer homes. When delivering groceries to a home, you must adhere to certain time slots and sometimes make “live deliveries” to ensure perishable goods are received promptly and safely. This entails a constantly refreshed and technologically modern TMS (Transportation Management System).

Amazon Had a Wide-Open Field

Going back to the headline of this article, how did Amazon become the e-commerce behemoth it is today with seemingly little resistance from traditional retailers? Were brick-and-mortar executives asleep at the wheel? To answer that question, some historical framing helps:

The Four Waves of E-commerce

So What’s a Retailer to Do?

I think we’re at the point of no return. The omnichannel train has left the station. What would I do if I ran a retail business today? First, I would accept the fact that customers now love to shop both online and offline, and that they expect two-day shipping for certain products and near-flawless execution. The bar has been set high by Amazon. Then I would create a game plan that leverages my existing physical assets, like warehouses, distribution centers, and stores, to offer new services like ship-from-store or pickup-at-store. I would also build new fulfillment centers specifically to fulfill online orders and ship to customers’ homes.

Although Amazon dominates e-commerce, there are multitudes of department stores and retail brands with successful digital platforms. I was on the Walmart team from 2013 to 2018 when Walmart invested heavily in their omnichannel strategy.

On February 19, 2019, Walmart announced their FY 2019 Q4 results which showed the company grew e-commerce sales by 43 percent year-over-year in its last quarter, blowing past estimates for the holiday season.

Of course, many factors go into an effective omnichannel strategy. The biggest factor, in my mind, is simply to gather the corporate will and get started.

26 Feb 07:05

HoloLens 2’s Field of View Revealed

by David Heaney

At MWC yesterday Microsoft announced the $3500 HoloLens 2 augmented reality headset. On stage the company boasted the headset’s “more than 2x” field of view compared to the original. However, no specific values were given.

Today on Twitter Microsoft’s Alex Kipman clarified the details. The headset provides 52 degrees of augmented viewing when measured diagonally, according to Kipman. Given that the Microsoft website states the headset’s aspect ratio is 3:2, this would give a horizontal FoV of 43° and a vertical of 29° using the basic Pythagorean theorem.

That 43°×29° is an impressive increase over the 30°×17.5° of the original. It’s now roughly equal to the 40°×30° of the $2295 Magic Leap One.

But how is this “more than 2x” the field of view of the original, you might ask? Well it seems Microsoft was referring to the total FoV area — not the per-axis measurements. A 43°×29° FoV is actually around 2.4x the area of 30°×17.5°.
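The arithmetic above is easy to reproduce. Treating the angles as if they split linearly by aspect ratio (the usual back-of-the-envelope approximation), a short sketch:

```python
import math

def fov_components(diagonal_deg, aspect_w, aspect_h):
    """Split a diagonal FoV into horizontal/vertical parts for a w:h aspect ratio."""
    hyp = math.hypot(aspect_w, aspect_h)  # length of the w:h diagonal
    return (diagonal_deg * aspect_w / hyp, diagonal_deg * aspect_h / hyp)

h, v = fov_components(52, 3, 2)            # HoloLens 2: 52° diagonal, 3:2 aspect
print(round(h), round(v))                  # 43 29

# "More than 2x" refers to FoV area, not per-axis width:
print(round((43 * 29) / (30 * 17.5), 1))   # 2.4
```

The same function gives roughly 40°×30° for the Magic Leap One’s quoted figures, which is why the two headsets now land in the same ballpark.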

This is an impressive leap forward and will make holographic objects feel much more immersive than before. To put that in perspective, though, it is still significantly narrower than even a typical VR headset. There is also a ways to go before either AR or VR headsets are capable of filling the entirety of human vision.

HoloLens 2 is the state of the art in augmented reality, using a custom-designed laser MEMS display system. However, AR technology still has a long way to go before becoming consumer-friendly. Just like VR 10 years ago, AR will need several breakthroughs before it is truly ready for consumers. But based on what Microsoft showed us at MWC, we’ve never been more excited for AR’s future.

The post HoloLens 2’s Field of View Revealed appeared first on UploadVR.

20 Feb 23:14

A DIY E-Ink Calendar Powered by a Raspberry Pi Zero W

by Cameron Coward

E-ink displays are fantastic to look at, but they come with two serious caveats: most of them are monochrome, and they’re very slow to refresh. Multicolor e-ink displays are available, but they refresh even more slowly. That means they’re only suited to a handful of specific applications, like e-readers. Another device they’re perfect for is a digital calendar, and Redditor Heyninclicks built their own using a Raspberry Pi.

In addition to the calendar functionality, this build also incorporates a weather display. Both are a great use for an e-ink display, since they don’t need to refresh very often and don’t need to refresh quickly when they do. In this case, that e-ink display is a 7.5" monochrome model made by Waveshare. It’s housed within a simple, but elegant, black picture frame.

A Raspberry Pi Zero W updates the display, and Heyninclicks created the graphics themselves. The weather data comes from Dark Sky, and the software to update the display was written specifically for this project. The final product looks great, and is a really practical way to always have a calendar and the weather on hand.
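The project’s software wasn’t published alongside the post, but the calendar half of such a build can be sketched with nothing beyond the Python standard library. On the Pi, the output would be rendered to an image and pushed through Waveshare’s display driver rather than printed; that driver call is an assumption here.

```python
import calendar
from datetime import date

def month_grid(today: date) -> str:
    """Render the current month as text, roughly what you'd draw to the panel."""
    cal = calendar.TextCalendar(firstweekday=calendar.SUNDAY)
    return cal.formatmonth(today.year, today.month)

# On the Pi, a periodic job would regenerate this and hand it to the
# Waveshare driver (e.g. an epd.display(...) call); here we just print it.
print(month_grid(date(2019, 2, 20)))
```

Since the panel only needs one refresh per day for the calendar (plus occasional weather updates), the slow e-ink refresh rate is no drawback at all.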


A DIY E-Ink Calendar Powered by a Raspberry Pi Zero W was originally published in Hackster Blog on Medium, where people are continuing the conversation by highlighting and responding to this story.

14 Feb 07:10

Vocalize.ai Acquired by Sensory

by Bret Kinsella

Vocalize.ai, a speech recognition benchmarking company, announced today it has been acquired by Sensory, a company known for on-device speech recognition, biometric security, and natural language understanding technology. Joe Murphy, the founder and CEO of Vocalize.ai, formerly worked at UL and saw an opportunity in the summer of 2017 to launch an independent testing and benchmarking company focused on the performance of voice assistants. Sensory was a customer of Vocalize.ai before the acquisition and the companies had discussed building out and sharing a testing studio. That consideration along with other synergies led to the acquisition discussions. Todd Mozer, CEO of Sensory Inc., commented:

“Sensory has always done in-house technology testing through simulations. However, we saw a growing need for an independent testing source that wasn’t influenced by our data or testing methods, that could also provide more real world, black-box testing. Vocalize.ai offers exactly what we needed and was in fact tremendously helpful in shaping our new TrulyHandsfree 6.0 release.”

Creating Common Evaluation Criteria for Voice Technologies

Since its founding, Vocalize.ai has conducted private studies for hardware and software makers and issued public reports on voice assistant performance. The company reported in September 2018 that Google Assistant was outpacing its peers in understanding accented speech, and in June 2018 that the Sonos One and the Amazon Echo Dot exhibited performance that audiologists would characterize as hearing loss. More recently, Vocalize.ai has focused on privately commissioned benchmarks, but did collaborate with Voicebot.ai in the fall of 2018 on a study of wake word “false positives,” where voice assistants wake up and start listening by accident.

The Vocalize.ai announcement says it will remain an independent company serving clients beyond Sensory. Deal terms were not disclosed and there is no record of previous investors. The company’s core asset is a testing software suite that automates numerous test protocols, most of them based on standards developed by audiologists. However, Mr. Murphy is also conducting tests beyond speech recognition performance that assess other capabilities of leading voice assistants.


The post Vocalize.ai Acquired by Sensory appeared first on Voicebot.

12 Feb 08:46

Baidu made a smart cat shelter that uses AI to tell cats and dogs apart

by Shannon Liao
The cat says: “Coming!” The camera says: “You’ve arrived, brother!”

China’s top search engine company Baidu made a smart cat shelter in Beijing that uses AI to verify when a cat is approaching and open its door. The cat shelter is heated and also offers cats food and water.

Besides running China’s main search engine, Baidu also works on AI tools in general and owns iQiyi, a Netflix-like rival that uses algorithms to determine what viewers may be interested in watching next. While cat shelters ordinarily seem out of the scope of what Baidu does, the company says that the idea first came to one employee, Wan Xi, who uncovered a small cat hiding in his car last winter and began to sympathize with the plight of other stray cats. Wan then apparently shut himself at home to develop software and work on a...

Continue reading…

11 Feb 17:18

Google Docs gets an API for task automation

by Frederic Lardinois

Google today announced the general availability of a new API for Google Docs that will allow developers to automate many of the tasks that users typically do manually in the company’s online office suite. The API has been in developer preview since last April’s Google Cloud Next 2018 and is now available to all developers.

As Google notes, the REST API was designed to help developers build workflow automation services for their users, build content management services, and create documents in bulk. Using the API, developers can also set up processes that manipulate documents after the fact to keep them up to date. The API also features the ability to insert, delete, move, merge, and format text; insert inline images; and work with lists, among other things.

The canonical use case here is invoicing, where you need to regularly create similar documents with ever-changing order numbers and line items based on information from third-party systems (or maybe even just a Google Sheet). Google also notes that the API’s import/export abilities allow you to use Docs for internal content management systems.
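That invoicing flow boils down to a single documents.batchUpdate call carrying a list of request objects. As a hedged sketch, the helper and line items below are invented for illustration; only the request field names (insertText, location, index) follow the Docs API.

```python
def build_invoice_requests(line_items):
    """Build a Docs API batchUpdate request list that appends one line per item.

    Each insertText request targets index 1 (just after the document start),
    so items are inserted in reverse to keep them in their original order.
    """
    requests = []
    for desc, qty, price in reversed(line_items):
        requests.append({
            "insertText": {
                "location": {"index": 1},
                "text": f"{desc}\t{qty}\t${price:.2f}\n",
            }
        })
    return requests

# Illustrative line items only.
reqs = build_invoice_requests([("Widget A", 2, 9.99), ("Widget B", 1, 24.00)])
# The real call would look like:
# service.documents().batchUpdate(documentId=DOC_ID,
#                                 body={"requests": reqs}).execute()
print(reqs[0]["insertText"]["text"])  # Widget B	1	$24.00
```

In practice the document ID would point at a copy of an invoice template, with the line items pulled from a third-party system or a Google Sheet, exactly as the canonical use case describes.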

Some of the companies that built solutions based on the new API during the preview period include Zapier, Netflix, Mailchimp and Final Draft. Zapier integrated the Docs API into its own workflow automation tool to help its users create offer letters based on a template, for example, while Netflix used it to build an internal tool that helps its engineers gather data and automate its documentation workflow.

30 Jan 21:39

Waterproof Chemical Sensor Collects Biometric Data During Water Sports

by Cabe Atwell

Researchers from Northwestern University’s Rogers Research Group have developed a wearable biometric sensor that collects and analyzes athletes’ sweat, even while they are swimming underwater for prolonged periods. Our sweat carries a host of chemical information about our bodies, such as salt, sugar, hormone, drug, alcohol, and electrolyte levels, which are indicative of our overall health. Just like other demanding physical activities, swimming makes us sweat even while underwater, but until now there hasn’t been an effective way to collect and analyze that sweat in the water.

The waterproof skin patch allows for monitoring biometrics during water sports. (📷: Rogers Research Group)

The researchers’ sweat sensor features a waterproof, moldable, elastomeric circular patch that deforms with the body and adheres to skin no matter the conditions. The underside of the patch has a tiny hole that allows sweat to enter, while a myriad of separate microfluidic channels pushes the sweat to an internal sensor where it is analyzed. An embedded NFC chip can then transmit the data to a mobile device that medical professionals can use to view the results.

The Rogers Research Group’s chemical sensor features an elastic patch with a tiny hole underneath that allows sweat to enter. The collected fluid travels through small channels to the sensor, where it is analyzed and stored. (📷: Rogers Research Group)

Each microfluidic channel serves as a different miniature test lab: one for fluid levels, another for chloride concentration, another for sweat loss, and so forth. The sweat mixes with different chemicals within the micro-channels, causing it to change color, with each color representing a specific test and allowing athletes to see body chemistry changes in real time.

Besides sending the collected data wirelessly to a mobile device for further analysis, the wearer could also take a picture of the patch, and an app could tell them whether they need to drink more water or risk dehydration, something that can happen even while swimming.

The elastic polymer patch allows the sensor to deform to the skin creating a seal that prevents water from entering. (📷: Rogers Research Group)

The wearable sweat sensors are already in clinical use at the Lurie Children’s Hospital in Chicago, where they are used to screen newborns for cystic fibrosis by measuring chloride levels in their sweat. They will also be fielded to athletes to put them through ‘extreme test-cases’ before they become available on the market.


Waterproof Chemical Sensor Collects Biometric Data During Water Sports was originally published in Hackster Blog on Medium, where people are continuing the conversation by highlighting and responding to this story.

29 Jan 22:59

The Pentagon compiled research into invisibility cloaking, wormholes, and warp drive

by Makena Kelly

A document released this month revealed that a secretive, multimillion-dollar Department of Defense program from the late 2000s compiled research into invisibility cloaks, warp drive, and many other areas of fringe space science, as part of a now-defunct effort aimed at detecting and potentially explaining strange sightings in the Earth’s atmosphere.

The five-page document includes a list of papers written for the program, originally sent to two members of Congress last year. The pages were released on January 16th as a response to a Freedom of Information Act (FOIA) request from the Federation of American Scientists.

Between 2007 and 2012, the Defense Intelligence Agency (DIA) spent $22 million on this UFO program, which was formally known...

Continue reading…

28 Jan 22:17

MIT Creates Antennas for Wearables That Harvest Energy From Wi-Fi Signals

by Cameron Coward

One of the biggest challenges in developing wearable devices is energy storage. Wearables, like any other mobile device, need to be as small and light as possible, which means that bulky, heavy batteries are a major design constraint. This new antenna design, created by researchers at MIT and several other institutions, solves that problem by harvesting energy from the radio waves all around us.

Researchers have designed a flexible, battery-free “rectenna” — a device that converts energy from Wi-Fi signals into electricity. (📷: Christine Daniloff / MIT News)

That, by itself, isn’t a new concept; energy-collecting antennas, called rectennas, have been around for a long time. But traditional rectenna designs have a lot of drawbacks: they’re relatively expensive per unit area, they’re rigid, and they can only harvest energy from a limited portion of the radio spectrum. This new design utilizes an inexpensive material called molybdenum disulfide (MoS2) that results in thin, flexible rectennas that can collect energy from a wide range of radio waves.

With the MoS2 material, they are able to build the rectifier portion of the antenna at just three atoms thick. That means these antennas can be incorporated into thin, lightweight, and even flexible wearable devices. Such an antenna can harvest electricity with up to 40% efficiency from wireless signals up to 10 gigahertz, which includes Wi-Fi and cellular signals that are always around us — but which usually just go to waste.

Because they’re inexpensive to construct, these new rectennas have potential in a wide range of industries. They could be used to power implantable medical devices where the safety of batteries is a concern. Thinking on a much larger scale, the researchers believe these could also be used to power entire smart roads, bridges, and other civil engineering structures.


MIT Creates Antennas for Wearables That Harvest Energy From Wi-Fi Signals was originally published in Hackster Blog on Medium, where people are continuing the conversation by highlighting and responding to this story.

05 Jan 13:03

Google’s Project Soli radar is sensitive enough to count sheets of paper and read Lego bricks

by James Vincent
Project Soli uses miniature radar to detect gestures.

What does the computer interface of the future look like? One bet from Google is that it will involve invisible interfaces you can tweak and twiddle in mid-air. This is what the company is exploring via Project Soli, an experimental hardware program which uses miniature radar to detect movement, and which recently won approval from the FCC for further study.

Imagining exactly how this tech will be put to use is tricky, but a group of researchers from the University of St Andrews in Scotland is exploring its limits. In a paper published last month, they show how Project Soli hardware can be used for a range of precise sensing tasks. These include counting the number of playing cards in a deck, measuring compass orientation, and...

Continue reading…

02 Jan 08:48

This awesome homemade jukebox is controlled by swipeable song cards

by Jacob Kastrenakes

In an era where everything is controlled by touchscreens and oblique voice commands, there’s something incredibly satisfying about a gadget with simple, tactile controls. That’s probably why designer Chris Patty’s homemade jukebox looks so charming: it’s controlled by physical cards, each printed with an artist and album art on the front, that you swipe to play a song.

Patty created the jukebox as a Christmas gift for his father, after his family decided to only swap handmade presents this year. He later posted a short video of the creation to Twitter, where he’s received enough positive responses that he’s working on an open source version of the software and instructions so that fans can make their own.

Continue reading…

19 Dec 22:37

Steppingstone VR Uses Multi-Platform Electromagnetic Propulsion To Fight Sim Sickness

by Jamie Feltham

Steppingstone VR thinks its new approach to VR locomotion might be the one to solve simulation sickness.

The company is working on a motion platform that uses electromagnetic propulsion to physically move players around as they stand/sit on a platform. You can see it in the early prototype video below; the platform gets its power supply from a specialized floor, a little like bumper cars, allowing it to quickly adapt and move in response to the player’s input in VR. The sensations of physically moving that the player feels should help to combat sickness in games with smooth locomotion such as Skyrim VR.

But this is just the first step (sorry) for Steppingstone VR. Over email, CEO Samy Bensmida tells me that the consumer version of its product aims to include multiple moving platforms that users will be able to step onto. Tiles will move backward as you step onto them, in theory allowing you to physically walk around a massive game world without ever leaving the center of a space. You can see a similar concept in the video below, though Bensmida explains that this system uses wheels, whereas Steppingstone’s electromagnetic propulsion will give it a greater degree of autonomy.

“You will walk all day long in Skyrim with your legs, no harness, and get all the congruent inertial cues,” Bensmida said.

And, yes, as expensive as it looks, Bensmida says the product is “100% consumer,” with the aim of streamlining it to be viable for homes. Based on the prototype, there’s still a lot of work to be done before Steppingstone becomes something we’d consider making space for, and we’d still be concerned about the safety of navigating multiple moving platforms while essentially blindfolded in VR.

Still, Bensmida seems confident the team will pull it off, and is preparing a Kickstarter crowdfunding campaign to help it get there. The product is currently estimated to run on a “consumer safe” 12V supply, and the campaign will likely seek around $150,000.

Would you put down electromagnetic flooring in your house if it meant complete and utter VR immersion?


The post Steppingstone VR Uses Multi-Platform Electromagnetic Propulsion To Fight Sim Sickness appeared first on UploadVR.

19 Dec 07:29

Samsung’s stylish The Frame and Serif 4K TVs will soon come in more sizes with better picture quality

by Chris Welch

Ahead of CES, Samsung is announcing upcoming refreshes of its two most stylish 4K TVs, The Frame and Serif. These are lifestyle pieces that aim to make people rethink what a TV can and should look like. They don’t offer Samsung’s best picture performance — that’s still reserved for the proper QLED lineup — but they’re definitely good for attracting conversation in the home.

The Frame is being upgraded with an improved picture over its previous two iterations. The 2019 model will feature Samsung’s quantum dot display technology for a wider HDR color palette. Aside from offering a better picture, The Frame will also now come in a new 49-inch size. (Last year’s edition came in 43-, 55-, and 65-inch sizes.) Samsung markets The Frame to...

Continue reading…

16 Dec 18:56

These are the Planters You are Looking For!

by Geeks are Sexy

Etsy user RedwoodStoneworks uses silicone molds and a little plaster to make what have to be the most awesome planters that have ever been created. Don’t believe me? Check those out:

From RedwoodStoneworks:

Each piece is finely finished, I make each one from scratch, sanding and beveling all rough edges for a smooth finely finished look! All details for realistic finish are hand painted.

[Pop Culture Planters]

The post These are the Planters You are Looking For! appeared first on Geeks are Sexy Technology News.

14 Dec 07:18

These face-generating systems are getting rather too creepily good for my liking

by Devin Coldewey

Machine learning models are getting quite good at generating realistic human faces — so good that I may never trust a machine, or human, to be real ever again. The new approach, from researchers at Nvidia, leapfrogs others by separating levels of detail in the faces and allowing them to be tweaked separately. The results are eerily realistic.

The paper, published on the preprint repository arXiv (PDF), describes a new architecture for generating and blending images, particularly human faces, that “leads to better interpolation properties, and also better disentangles the latent factors of variation.”

What that means, basically, is that the system is more aware of meaningful variation between images, and at a variety of scales to boot. The researchers’ older system might, for example, produce two “distinct” faces that were mostly the same except the ears of one are erased and the shirt is a different color. That’s not really distinctiveness — but the system doesn’t know that those are not important pieces of the image to focus on.

It’s inspired by what’s called style transfer, in which the important stylistic aspects of, say, a painting, are extracted and applied to the creation of another image, which (if all goes well) ends up having a similar look. In this case, the “style” isn’t so much the brush strokes or color space, but the composition of the image (centered, looking left or right, etc.) and the physical characteristics of the face (skin tone, freckles, hair).

These features can have different scales, as well — at the fine side, it’s things like individual facial features; in the middle, it’s the general composition of the shot; at the largest scale, it’s things like overall coloration. Allowing the system to adjust all of them changes the whole image, while only adjusting a few might just change the color of someone’s hair, or just the presence of freckles or facial hair.
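That scale-separated control can be sketched as swapping per-scale style vectors between two sources. The toy example below is not Nvidia’s actual model; it just illustrates the idea that each scale’s style is an independent knob, so you can take composition from one face and fine detail from another.

```python
# Toy illustration of scale-wise style mixing (not the actual StyleGAN code):
# each image is described by a style vector per scale, and a new image spec
# is formed by choosing, per scale, which source to copy from.

SCALES = ("coarse", "middle", "fine")  # composition, pose, texture/color

def mix_styles(source_a: dict, source_b: dict, take_from_b: set) -> dict:
    """Copy each scale's style from A, except the scales listed in take_from_b."""
    return {s: (source_b[s] if s in take_from_b else source_a[s]) for s in SCALES}

a = {"coarse": [0.1, 0.9], "middle": [0.5, 0.5], "fine": [0.2, 0.8]}
b = {"coarse": [0.7, 0.3], "middle": [0.9, 0.1], "fine": [0.6, 0.4]}

# Keep A's composition and pose, but take B's fine detail (hair, freckles).
mixed = mix_styles(a, b, take_from_b={"fine"})
```

Changing only the `fine` entry corresponds to the article’s example of altering hair color or freckles without touching the overall composition of the shot.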

In the image at the top, notice how completely the faces change, yet markers of both the “source” and “style” are obviously present, for instance the blue shirts in the bottom row. In other cases things are made up out of whole cloth, like the kimono the kid in the very center seems to be wearing. Where’d that come from? Note that all this is totally variable, not just A + B = C, but with all aspects of A and B present or absent depending on how the settings are tweaked.

None of these are real people. But I wouldn’t look twice at most of these images if they were someone’s profile picture or the like. It’s kind of scary to think that we now have basically a face generator that can spit out perfectly normal looking humans all day long. Here are a few dozen:

It’s not perfect, but it works. And not just for people. Cars, cats, landscapes — all this stuff more or less fits the same paradigm of small, medium and large features that can be isolated and reproduced individually. An infinite cat generator sounds like a lot more fun to me, personally.

The researchers have also published a new data set of face data: 70,000 images of faces collected (with permission) from Flickr, aligned and cropped. They used Mechanical Turk to weed out statues, paintings and other outliers. Given that the standard data set used by these types of projects is mostly red-carpet photos of celebrities, this should provide a much more varied set of faces to work with. The data set will be available for others to download soon.

13 Dec 13:21

Salesforce Opens Up Lightning Platform to World’s 7 Million+ Javascript Developers

by KevinSundstrom

In a move that will open its platform to the more than 7 million JavaScript developers worldwide, Salesforce today announced its Lightning Web Components framework. The technology makes it possible for developers to use JavaScript to customize browser-based web applications built on top of Salesforce’s core capabilities, in the same way they might use JavaScript to customize the browser side of any other web app.

10 Dec 21:20

Trello acquires Butler to add power of automation

by Ron Miller

Trello, the organizational tool owned by Atlassian, announced an acquisition of its very own this morning when it bought Butler for an undisclosed amount.

What Butler brings to Trello is the power of automation, stringing together a bunch of commands to make something complex happen automatically. As Trello’s Michael Pryor pointed out in a blog post announcing the acquisition, we are used to tools like IFTTT, Zapier and Apple Shortcuts, and this will bring a similar type of functionality directly into Trello.

Screenshot: Trello

“Over the years, teams have discovered that by automating processes on Trello boards with the Butler Power-Up, they could spend more time on important tasks and be more productive. Butler helps teams codify business rules and processes, taking something that might take ten steps to accomplish and automating it into one click,” Pryor wrote.

This means that Trello can be more than a static organizational tool. Instead, it can move into the realm of light-weight business process automation. For example, this could allow you to move an item from your To Do board to your Doing board automatically based on dates, or to share tasks with appropriate teams as a project moves through its life cycle, saving a bunch of manual steps that tend to add up.
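Automation of this kind boils down to trigger-action rules evaluated against board events. The sketch below is a generic rule engine invented for illustration; Butler’s actual command language and Trello’s data model differ.

```python
# Minimal sketch of trigger-action automation of the kind Butler provides:
# "when a card moves to the Done list, mark it complete and notify the team."
# The rule format and card structure here are hypothetical, for illustration.

def apply_rules(card: dict, event: str, rules: list) -> dict:
    """Run every rule whose trigger matches the (card, event) pair."""
    for trigger, action in rules:
        if trigger(card, event):
            action(card)
    return card

rules = [
    (lambda c, e: e == "moved" and c["list"] == "Done",
     lambda c: c.update(complete=True, notified=["team"])),
]

card = {"name": "Ship release", "list": "Done"}
apply_rules(card, "moved", rules)  # card now has complete=True
```

The value, as Pryor notes, is collapsing a ten-step manual process into a single declarative rule that fires on its own.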

The company indicated that it will be incorporating Butler’s capabilities directly into Trello in the coming months. It will make them available to all levels of users, including the free tier, but promises more advanced functionality for Business and Enterprise customers when the integration is complete. Pryor also suggested that more automation could be coming to Trello: “Butler is Trello’s first step down this road, enabling every user to automate pieces of their Trello workflow to save time, stay organized and get more done.”

Atlassian bought Trello in 2017 for $425 million, but this acquisition indicates it is functioning quasi-independently as part of the Atlassian family.

10 Dec 08:31

China’s JD.com teams up with Intel to develop ‘smart’ retail experiences

by Jon Russell

Months after it landed a major $550 million investment from Google, China’s JD.com — the country’s second highest-profile e-commerce company behind Alibaba — has teamed up with another U.S. tech giant: Intel.

JD and Intel said today that they will set up a “lab” focused on bringing internet-of-things technology into the retail process. That could include new-generation vending machines, advertising experiences, and more.

That future is mostly offline — or, in China tech speak, ‘online-to-offline’ retail — combining the benefits of e-commerce with brick-and-mortar shopping. Already, for example, customers can order ahead of time and come in store for collection, buy items without a checkout, take advantage of ‘smart shelves’ or simply try products in person before they buy them.

Indeed, TechCrunch recently visited a flagship JD ‘7Fresh’ store in Beijing and reported on the hybrid approach that the company is taking.

JD is backed by Chinese internet giant Tencent and valued at nearly $30 billion. The company already works with Intel on personalized shopping experiences, but this new lab is focused on taking things further with new projects and working to “facilitate their introduction to global markets.”

“The Digitized Retail Joint Lab will develop next-generation vending machines, media and advertising solutions, and technologies to be used in the stores of the future, based on Intel architecture,” the companies said in a joint announcement.

JD currently operates three 7Fresh stores in China but it is aiming to expand that network to 30. It has also forayed overseas, stepping into Southeast Asia with the launch of cashier-less stores in Indonesia this year.

07 Dec 12:49

Jibo Shuts Down, Selling Off Robot Parts

by Bret Kinsella

Jibo was one of the best funded and most publicized social robots around. Despite that, it appears the company and robot are no more. An investment management firm in New York purchased the assets on June 20, 2018. According to The Robot Report:

“Social robot maker Jibo has sold its IP assets. According to a former Jibo executive with direct knowledge of the situation, New York-based investment management firm SQN Venture Partners is the new owner.”

Jibo Layoffs in June Preceded Asset Sell-off

Around that same time in June, BostInno reported that Jibo’s California office was marked as permanently closed and layoffs at the Boston Headquarters were “significant.” Co-founded in 2012 by MIT’s Cynthia Breazeal, the company raised $61 million from more than a dozen different investors including blue-chip VC firm Charles River Ventures. Jibo was valued at $200 million and had 95 employees as late as November 2016 according to Pitchbook.

The robot finally came to market in October 2017 but apparently did not establish enough momentum or favorable user reviews to overcome its $899 price point. It was also apparently delayed so much that Indiegogo required the company to refund over $3 million in pre-orders before the robot came to market. MediaPost’s Chuck Martin summed up the situation succinctly last week, saying:

“The closure comes as no surprise since the company laid off much of its workforce in June…In July, another social robot named Kuri was shut down. That device was from the Bosch Startup Platform and was an award winner launched at CES 2017. Social and home robots are getting better as robot makers continue their quest to create a robot that consumers not only want but also will pay for.”

It Couldn’t Do Much More Than a Headless Robot, i.e. A Smart Speaker

There were many aspects of Jibo’s personality that made it unique. Its physical world interaction was compelling, but design doesn’t always win. Sometimes great design can’t overcome a high price point, limited utility, and consumers not quite sure why they need the product. While Jibo was trying to find its way, the founder’s vision was being realized in a less sophisticated way by headless robots, also known as smart speakers. For $899, Jibo could tell you the news and weather, set a timer, and play music or tell you a joke all in response to natural language interaction. For less than $30 this week, Amazon Echo Dot and Google Home Mini can perform those tasks and much more.

Smart speakers have brought voice accessible media, information, entertainment, and other utilities into the home for a nominal cost and you don’t have to worry about the devices accidentally rolling down the stairs. In some ways, smart speakers are paving the way for social robots by training consumers in new behaviors that involve voice interaction with computers. However, these devices are also delivering the low hanging fruit of benefits that many social robots sought to provide. This means social robots have to deliver clearly differentiated and meaningfully beneficial value beyond what smart speaker-based voice assistants offer today. Those use cases surely exist and we have seen several interesting robot applications in business settings. The unanswered question is what benefits will spur consumers to want and pay for social robots.


The post Jibo Shuts Down, Selling Off Robot Parts appeared first on Voicebot.

07 Dec 12:47

7 things to think about voice

by David Riggs

The next few years will see voice automation take over many aspects of our lives. Although voice won’t change everything, it will be part of a movement that heralds a new way to think about our relationship with devices, screens, our data and interactions.

We will become more task-specific and less program-oriented. We will think less about items and more about the collective experience of the device ecosystem they are part of. We will enjoy the experiences they make possible, not the specifications they celebrate.

In the new world, I hope we relinquish our role as the slaves we are today and get back in control.

Voice won’t kill anything

The standard way that technology arrives is to augment more than replace. TV didn’t kill the radio. VHS and then streamed movies didn’t kill the cinema. The microwave didn’t destroy the cooker.

Voice more than anything else is a way for people to get outputs from and give inputs into machines; it is a type of user interface. With UI design we’ve had the era of punch cards in the 1940s, keyboards from the 1960s, the computer mouse from the 1970s and the touchscreen from the 2000s.

All four of these mechanisms are around today and, with the exception of the punch card, we freely move between input types based on context. Touchscreens are terrible in cars and on gym equipment, but great for tactile applications. Computer mice are great for pointing and clicking. Each input does very different things brilliantly and badly, and we have learned what each is best used for.

Voice will not kill brands, it won’t hurt keyboard sales or touchscreen devices — it will become an additional way to do stuff; it is incremental, not cannibalistic.

We need to design around it

Nobody wanted the computer mouse before it was invented. In fact, many were perplexed by it because it made no sense in the previous era, where we used command lines, not visual icons, to navigate. When I worked with Nokia on touchscreens before the iPhone, the user experience sucked because the operating system wasn’t designed for touch. 3D Touch still remains pathetic because few software designers got excited by it and built for it.

What is exciting about voice is not finding ways to add voice interaction to current systems, but considering new applications, interactions and use cases we’ve never seen.

At the moment, the burden is on us to fit around the limitations of voice, rather than have voice work around our needs.

A great new facade

Have you ever noticed that most companies’ desktop websites are their worst digital interface? Their mobile site is likely better, and the mobile app will be best. Most airline or hotel or bank apps don’t offer pared-down experiences (as was once the case), but their very fastest, slickest experience with the greatest functionality. What tends to happen is that new things get new capex, the best people and the most ability to bring change.

However, most digital interfaces are still designed around the silos, workflows and structures of the company that made them. Banks may offer eight different ways to send money to someone or something based around their departments; hotel chains may ask you to navigate by their brand of hotel, not by location.

The reality is that people are task-oriented, not process-oriented. They want an outcome and don’t care how. Do I give a crap if it’s Amazon Grocery or Amazon Fresh or Amazon Marketplace? Not one bit. Voice allows companies to build a new interface on top of the legacy crap they’ve inherited. I get to “send money to Jane today,” not press 10 buttons around their org chart.
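The “task, not process” idea maps naturally onto intent parsing: a voice front end extracts one intent plus its slots from the utterance, regardless of which internal department ends up handling it. A minimal sketch, with an invented grammar (a production voice stack would use a trained language-understanding model, not a regex):

```python
import re

# Hypothetical intent extractor for a "send money" utterance. The grammar
# and the returned fields are invented for illustration.

def parse_transfer(utterance: str):
    m = re.match(r"send (?:\$?(\d+) )?(?:money )?to (\w+)(?: (today|tomorrow))?",
                 utterance.lower())
    if not m:
        return None
    amount, payee, when = m.groups()
    return {"intent": "transfer",
            "amount": int(amount) if amount else None,
            "payee": payee.capitalize(),
            "when": when or "today"}

print(parse_transfer("Send money to Jane today"))
```

The point is that the user states one task (“send money to Jane today”) and the interface, not the user, is responsible for routing it through whatever org chart sits behind it.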

It requires rethinking

The first time I showed my parents a mouse and told them to double-click, I thought they were having a fit. The cursor would move in jerks and often get lost. The same dismay and disdain I once had for them, I now feel every time I try to use voice. I have to reprogram my brain to think about information in a new way and to reconsider how my brain works. While this will happen, it will take time.

What gets interesting is what happens to the 8-year-olds who grow up thinking of voice first, what happens when developing nations embrace tablets with voice not desktop PCs to educate. When people grow up with something, their native understanding of what it means and what it makes possible changes. It’s going to be fascinating to see what becomes of this canvas.

Voice as a connective layer

We keep being dumb and thinking of voice as the way to interact with “a” machine and not as a glue between all machines. Voice is an inherently crap way to get outputs; if a picture is worth a thousand words, how long will it take to buy a t-shirt? The real value of voice is as a user interface across all devices. Advertising in magazines should offer voice commands to find out more. You should be able to yell at the Netflix carousel, or at TV ads to add products to your shopping list. Voice won’t be how we “do” entire things; it will be how we trigger or finish things.

Proactivity

We’ve only ever assumed we talk to devices first. Do I really want to remember the command for turning on lights in the home and utter six words to make it happen? Do I want to always be asking? Assuming devices are selective about when they speak first, it’s fun to see what happens when voice is proactive. Imagine the possibilities:

  • “Welcome home, would you like me to select evening lighting?”
  • “You’re running late for a meeting, should I order an Uber to take you there?”
  • “Your normal Citi Bike station has no bikes right now.”
  • “While it looks sunny now, it’s going to rain later.”
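Proactive prompts like these are essentially condition-to-utterance rules evaluated against the current context. A hypothetical sketch (the context keys and phrasing are invented for illustration):

```python
# Hypothetical sketch of proactive voice prompts: each rule pairs a
# condition on the current context with the utterance to speak.

RULES = [
    (lambda ctx: ctx.get("just_arrived_home"),
     "Welcome home, would you like me to select evening lighting?"),
    (lambda ctx: ctx.get("minutes_to_meeting", 99) < 10,
     "You're running late for a meeting, should I order a ride?"),
    (lambda ctx: ctx.get("rain_later") and ctx.get("sunny_now"),
     "While it looks sunny now, it's going to rain later."),
]

def proactive_prompts(ctx: dict) -> list:
    """Return the utterances whose conditions hold for this context."""
    return [speech for cond, speech in RULES if cond(ctx)]

print(proactive_prompts({"just_arrived_home": True,
                         "rain_later": True, "sunny_now": True}))
```

The hard design problem is not the rules themselves but being selective: a device that speaks first too often becomes noise rather than help.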

Automation

While many think we don’t want to share personal information, there are ample signs that if we get something in return, trust the company, and there is transparency, it’s OK. Voice will not develop alone; it will progress alongside Google suggesting email replies, Amazon suggesting things to buy, and Siri contextually suggesting apps to use. We will slowly become used to the idea of outsourcing our thinking and decisions somewhat to machines.

We’ve already outsourced a lot; we can’t remember phone numbers, addresses or birthdays — we even rely on images to jog our recollection of experiences. So it’s natural we’ll outsource some decisions.

The medium-term future in my eyes is one where we allow more data to be used to automate the mundane. Many think that voice is asking Alexa to order Duracell batteries, but it’s more likely to be never thinking about batteries or laundry detergent or other low-consideration items again, as subscriptions replenish them automatically.

There is an expression that a computer should never ask a question for which it can reasonably deduce the answer itself. When a technology is really here we don’t see, notice or think about it. The next few years will see voice automation take over many more aspects of our lives. The future of voice may be some long sentences and some smart commands, but mostly perhaps it’s simply grunts of yes.

06 Dec 21:21

On The Naughty/Nice List Sweater

by staff

Walking the thin line between naughty and nice is easier than ever when you show up dressed in this naughty/nice list sweater. The eye-catching design lets you flip-flop between the naughty and nice list depending on your current mood.

Check it out

$19.58

05 Dec 16:48

I Crashed A Mixed Reality Go Kart Into A Real Barrier

by Ian Hamilton

I drove 125 miles to K1 Speed in the Los Angeles area coasting at 70 miles per hour most of the way. Now I’m looking at one of K1’s karts on a real-world race track. The seat is low to the ground and I sit down, stretching out my legs on either side of the vehicle and wondering if traditional driving experience will translate.

The kart features a temporary rigging to attach a computer and Oculus Rift VR headset. The speed of the kart is remotely adjustable by the system Master of Shapes is demonstrating. As part of this rigging, lights effectively broadcast the kart’s position to cameras overhead spanning the length of the winding track. There’s even a button on the wheel that could deliver one of the world’s first mixed reality versions of something like Mario Kart.

Sure, it is amazing to wear a VR headset so you can sit in Mushroom Kingdom while seated on a real-world motion platform. But that’s a different caliber of experience from the one I’m testing, which will move my body through the real world in an accurate feedback loop with the way I push the pedals and turn the wheel. It is similar to the “mixed reality” experience we saw in the Oculus Arena at the most recent Oculus Connect VR developer’s conference, which incorporated real-world mapping. Except this time I’ll be moving through real space in a vehicle under my control.

Which brings me back to that button on the wheel — the one that “could deliver one of the world’s first mixed reality versions of something like Mario Kart.” Representatives from Master of Shapes told me not to push the button. They were explicit about it before I got in the kart. The button was intended entirely for development purposes at the moment I sat down.

One day there could be races here at K1 where a kid too young to drive a kart on their own could grab a gamepad and log into the same race as their elder sibling out on the actual “speedway.” One day that button on the wheel could launch a virtual weapon to slow down another player’s kart.

I press down on the pedal and…

Not long after the video above ends there’s a hard left turn and, in my growing confidence blindfolded to the real world, I move my hands into a new position. I should remind you again they told me not to push the button. In fact, they even warned me what would happen if I did. The virtual world would rotate 90 degrees off the physical barriers of the real world.

“Oh ok,” I thought at the time. “That’s bad. Don’t touch the button. Now let me drive the thing.”

So I’m hurtling around that corner and suddenly the world snaps into a new position. In front of my eyes now, directly ahead, is the railing of the virtual track. I panic and can’t remember which foot to use to brake the kart.

Instead, I brace and hope for the best.

I seem to be fine for a few seconds and then BAM!

I took the Rift off and laughed. They told me how to put the kart in reverse and we wheeled it back to the starting line for a reset. The second time, I went slow for the first lap and then really pressed the pedal down for the second one. It all worked fine for a few laps as I came back to where I started in the real world.


The post I Crashed A Mixed Reality Go Kart Into A Real Barrier appeared first on UploadVR.

04 Dec 07:26

AsReader RFID Reader/Writer, Barcode Scanner, SoftScan and more

by Charbax

At the 2018 IDTechEx Show! in Santa Clara, AsReader, Inc. showcases a variety of hardware consisting of RFID reader/writers, 1D and 2D barcode scanners, and an all-new medical-grade battery/wireless-charging sled with case. From a pocket-sized AsReader barcode scanner to the 10m/32ft long-distance gun-type RFID reader and/or barcode scanner, AsReader hardware is compatible with most iOS devices, including: iPhone 8 Plus/7 Plus/6s Plus/6 Plus, iPhone 8/7/6s/6, iPhone SE/5s/5, iPod touch 6th/5th generation, and iPad mini 3/2/1. AsReader’s handheld sleds are available with a white or black case for tracking logistics, healthcare patients and medications, retail inventory cycle counts and markdowns, and event management. The standard barcode scanners, UHF RFID reader/writers, and HF/NFC readers come with a royalty-free SDK with APIs for connecting to other software. AsReader also takes orders from Android users with a small MOQ (minimum order quantity).

04 Dec 07:23

Google is building digital art galleries you can step into

by Lucas Matney

Google wants to help you take a closer look at the art world.

The company’s Arts & Culture app has long been one of its cooler niche apps, and one that I often feel guilty about overlooking every time I rediscover it. Today, the company has added another experience into the mix, focused on collecting the known works of Dutch master Johannes Vermeer and curating them in a single place.

The feature looks a lot like many of the company’s other deep dives, including listicles of factoids, interviews with experts and editorials. What makes this presentation unique is that the company actually constructed a miniature 3D art gallery that can utilize your phone’s AR functionality to plop into physical space in front of you.

With ARCore or ARKit, you can move through the “Pocket Gallery” and get close to the high-resolution captures of the paintings while also bringing up information about the works.

Having just tried it, this is one of those things that honestly doesn’t make a ton of sense to do with phone AR. Having a fully rendered gallery pop on your coffee table is an interesting gimmick, but they probably could have ditched the AR for a fully rendered 3D environment that’s more of a traversable object or just left the immersive views for VR and stuck with 2D exploration on your phone.

Nevertheless, it all makes for some interesting experimentation, and it’s just cool to see Google trying out new things with experiencing digital art in a more immersive way. Google’s Arts & Culture app is available on iOS and Android.