Shared posts

01 Jun 22:35

Celebrities Pose Along With Their Younger Selves [Gallery]

by Geeks are Sexy

This series of photoshops by Dutch graphic designer Ard Gelinck shows various celebrities posing along with a younger version of themselves! Fantastic work!

[Source: Ard Gelinck | Via GG]

The post Celebrities Pose Along With Their Younger Selves [Gallery] appeared first on Geeks are Sexy Technology News.

09 Feb 16:50

Badass Roleplaying Battle Dogs [Gallery]

by Geeks are Sexy

Check out this series of badass battle dogs by artist Nikita Orlov.

[Source: Nikita Orlov on Bored Panda]

The post Badass Roleplaying Battle Dogs [Gallery] appeared first on Geeks are Sexy Technology News.

08 Feb 21:17

Chateau Picard Wines are The Ultimate Collectibles For Star Trek Fans

by Geeks are Sexy

We all know that Jean-Luc Picard is an accomplished vintner, so it stands to reason that someone would release a highly collectible series of wines sporting the name of the iconic Star Trek captain. For those interested, Star Trek Wines offers two limited-edition wines: a Chateau Picard Cru Bordeaux, bottled at Chateau Picard in France, and a Special Reserve United Federation of Planets Old Vine Zinfandel.

The true backstory of Chateau Picard wine is that for several generations, the original Chateau Picard winery in Bordeaux, France has been producing world-class, highly regarded and sought-after Cru Bourgeois Bordeaux. Working with Chateau Picard’s winemakers in France, Star Trek Wines crafted this limited-edition release featuring the Jean-Luc Picard family label shown in the new series Star Trek: Picard, with wine from the multi-generational vineyard in Bordeaux.

Elegant, stately and dignified are but a few words to describe the wine that Federation dignitaries might enjoy at their gatherings. It has aromatics of concentrated strawberry, blackberry and plum preserves with a chewy-layered mid-palate filled with hints of white peppercorn, sweet red and black fruit.

[Star Trek Wines | Via GG]

The post Chateau Picard Wines are The Ultimate Collectibles For Star Trek Fans appeared first on Geeks are Sexy Technology News.

01 Feb 19:37

Landscapes Turned Upside Down

by Pauline

To mark the launch of United Airlines’ Dreamliner service in Australia, Sydney-based studio Cream Electric Art created an advertising campaign that turns the world upside down. In a series of posters, these photomontages offer a surprising, never-before-seen view: like a pop-up book, the landscapes fill both the foreground and the horizon, this time seen from the sky. Titled “Dreamers Welcome”, the campaign was led by creative director Cameron Hearne together with photographer Jeffrey Milstein.

24 Dec 22:36

Golden Snitch Globe Light turns on with a single touch

by The Gadget Flow Editors
Display your Harry Potter pride with the Golden Snitch Globe Light. This dome-shaped desk lamp activates with a single touch, just as the snitch closes up once it’s touched by the seeker. Simply tap the top of the light and it illuminates to brighten up your room or workspace; gently touch it again to turn it off. Use it on your nightstand, desk, or at your entryway to welcome guests to your home. Officially…

10 Dec 13:14

North ending production of current Focals smart glasses to focus on Focals 2.0

by Darrell Etherington

Smart glasses maker North announced today that it will be ending production of its first-generation Focals glasses, which it brought to market for consumers last year. The company says it will instead shift its focus to Focals 2.0, a next-generation version of the product, which it says will ship starting in 2020.

Focals are North’s first product since rebranding the company from Thalmic Labs and pivoting from building smart gesture control hardware to glasses with a built-in heads-up display and smartphone connectivity. CEO and founder Stephen Lake told me in a prior interview that the company realized in developing its Myo gesture control armband that it was actually more pressing to develop the next major shift in computing platform before tackling interface devices for said platforms, hence the switch.

Focals 2.0 will be “at a completely different level” and “the most advanced smart glasses ever made,” Lake said in a press release announcing the new generation device. In terms of how exactly it’ll improve on the original, North isn’t sharing much, but it has said that it’s made the 2.0 version both lighter and “sleeker,” and that it’ll offer a much sharper, “10x improved” built-in display.

North began selling its Focals smart glasses via physical showrooms that it opened first in Brooklyn and Toronto. These, in addition to a number of pop-up showroom locations that toured across North America, provided in-person try-ons and fittings for the smart glasses, which must be tailor-fit for individual users in order to properly display content from their supported applications. More recently, North also added a Showroom app for iOS devices that includes custom sizing powered by the front-facing depth-sensing camera on recent iPhones.

North’s first-generation Focals smart glasses.

To date, North hasn’t revealed any sales figures for its initial Focals device, but the company did reduce the price of the glasses from $999 to just under $600 (without prescription) relatively soon after launch. Their cost, combined with the requirement for an in-person fitting prior to purchase (until the introduction of the Showroom app) and certain gaps in the product feature set like an inability to support iMessage on iOS natively, all point to initial sales being relatively low volume, however.

To North’s credit, Focals are the first smart glasses hardware that manages to have a relatively inconspicuous look. Despite somewhat thicker-than-average arms on either side, which house the battery, projection and computing components, Focals resemble the thick acrylic plastic frames popularized by Warby Parker and other standard glasses makers.

With version 2.0, it sounds like Focals will be making even more progress in developing a design that hews closely to standard glasses. One of the issues also cited by some users with the first-generation product was a relatively fuzzy image produced by the built-in projector, which required specific calibration to remain in focus, and it sounds like they’re addressing that, too.

The Focals successor will still have an uphill battle when it comes to achieving mass appeal, however. It’s unlikely that cost will be significantly reduced, though any progress it can make on that front will definitely help. And it still either requires non-glasses wearers to opt for regularly donning specs, or for standard glasses wearers to be within the acceptable prescription range supported by the hardware, and to be willing to spend a bit more for connected glasses features.

The company says the reason it’s ending Focals 1.0 production is to focus on the 2.0 rollout, but it’s not a great sign that there will be a pause in between the two generations in terms of availability. Through its two iterations as a company, Thalmic Labs and now North have not had the best track record in terms of developing hardware that has been a success with potential customers – Focals 2.0, whenever they do arrive, will have a lot to prove in terms of iterating enough to drive significant demand.

09 Dec 20:23

Blending Realities with the ARCore Depth API

by Google Developers
Posted by Shahram Izadi, Director of Research and Engineering

ARCore, our developer platform for building augmented reality (AR) experiences, allows your devices to display content immersively in the context of the world around us, making them instantly accessible and useful.
Earlier this year, we introduced Environmental HDR, which brings real world lighting to AR objects and scenes, enhancing immersion with more realistic reflections, shadows, and lighting. Today, we're opening a call for collaborators to try another tool that helps improve immersion with the new Depth API in ARCore, enabling experiences that are vastly more natural, interactive, and helpful.
The ARCore Depth API allows developers to use our depth-from-motion algorithms to create a depth map using a single RGB camera. The depth map is created by taking multiple images from different angles and comparing them as you move your phone to estimate the distance to every pixel.
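The triangulation at the heart of depth-from-motion can be sketched in a few lines: as the phone translates, a nearby point shifts more between frames than a distant one, and that shift (the disparity) pins down its distance. A toy single-point example of the principle, not ARCore's actual algorithm:

```python
# Toy illustration of depth-from-motion: two views of the same point,
# separated by a known camera translation (the baseline), yield a pixel
# disparity that is inversely proportional to depth.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Estimate distance (meters) to a point from its pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("point at infinity or behind the camera")
    return focal_px * baseline_m / disparity_px

# A point that shifts 25 px between frames, with a 500 px focal length
# and 5 cm of camera motion, is about 1 meter away.
print(depth_from_disparity(500.0, 0.05, 25.0))  # 1.0
```

ARCore repeats this kind of estimate for every pixel, fusing many frames as the phone moves.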

Example depth map, with red indicating areas that are close by, and blue representing areas that are farther away.

One important application for depth is occlusion: the ability for digital objects to accurately appear in front of or behind real world objects. Occlusion helps digital objects feel as if they are actually in your space by blending them with the scene. We will begin making occlusion available in Scene Viewer, the developer tool that powers AR in Search, to an initial set of over 200 million ARCore-enabled Android devices today.
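The occlusion test itself is simple in principle: for each pixel, draw the virtual object only where it is closer to the camera than the real surface recorded in the depth map. A deliberately simplified per-pixel sketch (a real renderer does this comparison in a shader):

```python
# Minimal per-pixel occlusion test: a virtual object's pixel is drawn only
# where the object is in front of the real-world surface in the depth map.

def composite(camera_px, virtual_px, real_depth, virtual_depth):
    """Return the final pixels, occluding the virtual object where needed."""
    out = []
    for cam, virt, rd, vd in zip(camera_px, virtual_px, real_depth, virtual_depth):
        # Draw the virtual pixel if one exists AND it is closer than the real surface.
        out.append(virt if virt is not None and vd < rd else cam)
    return out

camera = ["wall", "wall", "table"]
virtual = [None, "cat", "cat"]    # the virtual cat covers two pixels
real_d = [3.0, 3.0, 1.0]          # table is 1 m away, wall 3 m
virt_d = [None, 2.0, 2.0]         # cat is 2 m away
print(composite(camera, virtual, real_d, virt_d))  # ['wall', 'cat', 'table']
```

In the last pixel the cat is hidden behind the closer table, which is exactly the cue that makes virtual objects feel anchored in the scene.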

A virtual cat with occlusion off and with occlusion on.

We’ve also been working with Houzz, a company that focuses on home renovation and design, to bring the Depth API to the “View in My Room 3D” experience in their app. “Using the ARCore Depth API, people can see a more realistic preview of the products they’re about to buy, visualizing our 3D models right next to the existing furniture in a room,” says Sally Huang, Visual Technologies Lead at Houzz. “Doing this gives our users much more confidence in their purchasing decisions.”
The Houzz app with occlusion is available today.
In addition to enabling occlusion, having a 3D understanding of the world on your device unlocks a myriad of other possibilities. Our team has been exploring some of these, playing with realistic physics, path planning, surface interaction, and more.

Physics, path planning, and surface interaction examples.

When applications of the Depth API are combined together, you can also create experiences in which objects accurately bounce and splash across surfaces and textures, as well as new interactive game mechanics that enable players to duck and hide behind real-world objects.
A demo experience we created where you have to dodge and throw food at a robot chef.
The Depth API is not dependent on specialized cameras and sensors, and it will only get better as hardware improves. For example, the addition of depth sensors, like time-of-flight (ToF) sensors, to new devices will help create more detailed depth maps to improve existing capabilities like occlusion, and unlock new capabilities such as dynamic occlusion—the ability to occlude behind moving objects.
We’ve only begun to scratch the surface of what’s possible with the Depth API and we want to see how you will innovate with this feature. If you are interested in trying the new Depth API, please fill out our call for collaborators form.
08 Dec 18:30

Home Theater Leather Sofa

by staff

Do movie night at home right by watching your favorite flicks from the comfort of this home theater leather sofa. This incredibly comfy sofa is loaded with features like adjustable headrests, adjustable lumbar support, a pull-down table, and even a wireless charger.

Check it out


06 Dec 06:51

Gift Guide: Gifts for the promising podcaster

by Brian Heater

Welcome to TechCrunch’s 2019 Holiday Gift Guide! Need help with gift ideas? We’re here to help! We’ll be rolling out gift guides from now through the end of December. You can find our other guides right here.

Spotify reportedly spent nearly $500 million on podcasts in 2019. The good news is that the rest of us can get into that world for considerably less. In fact, the low barrier to entry has always been one of podcasting’s primary selling points.

Before we go any further, I’d recommend everyone check out our ongoing series “How I Podcast,” in which top podcasters give a peek behind the curtain at their podcasting rigs. The standard disclaimer applies here, as ever: there’s no one-size-fits-all solution to any of this. One’s needs will vary greatly depending on how much you’re willing to spend and what the recording setup is (remote vs. in-person, the number of guests you usually have, etc.).

If you’re just getting started, just start. You don’t need high-end mics or mixing boards — even if you’re just recording into your iPhone, it’s better to get the ball rolling than to worry about perfect fidelity right off the bat.

But for you or anyone on your list who’s looking to get a bit more serious about podcasting in 2020, this should be a good place to start. It’s easier than ever to make a show sound professional, one upgrade at a time. What follows is a selection of software and gear for anyone looking to step up their game.

(Oh, and while we’re talking about podcasts… check out my weekly interview show, RiYL)

This article contains links to affiliate partners where available. When you buy through these links, TechCrunch may earn an affiliate commission.

Zencastr Subscription


There are a ton of different compelling software choices for today’s podcaster, from Spotify’s Anchor for real beginners up to Adobe’s Premiere for the pros. For remote recorders, I recommend Zencastr. Our own Original Content podcast uses the software, and I’ve had pretty good experiences with its real-time audio levels and cloud-based recording. Gone are the days of hacking something together out of Skype calls.

Price: $20 per month

Rodecaster Pro


Introduced last year, the Rodecaster Pro is the most expensive item on the list, but it also just might be the most indispensable for anyone looking to set up an at-home studio. It’s a brilliant little multitrack board, and quite frankly, I’m surprised there isn’t more competition in this space yet. For the beginning podcaster up through everyone who’s ready to sign a contract with NPR, the Rodecaster is a terrific, user-friendly solution for recording more than two people face-to-face.

Price: $599 on Amazon

Zoom H4N PRO Digital Multitrack Recorder


When my Tascam finally gave up the ghost earlier this year, I decided to try something new. I’m glad I did. While it’s true that most of these multitrack recorders haven’t changed much in the past decade, Zoom offers a couple of key advantages. Most notable is far better real-time level tracking. I produce my podcast on the fly as I’m recording, and the ability to quickly monitor volume at a glance is paramount. I take the H4N with me wherever I travel, along with a pair of external mics.

Price: $219 on Amazon

AKG Lyra


Logitech’s Blue has had the USB mic market cornered for some time now, but Samsung-owned AKG offers compelling alternatives at an even more compelling price. The $149 Lyra is certainly the best-looking of the bunch. It’s got a USB-C input, real-time monitoring and far clearer settings for a variety of different recording methods. I’ve been playing around with the mic a bit and will offer a more thorough writeup soon, but in the meantime, I can attest that it’s a great-sounding mic for remote recordings.

Price: $150 on Amazon

Blue Raspberry

The Lyra’s biggest drawback, however, is its size. Blue’s Raspberry can’t compete on the sound front, but it’s far more portable. More than once I’ve found myself sticking it in a backpack or a suitcase. Blue also offers up a mini version of the Yeti at a fraction of the price, but this older Blue mic simply sounds better.

Price: $149 on Amazon

Shure SM7B

At over twice the price, the Shure SM7B is a bigger commitment than the previous options. But as the choice of pro-level podcasters all over, Shure’s mics are a studio gold standard. The SM57 is also a terrific (and lower-cost) option for more portable rigs. You’ll get a great-sounding show either way.

Price: $399 on Amazon

Sennheiser Momentum

Whether it’s for editing or just minimizing echoes during interviews, you’ll want a good pair of headphones. There’s no shortage of over/on-ear options, but I’m partial to these Sennheisers for their combination of sound, price and classic good looks.

Price: $199 on Amazon

06 Dec 06:48

Why AWS is selling a MIDI keyboard to teach machine learning

by Frederic Lardinois

Earlier this week, AWS launched DeepComposer, a set of web-based tools for learning about AI by making music, plus a $99 MIDI keyboard for inputting melodies. That launch created a fair bit of confusion, though, so we sat down with Mike Miller, the director of AWS’s AI Devices group, to talk about where DeepComposer fits into the company’s lineup of AI devices, which includes the DeepLens camera and the DeepRacer AI car, both of which are meant to teach developers about specific AI concepts, too.

The first thing that’s important to remember here is that DeepComposer is a learning tool. It’s not meant for musicians — it’s meant for engineers who want to learn about generative AI. But AWS didn’t help itself by calling this “the world’s first machine learning-enabled musical keyboard for developers.” The keyboard itself, after all, is just a standard, basic MIDI keyboard. There’s no intelligence in it. All of the AI work is happening in the cloud.

“The goal here is to teach generative AI as one of the most interesting trends in machine learning in the last 10 years,” Miller told us. “We specifically chose GANs, generative adversarial networks, where there are two networks that are trained together. The reason that’s interesting from our perspective for developers is that it’s very complicated and a lot of the things that developers learn about training machine learning models get jumbled up when you’re training two together.”
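The "two networks trained together" idea can be made concrete with a toy example. The sketch below (plain NumPy, not the SageMaker models DeepComposer actually uses) pits a one-dimensional generator against a logistic-regression discriminator:

```python
import numpy as np

# A minimal GAN sketch: the generator tries to match data drawn from
# N(4, 0.5), while the discriminator tries to tell real samples from
# generated ones. Each step updates both networks, which is exactly the
# interleaved training Miller describes as tricky to reason about.

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

mu, log_sigma = 0.0, 0.0   # generator params: x = mu + exp(log_sigma) * z
w, b = 0.1, 0.0            # discriminator params: D(x) = sigmoid(w*x + b)
lr, batch = 0.05, 64

for _ in range(3000):
    real = rng.normal(4.0, 0.5, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = mu + np.exp(log_sigma) * z

    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    s_r, s_f = sigmoid(w * real + b), sigmoid(w * fake + b)
    grad_w = np.mean((1 - s_r) * real) - np.mean(s_f * fake)
    grad_b = np.mean(1 - s_r) - np.mean(s_f)
    w, b = w + lr * grad_w, b + lr * grad_b

    # Generator step: ascend log D(fake) (the "non-saturating" loss).
    s_f = sigmoid(w * fake + b)
    dx = (1 - s_f) * w                 # d log D(x) / dx
    mu += lr * np.mean(dx)
    log_sigma += lr * np.mean(dx * np.exp(log_sigma) * z)

print(f"generator mean after training: {mu:.2f}")  # drifts toward the data mean of 4
```

Even in this toy, the two updates tug against each other, which is the dynamic DeepComposer is built to let developers inspect.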

With DeepComposer, the developer steps through a process of learning the basics. With the keyboard, you can input a basic melody — but if you don’t have it, you also can use an on-screen keyboard to get started or use a few default melodies (think Ode to Joy). From a practical perspective, the system then goes out and generates a background track for that melody based on a musical style you choose. To keep things simple, the system ignores some values from the keyboard, though, including velocity (just in case you needed more evidence that this is not a keyboard for musicians). But more importantly, developers can then also dig into the actual models the system generated — and even export them to a Jupyter notebook.

For the purpose of DeepComposer, the MIDI data is just another data source to teach developers about GANs and SageMaker, AWS’s machine learning platform that powers DeepComposer behind the scenes.

“The advantage of using MIDI files and basing our training on MIDI is that the representation of the data that goes into the training is in a format that is actually the same representation of data in an image, for example,” explained Miller. “And so it’s actually very applicable and analogous, so as a developer looks at that SageMaker notebook and understands the data formatting and how we pass the data in, that’s applicable to other domains as well.”
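Miller's point about MIDI sharing an image-like representation is easy to see: a melody can be rendered as a "piano roll", a 2D grid of pitches by time steps, which has the same shape as a single-channel image. A small sketch (the pitch range and step count here are arbitrary choices for illustration):

```python
# Render a list of notes as a piano roll: a pitches-by-time grid of 0/1
# values, structurally identical to a grayscale image.

def piano_roll(notes, n_pitches=128, n_steps=16):
    """notes: list of (pitch, start_step, duration) tuples -> 2D grid."""
    grid = [[0] * n_steps for _ in range(n_pitches)]
    for pitch, start, dur in notes:
        for t in range(start, min(start + dur, n_steps)):
            grid[pitch][t] = 1
    return grid

# First four notes of "Ode to Joy" (E E F G), one step each.
roll = piano_roll([(64, 0, 1), (64, 1, 1), (65, 2, 1), (67, 3, 1)])
print(len(roll), len(roll[0]))  # 128 16 -- i.e. a 128x16 "image"
```

Any model architecture that consumes image-shaped tensors can consume this grid unchanged, which is why the SageMaker notebooks transfer so directly to other domains.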

That’s why the tools expose all of the raw data, too, including loss functions, analytics and the results of the various models as they try to get to an acceptable result, etc. Because this is obviously a tool for generating music, it’ll also expose some of the data about the music, like pitch and empty bars.

“We believe that as developers get into the SageMaker models, they’ll see that, hey, I can apply this to other domains and I can take this and make it my own and see what I can generate,” said Miller.

Having heard the results so far, I think it’s safe to say that DeepComposer won’t produce any hits soon. It seems pretty good at creating a drum track, but bass lines seem a bit erratic. Still, it’s a cool demo of this machine learning technique, though my guess is that its success will be a bit more limited than DeepRacer, which is a concept that’s easier for most people to understand. The majority of developers will look at DeepComposer, think they need to be able to play an instrument to use it, and move on.

Additional reporting by Ron Miller.

05 Dec 10:28

Coca-Cola: Star Wars bottles that light up when touched

by Antonin Gratien

To mark the imminent release of the ninth installment of the Star Wars saga, The Rise of Skywalker, Coca-Cola has gone big. Or rather, gone bright. The American soda brand has just unveiled a limited edition of its bottles. Their special feature? The part of the label featuring a lightsaber can be lit up at will.


Just technology

No, it isn’t the Force that makes this feat possible. The band on the bottles is fitted with an organic light-emitting diode, better known by its English acronym: OLED. This light-producing electronic component is notably used in the iPhone X and the Samsung Galaxy S9. This is the first time anywhere in the world that the technology has been used on plastic bottles.

To activate it, simply press a finger on a specific spot on the label. Be aware that the mechanism’s lifespan is limited to roughly 500 uses. The bottle comes in two versions: one featuring Rey and her blue saber, the other Kylo Ren with his red blade.


A (very) limited edition

Only 8,000 of these special-edition bottles exist, and they will, alas, only be available in Singapore. You can’t buy one in ordinary shops: you’ll have to take part in a “Galactic Hunt” by signing up from December 6 on the official Coca-Cola Singapore website, then solving a series of riddles. Each correct answer yields a clue to a secret location where one of the precious bottles can be collected.

The Rise of Skywalker opens in French theaters on December 18. It will be the final chapter of the Rey/Kylo Ren saga, but it is certainly not a farewell to George Lucas’s universe on the big screen: Rian Johnson, director of Star Wars Episode VIII: The Last Jedi (2017), is currently in talks with Disney about a new trilogy.

The post Coca-Cola: Star Wars bottles that light up when touched appeared first on GOLEM13.FR.

01 Dec 20:11

Creating Easy Glass Circuit Boards At Home

by Tom Nardi

This tip for creating glass substrate circuit boards at home might hew a bit closer to arts and crafts than the traditional Hackaday post, but the final results of the method demonstrated by [Heliox] in her recent video are simply too gorgeous to ignore. The video is in French, but between YouTube’s attempted automatic translation and the formidable mental powers of our beloved readers, we don’t think it will be too hard for you to follow along after the break.

The short version is that [Heliox] loads her Silhouette Cameo, a computer-controlled cutting machine generally used for paper and vinyl, with a thin sheet of copper adhered to a backing sheet to give it some mechanical strength. With the cutting pressure of the Cameo dialed back, the circuit is cut out of the copper but not the sheet underneath, and the excess can be carefully peeled away.

Using transfer paper, [Heliox] then lifts the copper traces off the sheet and sticks them down to a cut piece of glass. Once it’s been smoothed out and pushed down, she pulls the transfer paper off and the copper is left behind.

From there, it’s just a matter of soldering on the SMD components. To make it a little safer to handle she wet sands the edges of the glass to round them off, but it’s still glass, so we wouldn’t recommend this construction for anything heavy duty. While it might not be the ideal choice for your next build, it certainly does look fantastic when mounted in a stand and blinking away like [Heliox] shows off at the end.

Ironically, when compared to some of the other methods of making professional looking PCBs at home that we’ve seen over the years, this one might actually be one of the easiest. Who knew?

[Thanks to James for the tip.]

29 Nov 15:31

He encapsulates video game scenes in cubes as a tribute to retro consoles

by Mélanie D.

Let’s say it plainly: the Game Boy is an iconic symbol of ’90s pop culture! Above all, in our nostalgic little hearts, it remains one of Nintendo’s most emblematic consoles. It is so deeply rooted in our memories that it still brings together retro-gaming fans from all over the world.

And when a devotee of the famous console decides to pay tribute to it, he doesn’t do it halfway. Today we introduce ArtsMD, an artist who recently appeared on Instagram. Since September, he has been posting creations of all kinds on the theme of video games playable on the Game Boy, Mega Drive, Nintendo 64... in short, all the now-iconic consoles you may have worn out during your childhood and teenage years.

At first he composed dioramas in the form of wall pieces with several layers of depth. Then came creative bookmarks and finally, our main subject here, his diorama cubes depicting cult scenes from our favorite Game Boy games. Not especially large works, admittedly, but ones that demand a great deal of rigor and precision.

ArtsMD sells his lovely cubes in his online shop, except that his entire stock recently sold out in no time. No need to panic: if you want to get hold of one in time for the holidays, just wait for a “new batch” of these little masterpieces around December 1. But stay alert, they go fast!

To follow his work, head to his Instagram account. And for Game Boy fans who would rather show their passion on their feet, check out the sneaker fan who created an Air Jordan edition paying tribute to the Nintendo Game Boy.

Credits: ArtsMD

The post He encapsulates video game scenes in cubes as a tribute to retro consoles appeared first on Creapills.

24 Nov 21:43

This room-sized LED egg captures amazing 3D models of the people inside it

by Devin Coldewey

Capturing human performances in high-definition 3D is a complicated proposition, and among the many challenges is getting the lighting right. This impressive new project from Google researchers puts the subject in the center of what can only be described as a prismatic LED egg, but the resulting 3D models are remarkable — and more importantly, relightable.

What’s called volumetric capture uses multiple cameras in a 360-degree setup to capture what can look like a photorealistic representation of a subject, including all the little details like clothing deformation, hair movement, and so on. It has two serious weaknesses. First, it’s more like a 3D movie than a model, since you can’t pose the person or change their attributes or clothing. The second is an extension of the first: you can’t change the way the person is lit — whatever lighting they had when you captured them, that’s what you get.

“The Relightables” is an attempt by a team at Google AI to address this second issue, since the first is pretty much baked in. Their system not only produces a highly detailed 3D model of a person in motion, but allows that model to be lit realistically by virtual light sources, making it possible to place it in games, movies, and other situations where lighting can change.

Images from the Google AI paper that show the capture process and resulting 3D model alone and in a lighted virtual environment.

It’s all thanks to the aforementioned prismatic egg (and a couple lines of code, of course). The egg is lined with 331 LED lights that can produce any color, and as the person is being captured, those LEDs shift in a special structured pattern that produces a lighting-agnostic model.

The resulting models can be placed in any virtual environment and will reflect not the lighting they were captured in but the lighting of that little world. The examples in the video below aren’t exactly Hollywood-level quality, but you can see the general idea of what they’re going for.
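To see why a lighting-agnostic capture matters, consider the simplest possible relighting model: once a point's base color (albedo) and surface normal are known, rather than colors baked in with the capture lighting, it can be shaded under any new light. A toy Lambertian sketch (the actual Relightables pipeline is far more sophisticated):

```python
# Diffuse (Lambertian) relighting of a single surface point:
# color = albedo * light_color * max(0, normal . light_direction)

def relight(albedo, normal, light_dir, light_color):
    """Shade one point under a directional light."""
    n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return tuple(a * c * n_dot_l for a, c in zip(albedo, light_color))

albedo = (0.8, 0.6, 0.5)   # base color with the capture lighting removed
normal = (0.0, 0.0, 1.0)   # surface facing the camera

# The same point, lit from the front vs. from the side:
print(relight(albedo, normal, (0.0, 0.0, 1.0), (1.0, 1.0, 1.0)))  # full brightness
print(relight(albedo, normal, (1.0, 0.0, 0.0), (1.0, 1.0, 1.0)))  # falls to black
```

The structured LED patterns in the egg are what let the researchers separate albedo and geometry from the lighting in the first place; after that, shading like the above can be evaluated under any virtual light.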

The limitations of volumetric capture make it unsuitable for many uses in film, but being relightable brings these performances a lot closer to ordinary 3D models than they were before. Of course, you still have to do all your acting inside a giant egg.

“The Relightables” will be presented by the team at SIGGRAPH Asia.

24 Nov 21:12

Updates from Coral: Mendel Linux 4.0 and much more!

by Google Developers
Posted by Carlos Mendonça (Product Manager), Coral Team

Last month, we announced that Coral graduated out of beta, into a wider, global release. Today, we're announcing the next version of Mendel Linux (4.0 release Day) for the Coral Dev Board and SoM, as well as a number of other exciting updates.

We have made significant updates to improve performance and stability. Mendel Linux 4.0 release Day is based on Debian 10 Buster and includes upgraded GStreamer pipelines and support for Python 3.7, OpenCV, and OpenCL. The Linux kernel has also been updated to version 4.14 and U-Boot to version 2017.03.3.

We’ve also made it possible to use the Dev Board's GPU to convert YUV to RGB pixel data at up to 130 frames per second at 1080p resolution, which is one to two orders of magnitude faster than on Mendel Linux 3.0 release Chef. These changes make it possible to run inferences with YUV-producing sources such as cameras and hardware video decoders.
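The YUV-to-RGB conversion being accelerated here is, per pixel, a small linear transform. A scalar sketch using the common BT.601 full-range coefficients (the exact matrix used by the Dev Board's GPU pipeline may differ):

```python
# Convert one full-range YUV pixel (0-255, with U/V centered at 128) to RGB
# using BT.601-style coefficients. A GPU does this for every pixel in parallel.

def yuv_to_rgb(y, u, v):
    u, v = u - 128, v - 128
    r = y + 1.402 * v
    g = y - 0.344136 * u - 0.714136 * v
    b = y + 1.772 * u
    clamp = lambda x: max(0, min(255, round(x)))
    return clamp(r), clamp(g), clamp(b)

print(yuv_to_rgb(255, 128, 128))  # (255, 255, 255): white
print(yuv_to_rgb(0, 128, 128))    # (0, 0, 0): black
```

Because it is the same arithmetic at every pixel, it maps naturally onto the GPU, which is what makes the order-of-magnitude speedup over the CPU path possible.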

To upgrade your Dev Board or SoM, follow our guide to flash a new system image.

MediaPipe on Coral

MediaPipe is an open-source, cross-platform framework for building multi-modal machine learning perception pipelines that can process streaming data like video and audio. For example, you can use MediaPipe to run on-device machine learning models and process video from a camera to detect, track and visualize hand landmarks in real-time.

Developers and researchers can prototype their real-time perception use cases starting with the creation of the MediaPipe graph on desktop. Then they can quickly convert and deploy that same graph to the Coral Dev Board, where the quantized TensorFlow Lite model will be accelerated by the Edge TPU.

As part of this first release, MediaPipe is making available new experimental samples for both object and face detection, with support for the Coral Dev Board and SoM. The source code and instructions for compiling and running each sample are available on GitHub and on the MediaPipe documentation site.

New Teachable Sorter project tutorial


A new Teachable Sorter tutorial is now available. The Teachable Sorter is a physical sorting machine that combines the Coral USB Accelerator's ability to perform very low latency inference with an ML model that can be trained to rapidly recognize and sort different objects as they fall through the air. It leverages Google’s new Teachable Machine 2.0, a web application that makes it easy for anyone to quickly train a model in a fun, hands-on way.

The tutorial walks through how to build the free-fall sorter, which separates marshmallows from cereal and can be trained using Teachable Machine.

Coral is now on TensorFlow Hub

Earlier this month, the TensorFlow team announced a new version of TensorFlow Hub, a central repository of pre-trained models. With this update, the interface has been improved with a fresh landing page and search experience. Pre-trained Coral models compiled for the Edge TPU continue to be available on our Coral site, but a select few are also now available from the TensorFlow Hub. On the site, you can find models featuring an Overlay interface, allowing you to test the model's performance against a custom set of images right from the browser. Check out the experience for MobileNet v1 and MobileNet v2.

We are excited to share all that Coral has to offer as we continue to evolve our platform. For a list of worldwide distributors, system integrators and partners, visit the new Coral partnerships page. We hope you’ll use the new features as a resource, and we encourage you to keep sending us feedback at

24 Nov 20:58

Mimic Artfully Employs LEDs In Fashion

by Lewin Day

Any science fiction piece set in the near-future involves clothes that light up or otherwise have some form of electronics inside. This hasn’t happened in mainstream fashion just yet, but [Amped Atelier] are doing serious work in the field. Mimic was their entry for the 2016 MakeFashion Gala, serving as a great example of LEDs in fashion done right.

Mimic consists of two pieces, designed as cocktail dresses that mimic their surroundings in much the same way as a chameleon. The LEDs are controlled by an Arduino fitted with a color sensor. When activated, the Arduino changes the color of the LEDs to match whatever is presented to the sensor. This technology could serve as a great way to avoid clashing with a friend’s outfit, or to send a surreptitious signal to your ride that you’re ready to leave.
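The sense-and-match loop can be sketched in a few lines. This is a plain-Python simulation, not the garment's actual Arduino firmware, and every name in it is illustrative:

```python
# Simulated version of Mimic's match-your-surroundings logic: read a
# color from a (fake) sensor, then paint every (fake) LED that color.

def read_color_sensor(sample):
    """Simulated sensor read: returns an (r, g, b) tuple."""
    return sample

def set_leds(strip, color):
    """Set every simulated LED in the strip to one color."""
    for i in range(len(strip)):
        strip[i] = color
    return strip

leds = [(0, 0, 0)] * 8                    # eight dark pixels
seen = read_color_sensor((20, 180, 90))   # fabric held up to the sensor
leds = set_leds(leds, seen)               # the dress "mimics" that green
print(leds[0])  # (20, 180, 90)
```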

The LEDs are hidden beneath attractive geometric diffusers, which are 3D printed directly on to the fabric of the outfit. This gives an attractive, finished look to the garment, and allows the diffusers to naturally flow with the lines of the piece.

These pieces show that it’s possible to create glowing nightwear that is as stylish as it is high tech. If you’re looking for something a little edgier, however, we’ve got that too. Video after the break.

24 Nov 20:56

Medieval Armor Set

by staff

Bring chivalry back to the twenty-first century by protecting your family’s honor while dressed in the medieval armor set.  This wearable costume covers your entire head and torso in distinguished stainless steel plates to protect you in the heat of battle.

Check it out


23 Nov 20:18

Slush 2019: lots of mobility and optimism, a little diversity and ethics, and very few surprises

by Méta Media

By Kati Bremme, Innovation and Foresight Department

Several pieces of good news for the European tech ecosystem: investment in start-ups reached $35 billion this year, Europe now counts 99 start-ups valued at more than $1 billion, and European pension funds (trillions of dollars under management) are starting to take an interest in venture capital, tripling their investments in 2019, according to the “State of European Tech” report presented at Slush Helsinki. On top of that, the number of developers in Europe keeps growing (6 million today), slowly closing the gap with Chinese and American competition.

But Tom Wehmeier of Atomico (the report's author) does not bring only good news: 92% of funding still goes to all-male teams, and the diversity so prominently showcased at the previous edition of Slush has not really been implemented “In Real Life”. Nearly half of female founders report having experienced discrimination in the past 12 months.

Clearly, there is still plenty of room to push European tech and innovation toward a representative balance of society.

The twenties of the 21st century may be the decade that creates the optimal conditions for start-up collaboration at the European level (imagined by, among others, Lydia Jett of the SoftBank Vision Fund) and for a better understanding from regulators, so Europe can stand out from its competitors and stem the exodus of tech gems to Silicon Valley or Shanghai.

In any case, that is what the guild of geeks gathered in Helsinki wants to believe, attending Europe's largest start-up event for investors. The atmosphere is as dark as ever, perhaps even more so, as it is less illuminated by smartphones, which are now all in dark mode, eco-responsibility oblige. 25,000 visitors, 4,000 start-ups, 2,000 investors and 350 researchers came to “spend the most useful 48 hours of their lives”. Here are a few impressions, far from exhaustive, from the Far North.

China has the data, the United States has the money, and Europe has “meaning”.

What Slush 2019 puts front and center is societal impact, and usefulness. Even among investors, 80% say they are interested in ideas with a “long-term societal/environmental impact”. One in five European founders says their company already measures its societal and/or environmental impact. Only 14% of founders do not consider this relevant to their business. Investors poured more than $4.4 billion into “purpose-driven” companies this year, five times more than five years ago.

Female founders are far more attentive to measuring the impact of their business.

“I see us heading toward a potentially unhealthy dual Internet: a ‘poor web for everyone’ versus a premium web, and I think companies need to do more to create products for broader socio-economic segments of society,” warns Jessica Butcher, co-founder of Tick.Done.

Even if the average revenue of seed-stage start-ups remains below that of the United States ($61K versus $107K), several competitive advantages of Europe over the US are highlighted: lower costs of living and lower salaries.

Estonia in particular (Skype's country of origin) is definitely fertile ground for scalable start-ups, as shown this year by Bolt, the Uber competitor, which has become the best tech employer in Eastern Europe. It, too, is eco-responsible, with $10 million invested in going “zero carbon”. Markus Villig, founder and CEO of Bolt, in conversation with John Collison, co-founder and President of Stripe, points out another European advantage: talent stays with companies longer here, up to 5 years, which makes investments in training, for example, worthwhile.

Sid Kouider, founder and CEO of NextMind

The two days were marked by reunions across the European tech world, without any major revolutionary announcement. A few exclusives, such as the world-premiere launch of a brain-controlled “wearable”, did not really leave a mark on the slush. The launch is planned for early 2020, but for now the demo by Sid Kouider, founder and CEO of NextMind, did not seem to win over the audience, judging by the thin applause and the half-empty stands.

The day before, Spotify's Chief R&D Officer had reminded the audience not to overuse technology when it is not needed (or not yet mature). In his view, you need “a pretty big problem” to justify using AI (or any other new technology).

Investments in the future clearly lie with Health and Mobility, preferably “sustainable”. The life-science showcases of Nightingale Health (which analyzes more than 200 biomarkers from a simple blood sample), Solar Foods (food made out of thin air), Lumebot (a snow-clearing robot), eChargie (shared electricity) and XShore (a 100% electric boat) draw bigger crowds than the talks and sketch out our future lives. Sustainability remains a key topic: this year even Atomico's famous report is not printed on paper. At the same time, Boom Supersonic presents the fastest aircraft in the world, perhaps not exactly “zero carbon”. Staying with space, Barbara Belvisi, founder of Interstellar Lab, explains why the technology used in space stations could also prove useful on Earth (or on any other planet).

More “established” companies, such as Wolt (which has grown by 500 people in one year) and Smartly, share their best practices for moving from start-up to scale-up, from the first idea to the IPO. Cameron Adams, co-founder of Canva (valued at more than $3 billion), recalls the importance of ambassadors, and how his company grew organically thanks to social networks.

Thomas Plantenga, founder of Vinted (who prefers Vilnius to New York), and Ines Ures, CMO of Deliveroo, explain what to pack when planning an international expansion. Michael Moritz, Partner at Sequoia Capital and one of the world's most famous venture capitalists (his investments include Google, Yahoo!, PayPal, LinkedIn and Stripe), reveals how he spots promising gems.

The GAFAs are on site, but anything but ostentatious. Here they act more like tools: you can sign up for Google and Amazon Cloud courses.

Derek Haoyang Li, founder of Squirrel AI, and his two children

The Chinese are present, with Derek Haoyang Li demonstrating Squirrel AI, the online learning platform that wants to revolutionize school, a combination of Da Vinci, Einstein and Confucius, online and increasingly offline, much like Alibaba and Tencent, which keep expanding offline by investing $20 billion in physical retail space. Meanwhile, Kevin Lin, co-founder of Twitch, is eagerly awaiting 5G to launch live streaming on smartphones. Mikko Hyppönen, a star in Finland and author of the eponymous law stating that “the smarter a device is, the more vulnerable it is”, announces that “soon any idiot will be able to do machine learning”. Sebastian Siemiatkowski of Klarna (another Nordic success) underlines the importance of user-centered concepts, including in fintech: “It's not purely about payments. It's about using the data and create a REAL value and benefit for the customers and combine this with a great UX.”

Mikko Hyppönen, photo Tanu Kallio

Aalto University was founded in 2010. The Financial Times called it a “national shake-up of higher education”, and indeed it is there to boost innovation and company creation at the intersection of science, business, tech and the arts. AaltoES is, among other things, behind the creation of Slush. A new contribution to “Open Education” is presented this year with the online course “Starting Up”, co-created by students with the European start-up community to teach the fundamentals of entrepreneurship to everyone.


And a revolutionary idea for the start-up world: “No VC is the new VC”, put forward by Michele Romanow, co-founder and President of Clearbanc, who is looking for alternatives to the only business model currently available to tech gems. With 60% of tech start-ups failing within their first 3 years, continuous innovation is indeed the only possible choice. If possible, with a little more room for women…

Photo: Julius Konttinen

22 Nov 18:25

Facebook switches to Visual Studio Code internally and will contribute to its development

In a post published overnight, the social network briefly recalls its habits. Developers there can use whatever editor they want, but many relied on Nuclide, an internal tool based on Ato...
22 Nov 12:52

Google Assistant will now help you read to your kids from across the world

by Jay Peters
Image: Google

Google today launched a nice new Assistant feature called My Storytime that lets parents simulate reading to their kids when one parent is away from home. A parent will be able to record themselves reading chapters of stories, and the other parent (or babysitter) can ask Google Nest to read those recordings to the kids.

Google says that, once the feature is set up, the person at home just has to say “Hey Google, talk to My Storytime” to their Google Nest, and they will be able to pick the recording of the chapter they want to listen to with their children.

Just say “Hey Google, talk to My Storytime”

Recording a story takes a bit of initial setup, but it’s pretty easy to do. Visit the My Storytime website, log into your Google account...

Continue reading…

22 Nov 12:51

Correct Call-to-Action Recall by Users is Twice as High for Human Voices as Synthetic for Voice Apps

by Bret Kinsella

You may think that consumer preference for human over synthetic voices in voice apps is simply an aesthetic concern. Data from the new report What Consumers Want in Voice App Design suggests it may impact much more than style. Pulse Labs and collaborators designed a study intended to dispel some of the mystery behind voice user experience (VUX) design by putting data behind design choices.

In one test, we played dialogue to a voice app user from two different synthetic voices using text-to-speech and one human voice. Each user only heard one of the examples. For the synthetic voices, we varied the length with a shorter clip measuring 25 seconds and the longer clip 49 seconds. The human voice actor used the script for the longer content and also recorded for 49 seconds.

We found that for a panel of 240 consumers, correct call-to-action recall by voice app users hearing a human voice was more than double that of either of the synthetic voices. The longer duration synthetic voice dialogue actually performed a little better than the shorter dialogue variant.

You can download the full report with 17 charts in 19 pages of analysis through the button below.


Brands Should Take Notice

Dylan Zwick, co-founder and chief product officer for Pulse Labs, immediately pointed out the importance of these findings for brands. “We didn’t know what to expect for this experiment and were surprised that using a human voice had such an impact on recall. That’s an important lesson for brands who want to make sure their voice products are remembered.”

As you dig deeper into the data you can see that only 26.8% of users that heard the long dialogue synthetic voice even noticed there was a call-to-action (CTA). That means nearly 3-out-of-4 users didn’t notice the CTA at all. It compares to 40.1% and 42.5% for the short dialogue synthetic voice and human voice respectively.
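The “nearly 3-out-of-4” figure is simply the complement of the noticed-CTA rates quoted above; a quick sanity check of the arithmetic (the percentages are from the report, the code is only illustrative):

```python
# Share of listeners who did NOT notice the call-to-action, computed
# as the complement of the noticed-CTA rates reported in the study.
noticed = {
    "synthetic, long": 26.8,
    "synthetic, short": 40.1,
    "human": 42.5,
}
missed = {voice: round(100.0 - pct, 1) for voice, pct in noticed.items()}
print(missed["synthetic, long"])  # 73.2 -- i.e. nearly 3 out of 4
```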

There may be an equivalent of banner blindness in linear audio; call it CTA deafness. It is interesting that the longer synthetic voice seemed to suffer this phenomenon more acutely. It is also intriguing that, although nearly as many users recognized that a CTA had been communicated in the short synthetic-voice dialogue as in the human-voice one, far more of them recalled it incorrectly.

“As synthetic voice technology improves, we’re hopeful about its capacity to make voice production scale more cost-effectively, but in the short term, you can’t beat good old-fashioned voice-over talent. Humans win again… for now,” said Brandon Kaplan, CEO of Skilled Creative, who also supported the study.




22 Nov 12:48

RealityTech explores the uses of spatial augmented reality

by Grégory Maubon

Since the company's acclaimed appearance at the 2018 edition of Laval Virtual (Revolution Award), RealityTech has substantially developed its spatial augmented reality products. To learn more before meeting them at their booth at Virtuality, I invite you…

Read more →

The post RealityTech explores the uses of spatial augmented reality first appeared on Réalité Augmentée - Augmented Reality.
22 Nov 09:10

Amazon reportedly plans bigger cashierless supermarkets for 2020

by James Vincent
Amazon Go supermarket Photo by Andrej Sokolow/picture alliance via Getty Images

Amazon is reportedly planning a big expansion for its cashierless store format in 2020.

According to Bloomberg, the retail giant wants to open both larger supermarkets and smaller pop-up stores as early as the first quarter of 2020, both using the same Amazon Go technology that creates a shopping experience without any checkout lines. Bloomberg notes that this expansion could include Amazon licensing its technology to rival retailers, some of which have been investing in cashierless tech of their own.

When Amazon first began developing the Go format, it planned to create large-scale cashierless supermarkets that stocked a wide range of goods. It eventually downsized these plans, and Go stores today tend to be smaller spaces (around...

Continue reading…

21 Nov 10:47

New Report Says Consumers Have 71% Preference for Human Over Robot Voices for Voice Assistant User Experience

by Bret Kinsella

Voicebot Research has found that not only do consumers prefer human voices over synthetic ones in their voice experiences, but that they do so by a 71.6% margin. That is one of the findings in the newly published report What Consumers Want in Voice App Design. As far as we can tell, no one had previously published results from a study measuring the preference between the two approaches, and this was just one of many questions that had remained unexplored in voice user experience.

However, the existence of actual data measuring preferences enables discussions about voice app design to suddenly become informed by more than mere opinion. “Smart marketers know that people prefer listening to people that sound like them. With this new research, it’s further evidence that the voice—a human quality full of emotion—is not easily replicated,” says David Ciccarelli, co-founder and CEO. “While there’s a time and place for synthetic voices to provide navigational prompts or brief instructions, communicating important messages with the intent to inform, educate, and inspire audiences should be left strictly to professional voice actors.”

Adding Data to the Voice UX Conversation

The rapid proliferation of voice assistants and voice user interfaces has led to debates about how to define a good voice user experience. Until today, those debates have been devoid of data and founded solely on individual opinions.

Voicebot would frequently receive questions such as “how long is optimal for a voice assistant response?” and “will using voice actors increase the user ratings for my Alexa skill?” Again, there were plenty of opinions such as “keep it short except when your experience needs a longer response” and “use a voice actor if you have time and budget.” Most voice app developers didn’t find these answers very helpful. We wondered why there was no research attempting to answer these basic questions, but could not locate any.

So, Voicebot collaborated with Pulse Labs and others to see if we could design an empirical study that would put some data behind a series of questions. Along the way, we were also supported by Meredith Corporation and Skilled Creative, which also expressed interest in whether there was empirical guidance for voice experience optimization. The full report includes 17 charts across 19 pages of analysis and addresses questions such as:

  • Do voice assistant users prefer human voices over synthetic voices and if so, by how much?
  • Do users prefer male or female voices? Is that preference the same when considering human or synthetic voices?
  • How does the age of the user impact these preferences?
  • How long is too long when delivering content through a voice assistant?
  • Is there a difference in tolerance for the length of voice assistant content delivered by a human compared to a synthetic voice?

Applying the Results

Consumer preference for human voices also showed up clearly in voice favorability ratings by our panel of 249 testers. While 70.8% had a favorable view of human voices and only 12.5% unfavorable, ratings for synthetic voices were reversed with 12.3% favorable and 60.1% unfavorable.

“When designing for voice, there’s a great opportunity for brands and serious developers to determine and guide what their voice will be. What this work exemplifies is how important it is to take that voice seriously and to get it right, as it can make the difference between whether a consumer remembers what you’ve built or not,” said Dylan Zwick, Pulse Labs, co-founder and chief product officer.

Brandon Kaplan, CEO of Skilled Creative added, “While we weren’t surprised by the overall result, we were a bit surprised at the size of the discrepancy. It’s clear that if you’re building an application that speaks for longer than a few words, you should really look into using recorded voices. We would, however, recommend using synthetic voices during development and testing, and only adding recorded voices when you know you’ve got the interaction right, because once recordings are made it’s hard to make modifications.”

To learn more about consumer preferences by age and gender, as well as recall based on content length and provenance, download the full report below. Also, let us know what you think on Twitter.




02 Nov 09:15

Watch these robot blocks cluster together with a hive mind

by Jay Peters

MIT researchers first showed off their self-assembling “M-Block” robot cubes in 2013 — and this week, they shared a video of what they’re calling M-Blocks 2.0 (via TechCrunch). Like the first version of the blocks, M-Blocks 2.0 move by generating momentum with an internal flywheel, and can climb on and around each other using magnets:

But these new blocks also have a “barcode-like” system on each block face that they can “read” to do things like follow a specific path:

And they can also act with a hive mind to find each other and cluster together:

Though this might seem terrifying, there are more benevolent expectations for the blocks right now. MIT News reports that the researchers envision...

Continue reading…

28 Oct 09:06

News: the Mona Lisa goes on display in virtual reality at the Louvre

by Florian Agez
© HTC Vive Arts / Emissive

It took nothing less to celebrate the 500th anniversary of the death of the most famous figure of the Renaissance, perhaps even one of the most famous figures who ever lived. On Thursday, October 24, the Louvre opened the doors of a major exhibition devoted to Leonardo da Vinci. It...
28 Oct 06:52

This photographer captured the most creative cosplays of New York Comic-Con 2019

by Mélanie D.

In New York, at the very beginning of October, the 2019 edition of the famous Comic-Con convention took place: a multi-day festival that brings together fans of pop culture and comics. For the occasion, those fans gave their very best to create unusual costumes inspired by their favorite characters.

It is the perfect moment for fans to show off their passion by embodying whatever they want. And photographer Ali Reza Malik decided to immortalize them, capturing them in their finest costumes. On the program: cosplays each more creative than the last!

Bored Panda, in a more detailed article, quotes the photographer: “Cosplayers are the best example of what fans have to offer. They are totally devoted to their costume; in terms of inventiveness and creativity, they show a lot of ingenuity.” Ali continues: “I also had the pleasure of photographing cosplayers in hijab, who adjust their costumes accordingly: it's all in the details. It's impressive.”

Right now, comics and science-fiction fans are flocking to Paris for the French version of the American festival, which hosts creators, artists, actors and other events for fans over 3 days. This year, Comic-Con takes place at Porte de la Villette and has already begun, running from Friday, October 25 to Sunday, October 27. So if you're motivated, we hope these photos will inspire a homemade cosplay!

Mario, Borderlands style (cel-shaded version)

Credits: Ali Reza Malik

Aang and Avatar Korra (Avatar: The Last Airbender)

Credits: Ali Reza Malik

Yondu Udonta (Guardians of the Galaxy)

Credits: Ali Reza Malik

Captain Jack Sparrow

Credits: Ali Reza Malik

Lara Croft (Tomb Raider)

Credits: Ali Reza Malik

Cruella De Vil (101 Dalmatians)

Credits: Ali Reza Malik


Credits: Ali Reza Malik

Princess Zelda

Credits: Ali Reza Malik

Scarlet Witch

Credits: Ali Reza Malik

Luigi, with a Ouija board (Luiga)

Credits: Ali Reza Malik


Credits: Ali Reza Malik

Captain America

Credits: Ali Reza Malik

Pennywise the Clown (It)

Credits: Ali Reza Malik


Credits: Ali Reza Malik

Jane Foster and Loki

Credits: Ali Reza Malik

The Shy Guys

Credits: Ali Reza Malik

Venom Bowsette

Credits: Ali Reza Malik


Credits: Ali Reza Malik

Storm (X-Men)

Credits: Ali Reza Malik

Mario and Luigi, samurai version

Credits: Ali Reza Malik

Piotr Rasputin (Colossus)

Credits: Ali Reza Malik

Groot, Gamora and Yondu (Guardians of the Galaxy)

Credits: Ali Reza Malik

Aladdin, Jasmine and Jafar

Credits: Ali Reza Malik

Rey (Star Wars)

Credits: Ali Reza Malik

Him (The Powerpuff Girls)

Credits: Ali Reza Malik

Katana and the Joker

Credits: Ali Reza Malik

Leonardo (Teenage Mutant Ninja Turtles)

Credits: Ali Reza Malik

Newt Scamander (Fantastic Beasts)

Credits: Ali Reza Malik

Samurai Genji

Credits: Ali Reza Malik

The Landlady (Kung Fu Hustle)

Credits: Ali Reza Malik

The article This photographer captured the most creative cosplays of New York Comic-Con 2019 appeared first on Creapills.

23 Oct 21:18

Jim Meskimen: an impressive deepfake impersonating 20 celebrities

by GOLEM13

Actor and impressionist Jim Meskimen recites the poem “Pity the Poor Impressionist” with the voice and face of 20 celebrities. Opening the video with an impression of John Malkovich and closing with Robin Williams, Jim, helped by SHAM00K, used the currently controversial deepfake technology. The technique made headlines last June with a doctored video of a Mark Zuckerberg speech on Instagram. The hyperrealistic video, in which he made disturbing statements about our personal data, went around the world.

More recently, it was Donald Trump who announced the end of AIDS in a deepfake video conceived by Solidarité Sida. In this new, impressive example, Jim Meskimen morphs into some twenty actors with a realism that is quite unsettling.

Schwarzenegger, De Niro, Robin Williams…

The article Jim Meskimen: an impressive deepfake impersonating 20 celebrities appeared first on GOLEM13.FR.

22 Oct 22:47

After 40 Years, This Classic Out-of-Print Dune Game Is Being Re-Released

by Futurism Creative
Dune Game

Dune is one of the best-selling science fiction novels of all time. It’s spawned an underrated 80s movie starring Kyle MacLachlan, a popular TV miniseries, and an upcoming feature film remake starring Timothée Chalamet and a galaxy of other well-known stars. And if you’re a fan of strategy board games, you’re probably already aware that there is a killer Dune game that, unfortunately, has been out of print for years. However, Dune: A Game of Conquest, Diplomacy & Betrayal is now being re-released courtesy of the folks at Gale Force Nine.

First released in 1979, Dune the board game is set in the universe created by the late Frank Herbert and lets players shape the destiny of their own “noble family, guild, or religious order on a barren planet which is the only source for the most valuable substance in the known universe.”

The Dune game, which much like the 1984 feature film has been a cultishly adored item for decades, gives players the opportunity to do no less than replicate the unique world created by Herbert for his epic saga.

Dune: A Game of Conquest, Diplomacy & Betrayal

The classic Dune game is back.
Gale Force Nine

The Dune board game has an intricate and involved gameplay process that’s just as detailed and rich as the novels it’s based on. It allows each player to represent the leader of their choice from six factions, all of which are jockeying to control melange, the mysterious spice that serves as a kind of universal MacGuffin within the Dune world. Duke Leto Atreides describes melange like so:

“All fades before melange. A handful of spice will buy a home on Tupile. It cannot be manufactured, it must be mined on Arrakis. It is unique and it has true geriatric properties.”

The spice is also crucial to the all-important space travel in Dune’s reality, as its psychoactive properties give humans the mental endurance to survive hyperspace voyages without cracking up. The game lets you pick a character from the book, and wage war and politics to control as much of this precious spice as possible!

The Dune board game is set to come out on October 25th, a little over a year ahead of the upcoming feature film remake scheduled for a 2020 release. But it’s in high demand, and over on Amazon, newly ordered copies of the game aren’t scheduled to ship until November 14th at the earliest. So if there’s a spice monger in your life you want to take care of this holiday season, you’d better order now.

Futurism fans: To create this content, a non-editorial team worked with an affiliate partner. We may collect a small commission on items purchased through this page. This post does not necessarily reflect the views or the endorsement of the editorial staff.

The post After 40 Years, This Classic Out-of-Print Dune Game Is Being Re-Released appeared first on Futurism.

22 Oct 05:50

Alibaba officially launches 2019 11.11 Global Shopping Festival

by Scott Thompson

Alibaba Group has kicked off the 2019 11.11 Global Shopping Festival and promised that the event will be “bigger than ever in both scale and reach”.

More than 200,000 brands will participate in what is the 11th 11.11, 22,000 of which will be from 78 overseas markets. The number of new products available stands at one million, and more than 500 million consumers are expected to shop during the 24-hour period – about 100 million more than last year.

Alibaba plans to distribute over two billion red packets, or digital cash vouchers, while hosting 2,000+ key opinion leaders on its platforms via livestreaming to help brands showcase products and boost engagement with consumers.

“Our goal is to stimulate consumption demand and support lifestyle upgrade in China through new brands and products,” says Tmall and Taobao President Jiang Fan.

“We will enable merchants in China and around the world to grow their businesses through data-driven product innovation and consumer insights, as well as leverage our recommendation technology and content-driven user engagement to delight consumers in urban coastal cities and less developed areas of China.”

Sign up for our free retail technology newsletter here.