Shared posts

28 Oct 15:00

MIT researchers used Wi-Fi to recognize people through walls

by Lizzie Plaugic

Researchers at MIT's Computer Science and Artificial Intelligence Lab have developed software that uses variations in Wi-Fi signals to recognize human silhouettes through walls. The researchers built a device, called RF-Capture, that transmits wireless signals and then analyzes the reflections of those signals to piece together a human form, according to a study published this morning.

Continue reading…

28 Oct 13:38

Your iPhone 6s can now replace your kitchen scale

by Yohann Poiron

The 3D Touch technology introduced on the iPhone 6s and iPhone 6s Plus opens up a number of clever use cases, such as peeking at emails without opening them, drawing dynamic images with your finger, and weighing plums. Yes, you read that right. This last use case comes courtesy of a developer living in Paris.

In a post on his blog, Simon Gladman presents his new app, named Plum-O-Meter. As its name suggests, the app takes advantage of the 3D Touch technology in the iPhone 6s to act as a scale, telling the user the relative weight of objects placed on the smartphone's screen.

The example in Gladman's video shows three plums lined up on the screen of his iPhone 6s. The Plum-O-Meter app displays each fruit's force on the screen as a percentage, and uses a yellowish highlight to indicate which object is the heaviest. The app is open source and can be sideloaded onto an iPhone 6s or iPhone 6s Plus without having to jailbreak your iDevice.


Technically, the iPhone's multitouch screen can detect up to five objects at a time, iDownloadBlog points out.

“I originally designed this app for grapes, but they're too light to trigger 3D Touch,” Gladman wrote on his blog. Whatever you decide to weigh on an iPhone 6s or 6s Plus screen, it may be a good idea to clean the screen thoroughly beforehand, or to wash the objects right afterwards.

A real connected scale!

For your culinary needs, Stoo Sepp builds on FlexMonkey's code to take the scale idea further, as you can see in the video below.

In both cases, the iPhone shows slightly different numbers depending on where the fruit is placed, so don't expect to replace your kitchen scale with an iPhone 6s just yet – but someone may well refine the code to account for the discrepancies.

So, ready to replace your scale?


28 Oct 12:51

Now You Can Design Your Own Raspberry Pi

by Natasha Lomas
Hardware startups building atop the Raspberry Pi microcomputer — of which there are plenty already — can now order custom tweaks to the hardware to better tailor the Pi to fit the needs of their business. Read More
28 Oct 09:08

Security Camera Shaped Like a Bird

by Donnia
Jean-Philippe Encausse

Nice, but a year away... LOL, the project is nowhere near ready

Ulo is a surveillance camera in the shape of a little owl, designed by the young French designer Vivien Muller and launched through a Kickstarter campaign. The camera works over Wi-Fi, and its gaze responds to users and to touch, creating a new mode of object-to-human communication, much like a pet.

28 Oct 08:53

IKEA turns children's drawings into real plush toys

by François Castro Lara

In the United States, IKEA is turning children's drawings into real plush toys.

Read more on Creapills.com

26 Oct 14:19

What a Deep Neural Network thinks about your #selfie

Convolutional Neural Networks are great: they recognize things, places and people in your personal photos, signs, people and lights in self-driving cars, crops, forests and traffic in aerial imagery, various anomalies in medical images and all kinds of other useful things. But once in a while these powerful visual recognition models can also be warped for distraction, fun and amusement. In this fun experiment we’re going to do just that: We’ll take a powerful, 140-million-parameter state-of-the-art Convolutional Neural Network, feed it 2 million selfies from the internet, and train it to classify good selfies from bad ones. Just because it’s easy and because we can. And in the process we might learn how to take better selfies :)

Yeah, I’ll do real work. But first, let me tag a #selfie.

Convolutional Neural Networks

Before we dive in I thought I should briefly describe what Convolutional Neural Networks (or ConvNets for short) are in case a reader from a slightly more general audience stumbles by. Basically, ConvNets are a very powerful hammer, and Computer Vision problems are very nails. If you’re seeing or reading anything about a computer recognizing things in images or videos, in 2015 it almost certainly involves a ConvNet. Some examples:

Few of many examples of ConvNets being useful. From top left and clockwise: Classifying house numbers in Street View images, recognizing bad things in medical images, recognizing Chinese characters, traffic signs, and faces.

A bit of history. ConvNets happen to have an interesting background story. They were first developed by Yann LeCun et al. in the 1980s (building on some earlier work, e.g. from Fukushima). As a fun early example see this demonstration of LeNet 1 (that was the ConvNet’s name) recognizing digits back in 1993. However, these models remained mostly ignored by the Computer Vision community because it was thought that they would not scale to “real-world” images. That remained true only until about 2012, when we finally had enough compute (specifically in the form of GPUs, thanks NVIDIA) and enough data (thanks ImageNet) to actually scale these models, as was first demonstrated when Alex Krizhevsky, Ilya Sutskever and Geoff Hinton won the 2012 ImageNet challenge (think: the World Cup of Computer Vision), crushing their competition (16.4% error vs. 26.2% for the second-best entry).

I happened to witness this critical juncture first hand because the ImageNet challenge has over the last few years been organized by Fei-Fei Li’s lab (my lab), so I remember when my labmate gasped in disbelief as she noticed the (very strong) ConvNet submission come up in the submission logs. And I remember us pacing around the room trying to digest what had just happened. In the next few months ConvNets went from obscure models shrouded in skepticism to the rockstars of Computer Vision, present as a core building block in almost every new Computer Vision paper. The ImageNet challenge reflects this trend: in the 2012 ImageNet challenge there was only one ConvNet entry, while in 2013 and 2014 almost all entries used ConvNets. Also, fun fact, the winning team each year promptly incorporated as a company.

Over the next few years we perfected, simplified, and scaled up the original 2012 “AlexNet” architecture (yes, we give them names). In 2013 there was the “ZFNet”, and then in 2014 the “GoogLeNet” (get it? Because it’s like LeNet but from Google? hah) and the “VGGNet”. Anyway, what we know now is that ConvNets are:

  • simple: one operation is repeated over and over, a few tens of times, starting with the raw image.
  • fast: an image is processed in a few tens of milliseconds.
  • effective: they work very well (e.g. see this post where I struggle to classify images better than the GoogLeNet).
  • and, by the way, in some ways they seem to work similarly to our own visual cortex (see e.g. this paper).

Under the hood

So how do they work? When you peek under the hood you’ll find a very simple computational motif repeated over and over. The gif below illustrates the full computational process of a small ConvNet:

Illustration of the inference process.

On the left we feed in the raw image pixels, which we represent as a 3-dimensional grid of numbers. For example, a 256x256 image would be represented as a 256x256x3 array (last 3 for red, green, blue). We then perform convolutions, which is a fancy way of saying that we take small filters and slide them over the image spatially. Different filters get excited over different features in the image: some might respond strongly when they see a small horizontal edge, some might respond around regions of red color, etc. If we suppose that we had 10 filters, in this way we would transform the original (256,256,3) image to a (256,256,10) “image”, where we’ve thrown away the original image information and only keep the 10 responses of our filters at every position in the image. It’s as if the three color channels (red, green, blue) were now replaced with 10 filter response channels (I’m showing these along the first column immediately on the right of the image in the gif above).
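To make that concrete, here is a minimal NumPy sketch of a single convolution layer turning a (256, 256, 3) image into a (256, 256, 10) stack of filter responses. It is deliberately naive (real libraries use far faster implementations), and the image and filters here are random placeholders:

```python
import numpy as np

def conv_layer(image, filters):
    """Slide each filter over the image and record its response at every position.

    image:   (H, W, C) array, e.g. (256, 256, 3) for an RGB image
    filters: (K, F, F, C) array of K small filters, e.g. (10, 3, 3, 3)
    returns: (H, W, K) array of filter responses ("10 response channels")
    """
    H, W, C = image.shape
    K, F, _, _ = filters.shape
    pad = F // 2
    padded = np.pad(image, ((pad, pad), (pad, pad), (0, 0)), mode="constant")
    out = np.zeros((H, W, K))
    for k in range(K):                       # one response map per filter
        for y in range(H):
            for x in range(W):
                patch = padded[y:y + F, x:x + F, :]
                out[y, x, k] = np.sum(patch * filters[k])
    return out

# Example: 10 random 3x3 filters turn a (256, 256, 3) image into (256, 256, 10).
image = np.random.rand(256, 256, 3)
filters = np.random.randn(10, 3, 3, 3)
responses = conv_layer(image, filters)
print(responses.shape)  # (256, 256, 10)
```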

Now, I explained the first column of activations right after the image, so what’s with all the other columns that appear over time? They are the exact same operation repeated over and over, once to get each new column. The next columns will correspond to yet another set of filters being applied to the previous column’s responses, gradually detecting more and more complex visual patterns until the last set of filters is computing the probability of entire visual classes (e.g. dog/toad) in the image. Clearly, I’m skimming over some parts but that’s the basic gist: it’s just convolutions from start to end.

Training. We’ve seen that a ConvNet is a large collection of filters that are applied on top of each other. But how do we know what the filters should be looking for? We don’t - we initialize them all randomly and then train them over time. For example, we feed an image to a ConvNet with random filters and it might say that it’s 54% sure that’s a dog. Then we can tell it that it’s in fact a toad, and there is a mathematical process for changing all filters in the ConvNet a tiny amount so as to make it slightly more likely to say toad the next time it sees that same image. Then we just repeat this process tens/hundreds of millions of times, for millions of images. Automagically, different filters along the computational pathway in the ConvNet will gradually tune themselves to respond to important things in the images, such as eyes, then heads, then entire bodies etc.
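Here is a toy sketch of that training loop. A single linear “filter” with a sigmoid stands in for the full ConvNet and the data is made up; the point is only the start-random-then-nudge update, which in the real network is computed by backpropagation through all the layers:

```python
import numpy as np

rng = np.random.RandomState(0)

# Toy stand-in for a ConvNet: one linear "filter" over flattened pixels,
# followed by a sigmoid that outputs P(class = "toad").
def predict(w, x):
    return 1.0 / (1.0 + np.exp(-np.dot(w, x)))

# Made-up data: 200 tiny 8x8 "images"; label 1 = toad, 0 = dog.
X = rng.randn(200, 64)
y = (X[:, :32].sum(axis=1) > 0).astype(float)   # an arbitrary rule for the model to discover

w = rng.randn(64) * 0.01        # the "filters" start out random...
lr = 0.1
for epoch in range(50):         # ...and get nudged a tiny amount, many many times
    for x, label in zip(X, y):
        p = predict(w, x)              # e.g. "54% sure it's a toad"
        w -= lr * (p - label) * x      # gradient of the cross-entropy loss: a small
                                       # step that makes the correct answer more likely

preds = np.array([predict(w, x) > 0.5 for x in X])
print("training accuracy:", np.mean(preds == y))
```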

Examples of what 12 randomly chosen filters in a trained ConvNet get excited about, borrowed from Matthew Zeiler's Visualizing and Understanding Convolutional Networks. The filters shown here are in the 3rd stage of processing and seem to look for honeycomb-like patterns, or wheels/torsos/text, etc. Again, we don't specify this; it emerges by itself, and we can inspect it.

Another nice set of visualizations for a fully trained ConvNet can be found in Jason Yosinski et al.'s project deepvis. It includes a fun live demo of a ConvNet running in real time on your computer’s camera, as explained nicely by Jason in this video:

In summary, the whole training process resembles showing a child many images of things, and him/her having to gradually figure out what to look for in the images to tell those things apart. Or if you prefer your explanations technical, then a ConvNet is just expressing a function from image pixels to class probabilities with the filters as parameters, and we run stochastic gradient descent to optimize a classification loss function. Or if you’re into AI/brain/singularity hype then the function is a “deep neural network”, the filters are neurons, and the full ConvNet is a piece of adaptive, simulated visual cortical tissue.

Training a ConvNet

The nice thing about ConvNets is that you can feed them images of whatever you like (along with some labels) and they will learn to recognize those labels. In our case we will feed a ConvNet some good and bad selfies, and it will automagically find the best things to look for in the images to tell those two classes apart. So let's grab some selfies:

  1. I wrote a quick script to gather images tagged with #selfie. I ended up getting about 5 million images (with ConvNets, the more the better, always).
  2. I narrowed that down with another ConvNet to about 2 million images that contain at least one face.
  3. Now it is time to decide which of those selfies are good or bad. Intuitively, we want to calculate a proxy for how many people have seen the selfie, and then look at the number of likes as a function of the audience size. I took all the users and sorted them by their number of followers. I gave a small bonus for each additional tag on the image, assuming that extra tags bring more eyes. Then I marched down this sorted list in groups of 100, and sorted those 100 selfies based on their number of likes. I only used selfies that were online for more than a month to ensure a near-stable like count. I took the top 50 selfies and assigned them as positive selfies, and I took the bottom 50 and assigned those to negatives. We therefore end up with a binary split of the data into two halves, where we tried to normalize by the number of people who have probably seen each selfie. In this process I also filtered people with too few followers or too many followers, and also people who used too many tags on the image. (A code sketch of this heuristic follows right after this list.)
  4. Take the resulting dataset of 1 million good and 1 million bad selfies and train a ConvNet.
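A rough Python sketch of the labeling heuristic from step 3. The field names, thresholds and tag bonus are my own illustrative assumptions, not the script actually used:

```python
def label_selfies(selfies):
    """Sketch of the good/bad labeling heuristic from step 3 above.

    Each selfie is assumed to be a dict with keys 'user_followers', 'num_tags',
    'num_likes' and 'days_online'; the thresholds and tag bonus are guesses.
    """
    TAG_BONUS = 0.05  # assumed: each extra tag buys a slightly larger audience

    # Keep selfies with a near-stable like count and users in a "reasonable" range.
    eligible = [s for s in selfies
                if s['days_online'] > 30
                and 50 < s['user_followers'] < 500000
                and s['num_tags'] < 20]

    # Proxy for how many people probably saw the selfie.
    def audience(s):
        return s['user_followers'] * (1 + TAG_BONUS * s['num_tags'])

    eligible.sort(key=audience)

    positives, negatives = [], []
    for i in range(0, len(eligible) - 99, 100):       # march down in groups of 100
        group = sorted(eligible[i:i + 100], key=lambda s: s['num_likes'])
        negatives.extend(group[:50])                   # bottom 50 by likes -> "bad"
        positives.extend(group[-50:])                  # top 50 by likes    -> "good"
    return positives, negatives
```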

At this point you may object that the way I’m deciding if a selfie is good or bad is wrong - e.g. what if someone posted a very good selfie but it was late at night, so perhaps not as many people saw it and it got fewer likes? You’re right - it almost definitely is wrong, but it only has to be right more often than not and the ConvNet will manage. It does not get confused or discouraged, it just does its best with what it’s been given. To get an idea about how difficult it is to distinguish the two classes in our data, have a look at some example training images below. If I gave you any one of these images could you tell which category it belongs to?

Example images showing good and bad selfies in our training data. These will be given to the ConvNet as teaching material.

Training details. Just to throw out some technical details, I used Caffe to train the ConvNet. I used a VGGNet pretrained on ImageNet, and finetuned it on the selfie dataset. The model trained overnight on an NVIDIA K40 GPU. I disabled dropout because I had better results without it. I also tried a VGGNet pretrained on a dataset with faces but did not obtain better results than starting from an ImageNet checkpoint. The final model had 60% accuracy on my validation data split (50% is guessing randomly).
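For readers who haven't used Caffe, fine-tuning along these lines is typically driven from its Python interface, roughly as in the sketch below. The prototxt and weight file names are placeholders, and the actual solver settings used for this experiment aren't published:

```python
import caffe

caffe.set_device(0)
caffe.set_mode_gpu()

# The solver prototxt points at a VGGNet definition whose final classifier has
# been replaced by a fresh 2-way good/bad output layer (file names are placeholders).
solver = caffe.SGDSolver('selfie_solver.prototxt')

# Initialize the shared layers from ImageNet-pretrained VGGNet weights; layers
# whose names don't match (the new 2-way classifier) keep their random init.
solver.net.copy_from('VGG_ILSVRC_16_layers.caffemodel')

solver.solve()  # train (overnight on a K40 in the author's case), snapshotting per the solver config
```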

What makes a good #selfie ?

Okay, so we collected 2 million selfies, decided which ones are probably good or bad based on the number of likes they received (controlling for the number of followers), fed all of it to Caffe and trained a ConvNet. The ConvNet “looked” at every one of the 2 million selfies several tens of times, and tuned its filters in a way that best allows it to separate good selfies from bad ones. We can’t very easily inspect exactly what it found (it’s all jumbled up in 140 million numbers that together define the filters). However, we can set it loose on selfies that it has never seen before and try to understand what it’s doing by looking at which images it likes and which ones it does not.

I took 50,000 selfies from my test data (i.e. the ConvNet hasn’t seen these before). As a first visualization, in the image below I am showing a continuum visualization, with the best selfies on the top row, the worst selfies on the bottom row, and every row in between is a continuum:

A continuum from best (top) to worst (bottom) selfies, as judged by the ConvNet.

That was interesting. Let's now pull up the top 100 selfies (out of 50,000), according to the ConvNet:

Best 100 out of 50,000 selfies, as judged by the Convolutional Neural Network.

If you’d like to see more, here is a link to the top 1000 selfies (3.5MB). Are you noticing a pattern in what the ConvNet has likely learned to look for? A few patterns stand out for me, and if you notice anything else I’d be happy to hear about it in the comments. To take a good selfie, do:

  • Be female. Women are consistently ranked higher than men. In particular, notice that there is not a single guy in the top 100.
  • Face should occupy about 1/3 of the image. Notice that the position and pose of the face is quite consistent among the top images. The face always occupies about 1/3 of the image, is slightly tilted, and is positioned in the center and at the top. Which also brings me to:
  • Cut off your forehead. What’s up with that? It looks like a popular strategy, at least for women.
  • Show your long hair. Notice the frequent prominence of long strands of hair running down the shoulders.
  • Oversaturate the face. Notice the frequent occurrence of over-saturated lighting, which often makes the face look much more uniform and faded out. Related to that,
  • Put a filter on it. Black and White photos seem to do quite well, and most of the top images seem to contain some kind of a filter that fades out the image and decreases the contrast.
  • Add a border. You will notice a frequent appearance of horizontal/vertical white borders.

Interestingly, not all of these rules apply to males. I manually went through the top 2000 selfies and picked out the top males, here’s what we get:

Best few male selfies taken from the top 2,000 selfies.

In this case we don't see any cut-off foreheads. Instead, most selfies seem to be a slightly broader shot with the head fully in the picture and the shoulders visible. It also looks like many of them have a fancy hairstyle, with slightly longer hair combed upwards. However, we still see the prominence of faded facial features.

Let's also look at some of the worst selfies, which the ConvNet is quite certain would not receive a lot of likes. I am showing the images in a much smaller and less identifiable format because my intention is for us to learn about the broad patterns that decrease a selfie's quality, not to shine a light on people who happened to take a bad selfie. Here they are:

Worst 300 out of 50,000 selfies, as judged by the Convolutional Neural Network.

Even at this small resolution some patterns clearly emerge. Don’t:

  • Take selfies in low lighting. Very consistently, darker photos (which usually include much more noise as well) are ranked very low by the ConvNet.
  • Frame your head too large. Presumably no one wants to see such an up-close view.
  • Take group shots. It’s fun to take selfies with your friends but this seems to not work very well. Keep it simple and take up all the space yourself. But not too much space.

As a last point, note that a good portion of the variability between good and bad selfies can be explained by the style of the image, as opposed to the raw attractiveness of the person. Also, with some relief, it seems that the best selfies do not seem to be the ones that show the most skin. I was quite concerned for a moment there that my fancy 140-million-parameter ConvNet would turn out to be a simple amount-of-skin-texture counter.

Celebrities. As a last fun experiment, I tried to run the ConvNet on a few famous celebrity selfies, and sorted the results with the continuum visualization, where the best selfies are on the top and the ConvNet score decreases to the right and then towards the bottom:

Celebrity selfies as judged by a Convolutional Neural Network. Most attractive selfies: top left, then decreasing in quality first to the right and then towards the bottom. Right click > Open Image in new tab on this image to see it in higher resolution.

Amusingly, note that the general rule of thumb we observed before (no group photos) is broken with the famous group selfie of Ellen DeGeneres and others from the Oscars, yet the ConvNet thinks this is actually a very good selfie, placing it on the 2nd row! Nice! :)

Another one of our rules of thumb (no males) is confidently defied by Chris Pratt’s body (also 2nd row), and honorable mentions go to Justin Bieber’s raised eyebrows and the Stephen Colbert / Jimmy Fallon duo (3rd row). James Franco’s selfie shows quite a lot more skin than Chris’s, but the ConvNet is not very impressed (4th row). Neither was I.

Lastly, notice again the importance of style. There are several uncontroversially-good-looking people who still appear on the bottom of the list, due to bad framing (e.g. head too large possibly for J Lo), bad lighting, etc.

Exploring the #selfie space

Another fun visualization we can try is to lay out the selfies with t-SNE. t-SNE is a wonderful algorithm that I like to run on nearly anything I can because it’s both very general and very effective - it takes some number of things (e.g. images in our case) and lays them out in such a way that nearby things are similar. You can in fact lay out many things with t-SNE, such as Netflix movies, words, Twitter profiles, ImageNet images, or really anything where you have some number of things and a way of comparing how similar two things are. In our case we will lay out selfies based on how similar the ConvNet perceives them. In technical terms, we are doing this based on L2 norms of the fc7 activations in the last fully-connected layer. Here is the visualization:

Selfie t-SNE visualization. Here is a link to a higher-resolution version. (9MB)

You can see that selfies cluster in some fun ways: we have group selfies on top left, a cluster of selfies with sunglasses/glasses in middle left, closeups bottom left, a lot of mirror full-body shots top right, etc. Well, I guess that was kind of fun.
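If you want to reproduce this kind of layout, a sketch of the pipeline with scikit-learn's t-SNE looks roughly like this (the feature file name is a placeholder, and the original post's own t-SNE setup may differ):

```python
import numpy as np
from sklearn.manifold import TSNE

# Assumed: an (N, 4096) array of fc7 activations, one row per selfie,
# dumped from the trained network beforehand (file name is a placeholder).
fc7 = np.load('selfie_fc7_features.npy')

# L2-normalize each feature vector so that distances compare directions,
# not raw activation magnitudes.
fc7 = fc7 / np.linalg.norm(fc7, axis=1, keepdims=True)

# Embed the 4096-D features in 2-D so that selfies the ConvNet "sees" as
# similar end up near each other; coords[i] is where to paste selfie i.
coords = TSNE(n_components=2, perplexity=30).fit_transform(fc7)
```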

Finding the Optimal Crop for a selfie

Another fun experiment we can run is to use the ConvNet to automatically find the best selfie crops. That is, we will take an image, randomly try out many different possible crops and then select the one that the ConvNet thinks looks best. Below are four examples of the process, where I show the original selfies on the left, and the ConvNet-cropped selfies on the right:

Each of the four pairs shows the original image (left) and the crop that was selected by the ConvNet as looking best (right).

Notice that the ConvNet likes to make the head take up about 1/3 of the image, and chops off the forehead. Amusingly, in the image on the bottom right the ConvNet decided to get rid of the “self” part of selfie, entirely missing the point :) You can find many more fun examples of these “rude” crops:

Same visualization as above, with originals on the left and best crops on the right. The one on the right is my favorite.

Before any of the more advanced users ask: yes, I did try to insert a Spatial Transformer layer right after the image and before the ConvNet. Then I backpropped into the 6 parameters that define an arbitrary affine crop. Unfortunately I could not get this to work well - the optimization would sometimes get stuck, or drift around somewhat randomly. I also tried constraining the transform to scale/translation, but this did not help. Luckily, when the transform has only 3 bounded parameters, we can afford to perform a global search (as seen above).
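For reference, that global search over the three crop parameters can be sketched in a few lines; score_fn stands in for a forward pass through the trained ConvNet, and the candidate count and minimum crop size are arbitrary choices:

```python
import numpy as np

def best_crop(image, score_fn, n_candidates=500, min_frac=0.5, rng=np.random):
    """Try many random square crops, keep the one the ConvNet likes best.

    image:    (H, W, 3) pixel array
    score_fn: maps a cropped image to the network's "good selfie" score
              (a stand-in for a forward pass through the trained model)
    """
    H, W, _ = image.shape
    best_score, best = -np.inf, None
    for _ in range(n_candidates):
        size = int(rng.uniform(min_frac, 1.0) * min(H, W))  # side length of the square crop
        y = rng.randint(0, H - size + 1)                    # top-left corner
        x = rng.randint(0, W - size + 1)
        crop = image[y:y + size, x:x + size]
        score = score_fn(crop)
        if score > best_score:
            best_score, best = score, crop
    return best
```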

How good is yours?

Curious about what the network thinks of your selfies? I’ve packaged the network into a Twitter bot so that you can easily find out. (The bot turns out to be only ~150 lines of Python, including all the Caffe/Tweepy code.) Attach your image to a tweet (or include a link) and mention the bot @deepselfie anywhere in the tweet. The bot will take a look at your selfie and then pitch in with its opinion! For best results link to a square image; otherwise the bot will have to squish it to a square, which deteriorates the results. The bot should reply within a minute; if it doesn't, something went wrong (try again later).

Example interaction with the Selfie Bot (@deepselfie).
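For the curious, the skeleton of such a bot is indeed short. This is not the actual @deepselfie code, just a sketch of a Tweepy polling loop; the credentials are placeholders and score_selfie is a hypothetical helper that would download the image and run the Caffe forward pass:

```python
import time
import tweepy

# Placeholders: real credentials come from the Twitter developer console.
auth = tweepy.OAuthHandler('CONSUMER_KEY', 'CONSUMER_SECRET')
auth.set_access_token('ACCESS_TOKEN', 'ACCESS_SECRET')
api = tweepy.API(auth)

def score_selfie(image_url):
    """Hypothetical helper: download the image, squish it to a square,
    run the Caffe forward pass, and return a score in [0, 1]."""
    raise NotImplementedError

last_seen_id = None
while True:
    kwargs = {'since_id': last_seen_id} if last_seen_id else {}
    for tweet in reversed(api.mentions_timeline(**kwargs)):
        last_seen_id = max(last_seen_id or 0, tweet.id)
        media = tweet.entities.get('media', [])
        if not media:
            continue                       # no image attached, nothing to judge
        score = score_selfie(media[0]['media_url'])
        api.update_status(
            status='@{} your selfie scores {:.0f}/100'.format(
                tweet.user.screen_name, 100 * score),
            in_reply_to_status_id=tweet.id)
    time.sleep(60)                         # poll for new mentions once a minute
```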

Before anyone asks, I also tried to port a smaller version of this ConvNet to run on iOS so you could enjoy real-time feedback while taking your selfies, but this turned out to be quite involved for a quick side project - e.g. I first tried to write my own fragment shaders since there is no CUDA-like support, then looked at some threaded CPU-only versions, but I couldn’t get it to work nicely and in real time. And I do have real work to do.

Conclusion

I hope I’ve given you a taste of how powerful Convolutional Neural Networks are. You give them example images with some labels, they learn to recognize those things automatically, and it all works very well and is very fast (at least at test time, once it’s trained). Of course, we’ve only barely scratched the surface - ConvNets are used as a basic building block in many Neural Networks, not just to classify images/videos but also to segment, detect, and describe, both in the cloud or in robots.

If you’d like to learn more, the best place to start for a beginner right now is probably Michael Nielsen’s tutorials. From there I would encourage you to first look at Andrew Ng’s Coursera class, and then to go through the course notes/assignments for CS231n. This is a class specifically on ConvNets that I taught together with Fei-Fei at Stanford last Winter quarter. We will also be offering the class again starting January 2016 and you’re free to follow along. For more advanced material I would look into Hugo Larochelle’s Neural Networks class or the Deep Learning book currently being written by Yoshua Bengio, Ian Goodfellow and Aaron Courville.

Of course you’ll learn much more by doing than by reading, so I’d recommend that you play with 101 Kaggle Challenges, or that you develop your own side projects, in which case I warmly recommend that you not only do but also write about it, and post it places for all of us to read, for example on /r/machinelearning which has accumulated a nice community. As for recommended tools, the three common options right now are:

  • Caffe (C++, Python/Matlab wrappers), which I used in this post. If you’re looking to do basic Image Classification then Caffe is the easiest way to go, in many cases requiring you to write no code, just invoking included scripts.
  • Theano-based Deep Learning libraries (Python) such as Keras or Lasagne, which allow more flexibility.
  • Torch (C++, Lua), which is what I currently use in my research. I’d recommend Torch for the most advanced users, as it offers a lot of freedom, flexibility, speed, all with quite simple abstractions.

Some other slightly newer/less proven but promising libraries include Nervana’s Neon, CGT, or Mocha in Julia.

Lastly, there are a few companies out there who aspire to bring Deep Learning to the masses. One example is MetaMind, who offer a web interface that allows you to drag and drop images and train a ConvNet (they handle all of the details in the cloud). MetaMind and Clarifai also offer ConvNet REST APIs.

That’s it, see you next time!

26 Oct 14:14

Niptech Explore – Predictive analytics

by ben

Human beings are often afraid of the unknown and of what tomorrow may bring, and that is even more true in business, where people like to optimize and reduce uncertainty. From inventory management in large-scale retail to fraud prevention for credit-card companies, science and mathematics help anticipate what is about to happen. Often invisible to the consumer, this field is booming.

According to Karim Bensaci of the company Calyps, "Predictive analytics is about anticipating something that has a strong likelihood of happening, early enough that we have time to prepare for it." For that, data – from the past as much as from the present – is critical, and the digitalization of our societies keeps supplying it, notably through sensors, smartphones, connected objects and social networks. The quantity of data is exploding: an estimated 90% of the data available today has been collected in the last two years alone. But beyond the numbers, it is human interpretation that comes into play in the final phase of predictive analytics.


Podcast: Download

Applications now extend to every domain, from the environment to medicine and from mobility to politics. Weather forecasts are an example of predictive analytics familiar to everyone. In several US cities, the PredPol project has reduced the number of offenses by nearly 30% by effectively predicting where to send police patrols. Phone carriers anticipate the moment a customer is likely to leave for a competitor and get in touch to offer them a commercial incentive. In the area of mobility, cities use predictive analytics to draw up scenarios for developing their public transport and urban planning. Walmart stocks one of its stores based on the preferences expressed on social networks in the region.

Anticipation, optimization… predictive analytics and big data also raise obvious political and ethical questions. "Our civilization is living through a revolution it may not even be aware of," concludes Karim Bensaci.

 

26 Oct 13:55

Portable Pizza Pouch

by Staff
Jean-Philippe Encausse

For Meetups!

Quell your pizza craving by keeping a delicious slice handy with this portable pizza pouch. The triangular plastic pouch features a convenient strap so you can wear it like a necklace, and a sturdy zip-lock seal ensures the grease stays in and the slice stays fresh.

Check it out

$8.00

22 Oct 13:51

Kinetic Battery Startup Ampy Raises Seed To Shrink To Fit Wearables

by Natasha Lomas
Kinetic charging battery startup Ampy — which makes a wearable spare battery pack charged by human movement — has closed an $875,000 seed round led by Clean Energy Trust and NewGen Ventures. The Chicago-based startup says it will be using the new funding to work on shrinking its tech to fit wearable devices such as smartwatches and fitness trackers, with the aim of expanding beyond… Read More
22 Oct 07:03

Tesla owners are ignoring autopilot safety advice and putting the results on YouTube

by Rich McCormick
Jean-Philippe Encausse

The driver is trying to get himself killed... once again, it's the human that's buggy

Tesla's latest update to its electric car software allowed its Model S sedans access to self-driving options for the first time, unlocking Autosteer, Auto Lane Change, and Autopark features for use on US roads. Tesla CEO Elon Musk was careful to specify that these features did not turn Tesla's cars into fully autonomous vehicles, but that hasn't stopped some Tesla drivers from getting into some dangerous situations, treating their updated Model S as a proper self-driving car — and filming the results.

Two videos, uploaded to YouTube the day after the update rolled out, already show drivers' Model S cars reacting unpredictably with Autosteer engaged. In one, the vehicle appears to jerk to the right as the driver turns off a highway,...

Continue reading…

20 Oct 07:37

The Wove Band, the world's first flexible display wearable

The Wove Band by Polyera is the first wearable with a flexible touchscreen display. The devices won't be publicly available until 2016, but Polyera CEO Phil Inagaki gave us a behind the scenes look at some prototypes, and talked with us about his design philosophy...(Read...)

20 Oct 07:33

Axgio mini PC: Windows 10 update and running screenless with SARAH

by Cédric Locqueneux
A few months ago I showed you a mini PC in the form of an HDMI stick, with an Intel processor and Windows 8, capable of running the S.A.R.A.H. project. A very appealing mini PC given its compactness and sub-€100 price. Since then, Microsoft's new operating system has been released,
19 Oct 19:02

Lenovo hasn't given up on its giant table-top PC

by Lauren Goode

It's been a few years since Lenovo first introduced its giant table-top PC as a kind of hybrid solution for both family fun and productivity, and safe to say, we haven't seen many (or any) of these in the wild since then. But that doesn't mean Lenovo, still the world's largest PC maker, has given up on the idea. Today the Chinese PC-maker introduced the Yoga Home 900 Portable All-in-One Desktop.

"Portable" may seem a bit tongue-in-cheek, because the 27-inch, 16-plus-pound machine isn't necessarily something you're going to take with you on your next business trip (and if you do, please send us the video). It's actually positioned as a home computer, meant to offer desktop-grade performance but with some ability to move it from location...

Continue reading…

19 Oct 16:24

Shuoying single-lens 360-degree 1080P Video Camera

by Claire

Shuoying shows its single-lens 360-degree video camera, powered by a Sunplus SPCA6350M with a Sony CMOS sensor. It records to Micro-SD cards, and its 1000mAh battery lasts for up to 2 hours of video recording, or 1 hour when using WiFi for real-time 360 video streaming. Mass production starts in November. Shuoying plans to have a working sample of its next-generation 4K 360-degree video camera ready in January.

Please contact Shuoying for more questions:
Thomas Liu, Vice General Manager
Email:thomas@shuoying.com.cn
Mobile:+86 138 2656 8459
Http://www.shuoying.com

19 Oct 14:25

For Halloween, Target lets you tour a 360° haunted house on YouTube

by François Castro Lara

In the United States, the retail chain Target has set up a 360° haunted-house tour on YouTube for Halloween.

Read more on Creapills.com

18 Oct 12:26

Creepy Screaming Face Pillow

by Staff

Create a convincing haunted atmosphere in your home by accenting the room with one of these creepy screaming face pillows. This cotton-polyester pillow is soft to the touch and creates the illusion that some poor soul is trapped inside its fluffy innards.

Check it out

$22.67

18 Oct 12:26

Chameleon Color Changing Pen

by Staff

Create incredible works of art even if you aren't a great artist by coloring in your masterpieces with these Chameleon color changing pens. The revolutionary design of these refillable pens makes it possible to seamlessly blend colors and shade like a professional.

Check it out

$5.00

17 Oct 08:20

Focused Image Cropping with smartcrop.js

by David Walsh

Images tend to make any page more engaging, especially when done right.  The problem is that automating image creation and sizing can be a very difficult task, especially when the image is uploaded by a user — who knows what format, size, and resolution the image will be.  Hell, who knows if they’re actually sending you an image for that matter (though validating that they’ve uploaded an image isn’t too difficult).

I recently found out about smartcrop.js, a brilliant JavaScript utility which analyzes the contents of an image and finds the focal point (a face, for example) of any image.  It’s easy to use and does an outstanding job picking up on the important part of an image.

Check out a few images I put through the smartcrop.js testbed:

I won't bother showing the super simple code sample — you can view that on the smartcrop.js repo.  And be sure to play around on the testbed.  I love recognizing developers for their feats and this is some incredible work by Jonas Wagner!

The post Focused Image Cropping with smartcrop.js appeared first on David Walsh Blog.

16 Oct 20:06

$230 Google Glass by Shenzhen Topsky, Android 4.3, Kopin 640×400, Ingenic

by Charbax
Jean-Philippe Encausse

Google Glass

Topsky is the only supplier of head-mounted micro-display devices that I have seen at Hong Kong trade shows that may be able to provide a Google Glass-like experience, starting at $230 per unit for a 100-unit minimum order quantity. The only negatives are the Ingenic MIPS dual-core processor (instead of a Rockchip or Allwinner ARM solution) and the fact that the software is "not yet" provided by Google: it doesn't run the Google Glass UI or the Android Wear UI, but Topsky's custom UI on top of Android, which looks good but is not quite the same as having Google's support. What could happen, though, is that Google people watch this video and contact Topsky below, or maybe some hackers get it and improve it somehow. I look forward to trying it out some more to check the voice-control capabilities, whether it hooks into Google Now yet or some other voice-command Android app, and how in general it may or may not manage to use other Android apps that could work well with this user interface. My dream is still (since CeBIT 2005) to live-stream from my face and see a live chat from anyone watching about which questions to ask the people that I interview in 4K. Crucial for this "vision" is for affordable Google Glass-type devices to become available. They need to be mass produced, the price can be lowered further, and then the software platform should be open and fully supported by Google and by everyone! Let's make head-mounted computing happen!

You can contact Topsky here (thanks for telling them you watched my video!)
Sofia Huang, Sales Manager
sofia@hktopsky.com
Mobile: +86 15815527996
Skype: sofiatopsky

16 Oct 17:23

Minority Report system for drivers improves road safety

by Katy Young

Despite huge developments, consumers still have a significant wait before driverless cars become a viable option. In the meantime, researchers from Robo Brain and Brain4Cars have developed a promising system for computer-assisted driving, which uses sensors and deep learning to anticipate dangerous maneuvers and alert the driver before they actually make them.


The majority of road deaths are caused by drivers attempting unsafe maneuvers, which is why Robo Brain monitors the drivers themselves as well as external factors. The system comprises multiple sensors, including cameras, wearable devices, and tactile sensors. The devices monitor the driver in real time, and an algorithm learns the individual's driving behavior. Over time it is able to anticipate a dangerous driving maneuver up to 3.5 seconds in advance and provide a warning that will discourage the driver from going ahead with it. For example, it might learn that glancing to the right frequently means the driver is going to overtake, and alert them if doing so is likely to result in a crash.


The system was created by the Departments of Computer Science at Cornell and Stanford Universities. Researchers collected thousands of miles of natural driving data from a variety of drivers. It combines this data with the individual’s habits, which it can learn very quickly: in recent tests, after just over 100 miles, the system was able to predict the actions of 10 drivers with 90 percent accuracy.

Computer-assisted driving could help to pave the way for acceptance of driverless cars, winning over skeptics by showing the improvements to safety that computers can provide. Are there other technologies that provide a middle ground for users who are apprehensive of big changes?

Website: www.brain4cars.com
Contact: ashesh@cs.cornell.edu

The post Minority Report system for drivers improves road safety appeared first on Springwise.

15 Oct 21:56

France begins construction of its first photovoltaic road

by Morgan
There are many ways to recharge electric vehicles. The most interesting, and most ecological, are without a doubt those that draw on renewable energy. And if, on top of that, a vehicle could be recharged while it is driving, that would be perfect. Photovoltaic roads therefore hold real promise. And France is getting on board. […]
15 Oct 21:54

Kinect to HoloLens scanning with RoomAlive

by Greg Duncan

On the Coding4Fun blog, we've been doing a theme week focusing on projects related to last week's Hardware announcement. After seeing this project, I thought it cool to continue the theme here too.. :)

Roland Smeenk shared this great project, that just looks awesome...

Rebuilding the HoloLens scanning effect with RoomAlive Toolkit

The initial video that explains the HoloLens to the world contains a small clip that visualizes how it can see the environment. It shows a pattern of large and smaller triangles that gradually overlay the real world objects seen in the video. I decided to try to rebuild this effect in real life by using a projection mapping setup that used a projector and a Kinect V2 sensor.

HoloLens room scan

Prototyping in Shadertoy

First I experimented with the idea by prototyping a pixel shader in Shadertoy. Shadertoy is an online tool that allows developers to prototype, experiment, test and share pixel shaders using WebGL. I started with a raymarching example by Iñigo Quilez and set up a small scene with a floor, a wall and a bench. The calculated 3D world coordinates could then be used for overlaying the triangle effect. The raymarched geometry would later be replaced by geometry scanned with the Kinect V2. The screenshot below shows what the effect looks like. The source code of this shader can be found on the Shadertoy website.

...

Projection mapping with RoomAlive Toolkit

During Build 2015 Microsoft open sourced a library called the RoomAlive Toolkit that contains the mathematical building blocks for building RoomAlive-like experiences. The library contains tools to automatically calibrate multiple Kinects and projectors so they can all use the same coordinate system. This means ...

Bring Your Own Beamer

The installation was shown at the Bring Your Own Beamer event held on September 25th 2015 in Utrecht, The Netherlands. For this event I made some small artistic adjustments.....

...

Project Information URL: http://smeenk.com/hololens-scanning-effect/

Contact Information:

Follow @CH9
Follow @Coding4Fun
Follow @KinectWindows
Follow @gduncan411

15 Oct 21:51

An interactive book for creatives that uses your voice to extend the experience

by François Castro Lara

Adobe and Fotolia have created SOOON, a connected book that aims to educate creatives about tomorrow's trends.

Read more on Creapills.com

15 Oct 20:03

Enchanting Autumn Forests Photography

by Léa

Czech photographer Janek Sedlář is behind these sublime photographs of forests wrapped in the blanket of autumn fog. The colors of the season are beautifully showcased, lending the shots an enchanted feel, with landscapes that seem straight out of fantasy tales.

15 Oct 20:01

The mobile apartment has arrived (and it's a container!)

While apartment prices soar in city centers and many units are old and in poor condition, the company Kasita has come up with a mobile, technology-driven solution to the housing crisis.

The idea emerged from an experiment by its founder, Jeff Wilson, who lived for a year in a 10-square-meter dumpster.

Mobile apartment
Smart city

Read more

15 Oct 19:58

Video system lets you control another human's expressions in real-time

by Chris Plante

Researchers have created a video-modifying program that transfers the expressions of a subject onto live video of another subject's face in real time. The video warrants cliché: it has to be seen to be believed.

The project is a collaboration between researchers from the University of Erlangen-Nuremberg, the Max Planck Institute for Informatics, and Stanford University. Footage of the project was released in September with a paper titled "Real-time Expression Transfer for Facial Reenactment." Here's how the abstract describes what makes their system unique:

The novelty of our approach lies in the transfer and photorealistic re-rendering of facial deformations and detail into the target video in a way that the newly-synthesized...

Continue reading…

15 Oct 06:46

New York Comic Con 2015: A Cosplay Music Video by Sneaky Zebra

by Geeks are Sexy

Our pals from Sneaky Zebra are back with a new cosplay music video, and this time, they’re covering the New York edition of the con! I’ll also include the one from Aggressive Comix below for you guys to check out!

[Sneaky Zebra | Aggressive Comix]

The post New York Comic Con 2015: A Cosplay Music Video by Sneaky Zebra appeared first on Geeks are Sexy Technology News.

15 Oct 06:43

Tesla’s New Autopilot System Is Creepy And Wonderful [Video]

by Geeks are Sexy

The folks from Jalopnik got on board a Tesla Model S and drove with the autopilot system turned on. Let’s just say that even though I was not behind the wheel, just seeing the thing braking and changing lanes by itself kind of freaked me out.

[Jalopnik]

The post Tesla’s New Autopilot System Is Creepy And Wonderful [Video] appeared first on Geeks are Sexy Technology News.

14 Oct 16:02

SOOON by Fotolia, a forward-looking guide to design

by FrenchWeb

SOOON, a forward-looking guide to design.

The post SOOON by Fotolia, a forward-looking guide to design appeared first on FrenchWeb.fr.

14 Oct 15:58

23andMe's latest round of funding might signal a comeback

by Arielle Duhaime-Ross

23andMe wants to expand its reach — and possibly its consumer health product line. The personal genomics company announced today that it had raised $115 million in venture capital financing. Part of that money will help the company expand its operations abroad, according to a press release. But Bloomberg reports that the money will also be used to accelerate work on a "revamped product with health analysis," which 23andMe hopes to launch by the end of this year.

Continue reading…