Shared posts

16 Dec 18:56

These are the Planters You are Looking For!

by Geeks are Sexy

Etsy user RedwoodStoneworks uses silicone molds and a little plaster to make what have to be the most awesome planters that have ever been created. Don’t believe me? Check those out:

From RedwoodStoneworks:

Each piece is finely finished, I make each one from scratch, sanding and beveling all rough edges for a smooth finely finished look! All details for realistic finish are hand painted.

[Pop Culture Planters]

The post These are the Planters You are Looking For! appeared first on Geeks are Sexy Technology News.

14 Dec 07:18

These face-generating systems are getting rather too creepily good for my liking

by Devin Coldewey

Machine learning models are getting quite good at generating realistic human faces — so good that I may never trust a machine, or human, to be real ever again. The new approach, from researchers at Nvidia, leapfrogs others by separating levels of detail in the faces and allowing them to be tweaked separately. The results are eerily realistic.

The paper, published on the preprint repository arXiv (PDF), describes a new architecture for generating and blending images, particularly human faces, that “leads to better interpolation properties, and also better disentangles the latent factors of variation.”

What that means, basically, is that the system is more aware of meaningful variation between images, and at a variety of scales to boot. The researchers’ older system might, for example, produce two “distinct” faces that were mostly the same, except that one’s ears were erased and its shirt was a different color. That’s not really distinctiveness — but the system doesn’t know that those aren’t important parts of the image to focus on.

It’s inspired by what’s called style transfer, in which the important stylistic aspects of, say, a painting, are extracted and applied to the creation of another image, which (if all goes well) ends up having a similar look. In this case, the “style” isn’t so much the brush strokes or color space, but the composition of the image (centered, looking left or right, etc.) and the physical characteristics of the face (skin tone, freckles, hair).

These features can have different scales, as well — at the fine side, it’s things like individual facial features; in the middle, it’s the general composition of the shot; at the largest scale, it’s things like overall coloration. Allowing the system to adjust all of them changes the whole image, while only adjusting a few might just change the color of someone’s hair, or just the presence of freckles or facial hair.
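
To make the scale idea concrete, here is a toy sketch (not Nvidia’s code) of how a style-based generator can take its per-layer style vectors from two different faces. The layer count, dimensions and scale boundaries below are illustrative assumptions; a real generator would feed each row into one of its resolution blocks.

```python
import numpy as np

rng = np.random.default_rng(0)
n_layers, latent_dim = 14, 512          # illustrative sizes, not the paper's exact values

w_source = rng.normal(size=latent_dim)  # latent style vector for the "source" face
w_style = rng.normal(size=latent_dim)   # latent style vector for the "style" face

# Decide, per scale, which face supplies the style: coarse layers (pose/composition)
# from the style face, middle (facial features) and fine (coloration) from the source.
take_from_style = [i < 4 for i in range(n_layers)]

per_layer = np.stack([w_style if flag else w_source for flag in take_from_style])

# A real generator would feed one row of per_layer into each resolution block;
# changing which rows come from which face alters pose, features or color independently.
print(per_layer.shape)  # (14, 512)
```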

In the image at the top, notice how completely the faces change, yet clear markers of both the “source” and “style” faces are still present, for instance the blue shirts in the bottom row. In other cases things are made up out of whole cloth, like the kimono the kid in the very center seems to be wearing. Where’d that come from? Note that all this is totally variable, not just A + B = C, but with all aspects of A and B present or absent depending on how the settings are tweaked.

None of these are real people. But I wouldn’t look twice at most of these images if they were someone’s profile picture or the like. It’s kind of scary to think that we now have basically a face generator that can spit out perfectly normal looking humans all day long. Here are a few dozen:

It’s not perfect, but it works. And not just for people. Cars, cats, landscapes — all this stuff more or less fits the same paradigm of small, medium and large features that can be isolated and reproduced individually. An infinite cat generator sounds like a lot more fun to me, personally.

The researchers also have published a new data set of face data: 70,000 images of faces collected (with permission) from Flickr, aligned and cropped. They used Mechanical Turk to weed out statues, paintings and other outliers. Given the standard data set used by these types of projects is mostly red carpet photos of celebrities, this should provide a much more variable set of faces to work with. The data set will be available for others to download here soon.

13 Dec 13:21

Salesforce Opens Up Lightning Platform to World’s 7 Million+ JavaScript Developers

by KevinSundstrom

In a move that will open its platform to the more than 7 million JavaScript developers worldwide, Salesforce today announced its Lightning Web Components framework: a technology that lets developers use JavaScript to customize browser-based web applications built on top of Salesforce’s core capabilities, in the same way they might use JavaScript to customize the browser side of any other web app.

10 Dec 21:20

Trello acquires Butler to add power of automation

by Ron Miller

Trello, the organizational tool owned by Atlassian, announced an acquisition of its very own this morning when it bought Butler for an undisclosed amount.

What Butler brings to Trello is the power of automation, stringing together a bunch of commands to make something complex happen automatically. As Trello’s Michael Pryor pointed out in a blog post announcing the acquisition, we are used to tools like IFTTT, Zapier and Apple Shortcuts, and this will bring a similar type of functionality directly into Trello.

Screenshot: Trello

“Over the years, teams have discovered that by automating processes on Trello boards with the Butler Power-Up, they could spend more time on important tasks and be more productive. Butler helps teams codify business rules and processes, taking something that might take ten steps to accomplish and automating it into one click,” Pryor wrote.

This means that Trello can be more than a static organizational tool. Instead, it can move into the realm of lightweight business process automation. For example, this could allow you to move an item from your To Do board to your Doing board automatically based on dates, or to share tasks with appropriate teams as a project moves through its life cycle, saving a bunch of manual steps that tend to add up.
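
As a rough illustration of the kind of rule Butler automates, here is a small sketch against Trello’s public REST API that moves overdue cards from one list to another. The list IDs and credentials are placeholders, and Butler itself expresses rules declaratively rather than through code like this.

```python
from datetime import datetime, timezone

import requests

API = "https://api.trello.com/1"
AUTH = {"key": "YOUR_API_KEY", "token": "YOUR_TOKEN"}   # placeholder credentials
TODO_LIST_ID = "5c0example0001"                          # placeholder list IDs
DOING_LIST_ID = "5c0example0002"

def move_due_cards():
    """Move every card in 'To Do' whose due date has passed into 'Doing'."""
    cards = requests.get(f"{API}/lists/{TODO_LIST_ID}/cards", params=AUTH).json()
    now = datetime.now(timezone.utc)
    for card in cards:
        due = card.get("due")  # ISO 8601 string, e.g. "2018-12-10T21:20:00.000Z"
        if due and datetime.fromisoformat(due.replace("Z", "+00:00")) <= now:
            requests.put(f"{API}/cards/{card['id']}",
                         params={**AUTH, "idList": DOING_LIST_ID})

if __name__ == "__main__":
    move_due_cards()
```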

The company indicated that it will be incorporating Butler’s capabilities directly into Trello in the coming months. The functionality will be available to all levels of users, including the free tier, though Trello promises more advanced features for Business and Enterprise customers when the integration is complete. Pryor also suggested that more automation could be coming to Trello: “Butler is Trello’s first step down this road, enabling every user to automate pieces of their Trello workflow to save time, stay organized and get more done.”

Atlassian bought Trello in 2017 for $425 million, but this acquisition indicates that Trello is functioning quasi-independently as part of the Atlassian family.

10 Dec 08:31

China’s JD.com teams up with Intel to develop ‘smart’ retail experiences

by Jon Russell

Months after it landed a major $550 million investment from Google, China’s JD.com — the country’s second highest-profile e-commerce company behind Alibaba — has teamed up with another U.S. tech giant: Intel.

JD and Intel said today that they will set up a “lab” focused on bringing internet-of-things technology into the retail process. That could include new-generation vending machines, advertising experiences, and more.

That future is mostly offline — or, in China tech speak, ‘online-to-offline’ retail — combining the benefits of e-commerce with brick-and-mortar shopping. Already, for example, customers can order ahead of time and come in store for collection, buy items without a checkout, take advantage of ‘smart shelves’ or simply try products in person before they buy them.

Indeed, TechCrunch recently visited a flagship JD ‘7Fresh’ store in Beijing and reported on the hybrid approach that the company is taking.

JD is backed by Chinese internet giant Tencent and valued at nearly $30 billion. The company already works with Intel on personalized shopping experiences, but this new lab is focused on taking things further with new projects and working to “facilitate their introduction to global markets.”

“The Digitized Retail Joint Lab will develop next-generation vending machines, media and advertising solutions, and technologies to be used in the stores of the future, based on Intel architecture,” the companies said in a joint announcement.

JD currently operates three 7Fresh stores in China but it is aiming to expand that network to 30. It has also forayed overseas, stepping into Southeast Asia with the launch of cashier-less stores in Indonesia this year.

07 Dec 12:49

Jibo Shuts Down, Selling Off Robot Parts

by Bret Kinsella

Jibo was one of the best funded and most publicized social robots around. Despite that, it appears the company and robot are no more. An investment management firm in New York purchased the assets on June 20, 2018. According to The Robot Report:

“Social robot maker Jibo has sold its IP assets. According to a former Jibo executive with direct knowledge of the situation, New York-based investment management firm SQN Venture Partners is the new owner.”

Jibo Layoffs in June Preceded Asset Sell-off

Around that same time in June, BostInno reported that Jibo’s California office was marked as permanently closed and layoffs at the Boston Headquarters were “significant.” Co-founded in 2012 by MIT’s Cynthia Breazeal, the company raised $61 million from more than a dozen different investors including blue-chip VC firm Charles River Ventures. Jibo was valued at $200 million and had 95 employees as late as November 2016 according to Pitchbook.

The robot finally came to market in October 2017 but apparently did not establish enough momentum or favorable user reviews to overcome its $899 price point. It also was apparently delayed so much that Indiegogo required the company to refund over $3 million in pre-orders before the robot came to market. MediaPost’s Chuck Martin summed up the situation succinctly last week, saying:

“The closure comes as no surprise since the company laid off much of its workforce in June…In July, another social robot named Kuri was shut down. That device was from the Bosch Startup Platform and was an award winner launched at CES 2017. Social and home robots are getting better as robot makers continue their quest to create a robot that consumers not only want but also will pay for.”

It Couldn’t Do Much More Than a Headless Robot, i.e. A Smart Speaker

There were many aspects of Jibo’s personality that made it unique. Its physical world interaction was compelling, but design doesn’t always win. Sometimes great design can’t overcome a high price point, limited utility, and consumers not quite sure why they need the product. While Jibo was trying to find its way, the founder’s vision was being realized in a less sophisticated way by headless robots, also known as smart speakers. For $899, Jibo could tell you the news and weather, set a timer, and play music or tell you a joke all in response to natural language interaction. For less than $30 this week, Amazon Echo Dot and Google Home Mini can perform those tasks and much more.

Smart speakers have brought voice accessible media, information, entertainment, and other utilities into the home for a nominal cost and you don’t have to worry about the devices accidentally rolling down the stairs. In some ways, smart speakers are paving the way for social robots by training consumers in new behaviors that involve voice interaction with computers. However, these devices are also delivering the low hanging fruit of benefits that many social robots sought to provide. This means social robots have to deliver clearly differentiated and meaningfully beneficial value beyond what smart speaker-based voice assistants offer today. Those use cases surely exist and we have seen several interesting robot applications in business settings. The unanswered question is what benefits will spur consumers to want and pay for social robots.



The post Jibo Shuts Down, Selling Off Robot Parts appeared first on Voicebot.

07 Dec 12:47

7 things to think about voice

by David Riggs

The next few years will see voice automation take over many aspects of our lives. Although voice won’t change everything, it will be part of a movement that heralds a new way to think about our relationship with devices, screens, our data and interactions.

We will become more task-specific and less program-oriented. We will think less about items and more about the collective experience of the device ecosystem they are part of. We will enjoy the experiences they make possible, not the specifications they celebrate.

In the new world, I hope we relinquish our role as the slaves we are today and get back in control.

Voice won’t kill anything

The standard way that technology arrives is to augment more than replace. TV didn’t kill the radio. VHS and then streamed movies didn’t kill the cinema. The microwave didn’t destroy the cooker.

Voice more than anything else is a way for people to get outputs from and give inputs into machines; it is a type of user interface. With UI design we’ve had the era of punch cards in the 1940s, keyboards from the 1960s, the computer mouse from the 1970s and the touchscreen from the 2000s.

All four of these mechanisms are around today and, with the exception of the punch card, we freely move between input types based on context. Touchscreens are terrible in cars and on gym equipment, but they are great for tactile applications. Computer mice are great for pointing and clicking. Each input does very different things brilliantly and badly. We have learned what each is best used for.

Voice will not kill brands, it won’t hurt keyboard sales or touchscreen devices — it will become an additional way to do stuff; it is incremental, not cannibalistic.

We need to design around it

Nobody wanted the computer mouse before it was invented. In fact, many were perplexed by it because it made no sense in the previous era, where we used command lines, not visual icons, to navigate. When I worked with Nokia on touchscreens before the iPhone, the user experience sucked because the operating system wasn’t designed for touch. 3D Touch still remains pathetic because few software designers got excited by it and built for it.

What is exciting about voice is not using ways to add voice interaction to current systems, but considering new applications/interactions/use cases we’ve never seen.

At the moment, the burden is on us to fit around the limitations of voice, rather than have voice work around our needs.

A great new facade

Have you ever noticed that most companies’ desktop websites are their worst digital interface? Their mobile site is likely better, and their mobile app will be best. Most airline or hotel or bank apps don’t offer pared-down experiences (as was once the case), but their very fastest, slickest experience with the greatest functionality. What tends to happen is that new things get new capex, the best people and the most ability to bring change.

However, most digital interfaces are still designed around the silos, workflows and structures of the company that made them. Banks may offer eight different ways to send money to someone or something based around their departments; hotel chains may ask you to navigate by their brand of hotel, not by location.

The reality is that people are task-oriented, not process-oriented. They want an outcome and don’t care how. Do I give a crap if it’s Amazon Grocery or Amazon Fresh or Amazon Marketplace? Not one bit. Voice allows companies to build a new interface on top of the legacy crap they’ve inherited. I get to “send money to Jane today,” not press 10 buttons around their org chart.

It requires rethinking

The first time I showed my parents a mouse and told them to double-click, I thought they were having a fit. The cursor would move in jerks and often get lost. The same dismay and disdain I once had for them, I now feel every time I try to use voice. I have to reprogram my brain to think about information in a new way and to reconsider how my brain works. While this will happen, it will take time.

What gets interesting is what happens to the 8-year-olds who grow up thinking of voice first, and what happens when developing nations embrace tablets with voice, not desktop PCs, to educate. When people grow up with something, their native understanding of what it means and what it makes possible changes. It’s going to be fascinating to see what becomes of this canvas.

Voice as a connective layer

We keep being dumb and thinking of voice as the way to interact with “a” machine rather than as a glue between all machines. Voice is an inherently crap way to get outputs; if a picture paints a thousand words, how long will it take to buy a t-shirt? The real value of voice is as a user interface across all devices. Advertising in magazines should offer voice commands to find out more. You should be able to yell at the Netflix carousel, or at TV ads to add products to your shopping list. Voice won’t be how we “do” entire things, it will be how we trigger or finish things.

Proactivity

We’ve only ever assumed we talk to devices first. Do I really want to remember the command for turning on lights in the home and utter six words to make it happen? Do I want to always be asking? Assuming devices are selective about when they speak first, it’s fun to see what happens when voice is proactive. Imagine the possibilities:

  • “Welcome home, would you like me to select evening lighting?”
  • “You’re running late for a meeting, should I order an Uber to take you there?”
  • “Your normal Citi Bike station has no bikes right now.”
  • “While it looks sunny now, it’s going to rain later.”

Automation

While many think we don’t want to share personal information, there are ample signs that if we get something in return, trust the company and there is transparency, it’s OK. Voice will not develop alone; it will progress alongside Google suggesting email replies, Amazon suggesting things to buy and Siri contextually suggesting apps to use. We will slowly become used to the idea of outsourcing our thinking and decisions, somewhat, to machines.

We’ve already outsourced a lot; we can’t remember phone numbers, addresses or birthdays, and we even rely on images to jog our recollection of experiences, so it’s natural we’ll outsource some decisions.

The medium-term future in my eyes is one where we allow more data to be used to automate the mundane. Many think that voice is asking Alexa to order Duracell batteries, but it’s more likely to be never thinking about batteries or laundry detergent or other low-consideration items again, nor about the subscriptions that replenish them.

There is an expression that a computer should never ask a question for which it can reasonably deduce the answer itself. When a technology is really here we don’t see, notice or think about it. The next few years will see voice automation take over many more aspects of our lives. The future of voice may be some long sentences and some smart commands, but mostly perhaps it’s simply grunts of yes.

06 Dec 21:21

On The Naughty/Nice List Sweater

by staff

Walking the thin line between naughty and nice is easier than ever when you show up dressed in this naughty/nice list sweater. The eye-catching design lets you flip-flop between the naughty and nice list depending on your current mood.

Check it out

$19.58

05 Dec 16:48

I Crashed A Mixed Reality Go Kart Into A Real Barrier

by Ian Hamilton

I drove 125 miles to K1 Speed in the Los Angeles area coasting at 70 miles per hour most of the way. Now I’m looking at one of K1’s karts on a real-world race track. The seat is low to the ground and I sit down, stretching out my legs on either side of the vehicle and wondering if traditional driving experience will translate.

The kart features a temporary rigging to attach a computer and Oculus Rift VR headset. The speed of the kart is remotely adjustable by the system Master of Shapes is demonstrating. As part of this rigging, lights effectively broadcast the kart’s position to cameras overhead spanning the length of the winding track. There’s even a button on the wheel that could deliver one of the world’s first mixed reality versions of something like Mario Kart.

Sure, it is amazing to wear a VR headset so you can sit in Mushroom Kingdom while seated on a real-world motion platform. But that’s a different caliber of experience from the one I’m testing, which will move my body through the real world in an accurate feedback loop with the way I push the pedals and turn the wheel. It is similar to the “mixed reality” experience we saw in the Oculus Arena at the most recent Oculus Connect VR developer’s conference, which incorporated real-world mapping. Except this time I’ll be moving through real space in a vehicle under my control.

Which brings me back to that button on the wheel — the one that “could deliver one of the world’s first mixed reality versions of something like Mario Kart.” Representatives from Master of Shapes told me not to push the button. They were explicit about it before I got in the kart. The button was intended entirely for development purposes at the moment I sat down.

One day there could be races here at K1 where a kid too young to drive a kart on their own could grab a gamepad and log into the same race as their elder sibling out on the actual “speedway.” One day that button on the wheel could launch a virtual weapon to slow down another player’s kart.

I press down on the pedal and…

Not long after the video above ends there’s a hard left turn and, emboldened by my growing confidence while blindfolded to the real world, I move my hands into a new position. I should remind you again that they told me not to push the button. In fact, they even warned me what would happen if I did: the virtual world would rotate 90 degrees off the physical barriers of the real world.

“Oh ok,” I thought at the time. “That’s bad. Don’t touch the button. Now let me drive the thing.”

So I’m hurtling around that corner and suddenly the world snaps into a new position. In front of my eyes now, directly ahead, is the railing of the virtual track. I panic and can’t remember which foot to use to brake the kart.

Instead, I brace and hope for the best.

I seem to be fine for a few seconds and then BAM!

I took the Rift off and laughed. They told me how to put the kart in reverse and we wheeled it back to the starting line for a reset. The second time, I went slow for the first lap and then really pressed the pedal down for the second one. It all worked fine for a few laps as I came back to where I started in the real world.


The post I Crashed A Mixed Reality Go Kart Into A Real Barrier appeared first on UploadVR.

04 Dec 07:26

AsReader RFID Reader/Writer, Barcode Scanner, SoftScan and more

by Charbax

At the 2018 IDTechEx Show! in Santa Clara, AsReader, Inc. showcased a variety of hardware consisting of RFID Reader/Writers, 1D and 2D Barcode Scanners, and an all-new medical-grade battery/wireless charging sled with case. From the pocket-sized AsReader Barcode Scanner to the 10m/32ft long-distance GUN-Type RFID Reader and/or Barcode Scanner, AsReader hardware is compatible with most iOS devices, including the iPhone 8 Plus/7 Plus/6s Plus/6 Plus, iPhone 8/7/6s/6, iPhone SE/5s/5, iPod touch 6th/5th generation and iPad mini 3/2/1. AsReader’s handheld sleds are available with a white or black case for tracking logistics, healthcare patients and medications, retail inventory cycle counts and markdowns, and event management. The standard barcode scanners, UHF RFID Reader/Writers and HF/NFC Readers come with a royalty-free SDK with APIs for connecting to other software. AsReader also takes orders for Android versions with a small minimum order quantity (MoQ).

04 Dec 07:23

Google is building digital art galleries you can step into

by Lucas Matney

Google wants to help you take a closer look at the art world.

Google’s Arts & Culture app has long been one of the company’s cooler niche apps and one that I often feel guilty about overlooking every time I rediscover it. Today, the company has added another experience into the mix, focused on collecting the known works of Dutch master Johannes Vermeer and curating them in a single place.

The feature looks a lot like many of the company’s other deep dives, including listicles of factoids, interviews with experts and editorials. What makes this presentation unique is that the company actually constructed a miniature 3D art gallery that can utilize your phone’s AR functionality to plop into physical space in front of you.

With ARCore or ARKit, you can move through the “Pocket Gallery” and get close to the high-resolution captures of the paintings while also bringing up information about the works.

Having just tried it, I can say this is one of those things that honestly doesn’t make a ton of sense to do with phone AR. Having a fully rendered gallery pop on your coffee table is an interesting gimmick, but they probably could have ditched the AR for a fully rendered 3D environment that’s more of a traversable object or just left the immersive views for VR and stuck with 2D exploration on your phone.

Nevertheless, it all makes for some interesting experimentation, and it’s just cool to see Google trying out new things with experiencing digital art in a more immersive way. Google’s Arts & Culture app is available on iOS and Android.

02 Dec 20:43

Bright spots in the VR market

by Jonathan Shieber
Andy Kangpan Contributor
Andy Kangpan is an investor at Two Sigma Ventures, where he helps find, fund, and support early-stage technology companies.

Virtual reality is in a public relations slump. Two years ago the public’s expectations for virtual reality’s potential were at their peak. Many believed (and still continue to believe) that VR would transform the way we connect, interact and communicate in our personal and professional lives.

Google Trends highlighting search trends related to virtual reality over time; the “Note” refers to an improvement in Google’s data collection system that occurred in early 2016

It’s easy to understand why this excitement exists once you put on a head-mounted display. While there are still a limited number of compelling experiences, after you test some of the early successes in the field, it’s hard not to extrapolate beyond the current state of affairs to a magnificent future where the utility of virtual reality technology is pervasive.

However, many problems still exist. The all-in cost for state of the art headsets is still out of reach for the mass market. Most “high-quality” virtual reality experiences still require users to be tethered to their desktops. The setup experience for mass market users is lathered in friction. When it comes down to it, the holistic VR experience is a non-starter for most people. We are effectively in what Gartner refers to as the “trough of disillusionment.”

Gartner’s hype cycle for “Human-Machine Interface” in 2018 places many related VR related fields (e.g. mixed reality, AR, HMDs, etc.) in the “Trough of Disillusionment”

Yet, the virtual reality market has continued its slow march to mass adoption, and there are tangible indicators that suggest we could be nearing an inflection point.

A shift toward sustainable hardware growth

What you do and do not consider a virtual reality display can dramatically impact your view of the state of the VR hardware industry. Head-mounted displays (HMDs) fall into three categories:

  • Screenless viewers — affordable devices that turn smartphones into a VR experience (e.g. Google Cardboard, Samsung Gear VR, etc.)
  • Standalone HMDs — devices that are not connected to a computer and can independently run content (e.g. Oculus Go, Lenovo Mirage Solo, etc.)
  • Tethered HMDs — devices that are connected to a desktop computer in order to run content (e.g. HTC Vive, Oculus Rift, etc.)

2018 has seen disappointing progress in aggregate headset growth. The overall market is forecast to ship 8.9 million headsets in 2018, up from aggregate shipments of approximately 8.3 million in 2017, according to IDC. On the surface, those numbers hardly describe a market at its inflection point.

However, most of the decline in growth rate can be attributed to two factors. First, screenless viewers have seen a significant decline in shipments as device manufacturers have stopped shipping them alongside smartphones. In the second quarter of 2018, 409,000 screenless viewers were shipped compared to approximately 1 million in the second quarter of 2017. Second, tethered VR headsets have also declined as manufacturers have slowed down the pricing discounts that acted as a steroid to sales growth in 2017.

Looking at the market for standalone HMDs, however, reveals a more promising figure. Standalone VR headsets grew 417 percent due to the global availability of the Oculus Go and Xiaomi Mi VR. Over time, these headsets are going to be the driver of the VR market as they offer significant advantages compared to tethered headsets.

The shift from tethered to standalone VR headsets is significant. It represents a paradigm shift within the immersive ecosystem, where developers have a truly mobile platform that is powerful enough to enable compelling user experiences.

IDC forecasts for AR/VR headset market share by form factor, 2018–2022

A premium market segment

There are a few names that come to mind when thinking about products that are available for purchase in the VR market: Samsung, Facebook (Oculus), HTC and PlayStation. A plethora of new products from these marquee names — and products from new companies entering the market — are opening the category for a new customer segment.

For the past few years, the market effectively had two segments. The first was a “mass market” segment with notorious devices such as the Google Cardboard and the Samsung Gear, which typically sold for less than $100 and offered severely constrained experiences to consumers. The second segment was a “pro market,” with a few notable devices, such as the HTC Vive, that required absurdly powerful computing rigs to operate, but offered consumers more compelling, immersive experiences.

It’s possible that this new emerging segment will dramatically open up the total addressable VR market. This “premium” market segment offers product alternatives that are somewhat more expensive than the mass market, but are significantly differentiated in the potential experiences that can be offered (and with much less friction than the “pro market”).

The Oculus Go, the Xiaomi Mi VR and the Lenovo Mirage Solo are the most notable products in this segment. They are its fastest-growing devices, and they represent a new wave of products that will continue to roll out. This segment could be the tipping point for when we move from the early adopters to the early majority on the VR product adoption curve.

A number of products in this category have been released throughout 2018, including Lenovo’s Mirage Solo and Xiaomi’s Mi VR. What’s more, Oculus recently announced that it will ship a new headset called Quest this spring, which will sell for $399 and will be the most powerful example of a premium device to date. The all-in price range of ~$200–400 places these devices in a segment consumers are already conditioned to pay for (think iPads, gaming consoles, etc.), and they offer differentiated experiences primarily attributable to the fact that they are standalone devices.

30 Nov 21:14

Marriott Group hacked, 300 million customers affected

by Damien Bancal

More than 300 million customers of the Marriott hotel group have been affected by the hacking of the multinational. And you’re still surprised?! For the almost 25 years that ZATAZ has existed (20 of them as zataz.com), I have been reporting on, warning about and explaining the thousands of data leaks that I ...

The article Le groupe Marriott piraté, 300 millions de clients impactés appeared first on ZATAZ.

29 Nov 13:07

Phiar raises $3 million for an AR navigation app for drivers

by Lucas Matney

Augmented reality is a very buzzy space, but the fundamental technologies underpinning it are pushing boundaries across a lot of other verticals. Machine learning, object recognition and visual mapping are the pillars of plenty of new ventures, enabling companies to thrive in the overlap.

Phiar (pronounced fire) is building an augmented reality navigation app for drivers, but the same tech it has built to help drivers easily pinpoint where they need to make their next turn also lets it build up the rich, high-quality mapping data that partners like autonomous car startups so deeply need.

The SF-based company has just closed a $3 million seed deal led by Norwest Venture Partners and The Venture Reality Fund. Other investors include Anorak Ventures, Mayfield Fund, Zeno Ventures, Cross Culture Ventures, GFR Fund, Y Combinator, Innolinks Ventures and Half Court Ventures.

While phone and headset-based AR have received a lot of the broader media attention, the automotive industry is a central focus for a lot of augmented reality startups attracted by the proposition of a mobile environment that can showcase and integrate bulky tech. There have certainly been quite a few heads-up display startups looking to take advantage of a car’s windshield real estate, and prior to joining Y Combinator, Phiar was actually looking to build some of this hardware themselves before deciding on a more software-focused route for the company.

Unlike a lot of phone AR apps built on top of Apple’s or Google’s developer platforms, Phiar’s use case doesn’t quite work within the limitations of those systems, which understandably weren’t built with the idea that a user would be moving at 60 miles per hour. As a result, the company has had to build tech to better understand the geometry of a quickly updating world through a single camera, while ensuring that it’s not just some ugly directional overlay, using techniques like real-time occlusion to make the digital and physical worlds interact nicely.

While the startup’s big consumer-facing play is the free AR mobile app, Phiar is really only an augmented reality company on the surface; its real sell is what it can do with the data and insights gathered from an always-on dash camera. The same object recognition tech that allows the app to seamlessly toss AR animations onto the scene in front of you is also analyzing that environment and uploading metadata to build up its mapping insights.

In addition, the app saves up to 30 minutes of footage from each ride, offering users the utility of a free dash cam in case they get in an accident and need video for an insurance claim, while providing some rich anonymized data for the company to build up high quality mapping data it can sell to partners.

This kind of data is incredibly useful to companies building autonomous car tech, ride sharing companies and a lot of entities that are interested in access to quickly-updating map data. The challenge for Phiar will be building up enough users so that their map data is as rich as their partners will demand.

CEO Chen-Ping Yu says that the startup is in talks with partners in the automotive space to integrate its tech and is also working to bring what it has built to companies in the ride-sharing space. Yu says the company plans to release its consumer app in mid-2019.

29 Nov 13:04

AWS launches new time series database

by Ron Miller

AWS announced a new time series database today at AWS re:Invent in Las Vegas. The new product, called Amazon Timestream, is a fully managed database designed to track items over time, which can be particularly useful for Internet of Things scenarios.

“With time series data each data point consists of a timestamp and one or more attributes and it really measures how things change over time and helps drive real time decisions,” AWS CEO Andy Jassy explained.

He sees a problem, though, with existing open-source and commercial solutions, which he says don’t scale well and are hard to manage. This is, of course, the kind of problem a cloud service like AWS often helps solve.

Not surprisingly, as customers were looking for a good time series database solution, AWS decided to create one itself, which it is introducing as Amazon Timestream.

Jassy said that AWS built Timestream from the ground up, with an architecture that organizes data by time intervals and enables time-series-specific data compression, which leads to less scanning and faster performance.
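
That interval-oriented layout is easy to picture with a toy example. This is a conceptual sketch, not how Timestream is implemented: each point is a timestamp plus attributes, and grouping points into fixed windows lets a query scan whole intervals rather than individual rows.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Toy time series points: (timestamp, attributes).
points = [
    (datetime(2018, 11, 28, 10, 1), {"device": "sensor-1", "temp_c": 21.4}),
    (datetime(2018, 11, 28, 10, 3), {"device": "sensor-1", "temp_c": 21.9}),
    (datetime(2018, 11, 28, 10, 7), {"device": "sensor-1", "temp_c": 22.3}),
]

window = timedelta(minutes=5)
buckets = defaultdict(list)
for ts, attrs in points:
    # Snap each timestamp down to the start of its 5-minute interval.
    bucket_start = datetime.min + ((ts - datetime.min) // window) * window
    buckets[bucket_start].append(attrs["temp_c"])

for start, temps in sorted(buckets.items()):
    print(start, round(sum(temps) / len(temps), 2))  # one aggregate per interval
```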

He claims it will be a thousand times faster at a tenth of the cost, and of course it scales up and down as required and includes the analytics capabilities you need to understand the data you are tracking.

The new service is launching in preview starting today.


29 Nov 13:04

Amazon gets into the blockchain with Quantum Ledger Database & Managed Blockchain

by Sarah Perez

Amazon last year dismissed the idea of getting into the blockchain with AWS, but today that’s changed. The company announced a new service called Amazon Quantum Ledger Database or QLDB, which is a fully managed ledger database with a central trusted authority. The service, which is launching into preview today, offers an append-only, immutable journal that tracks the history of all changes, Amazon said.

And all the changes are cryptographically chained and verifiable.

The company announced the product on stage today at AWS re:Invent, noting QLDB’s other features, including its transparent nature, its ability to automatically scale up or down as needed, its ease of use and its speed. The database can execute two to three times more transactions than existing products, Amazon claimed.

It also announced a managed blockchain service.

“It will be really scalable, you’ll have a much more flexible and robust set of APIs for you to make any kind of changes or adjustments to the ledger database,” said Andy Jassy, AWS CEO, in describing the new QLDB offering.

On the QLDB website, Amazon explains the new database in more depth:

Amazon QLDB is a new class of database that eliminates the need to engage in the complex development effort of building your own ledger-like applications. With QLDB, your data’s change history is immutable – it cannot be altered or deleted – and using cryptography, you can easily verify that there have been no unintended modifications to your application’s data. QLDB uses an immutable transactional log, known as a journal, that tracks each application data change and maintains a complete and verifiable history of changes over time. QLDB is easy to use because it provides developers with a familiar SQL-like API, a flexible document data model, and full support for transactions. QLDB is also serverless, so it automatically scales to support the demands of your application. There are no servers to manage and no read or write limits to configure. With QLDB, you only pay for what you use.
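
Amazon doesn’t expose the journal internals here, but the append-only, cryptographically chained idea the description refers to can be sketched in a few lines. The following is a conceptual illustration of a hash-chained log, not QLDB’s actual implementation or API.

```python
import hashlib
import json

class Journal:
    """Append-only log in which each entry commits to the hash of the previous one,
    so editing or deleting any past entry breaks verification."""

    def __init__(self):
        self.entries = []

    def append(self, document):
        entry = {
            "seq": len(self.entries),
            "doc": document,
            "prev": self.entries[-1]["hash"] if self.entries else "0" * 64,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)

    def verify(self):
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev"] != prev or entry["hash"] != digest:
                return False
            prev = entry["hash"]
        return True

ledger = Journal()
ledger.append({"vehicle": "VIN-123", "owner": "alice"})
ledger.append({"vehicle": "VIN-123", "owner": "bob"})   # history is kept, never overwritten
print(ledger.verify())  # True; tampering with any earlier entry makes this False
```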

QLDB was one of AWS’ blockchain-related announcements today. The company also debuted AWS Managed Blockchain, which can work with QLDB. (More on that here).

“Amazon Managed Blockchain is a fully managed service that allows you to set up and manage a scalable blockchain network with just a few clicks,” Amazon said in an announcement. The product eliminates the overhead required to create the network and automatically scales to meet the demands of thousands of applications running millions of transactions, it said.

It also manages your certificates, lets you easily invite new members to join the network, and tracks operational metrics such as usage of compute, memory, and storage resources.

Managed Blockchain is able to replicate an immutable copy of your blockchain network activity into Amazon Quantum Ledger Database (QLDB), which lets you analyze the network activity outside the network and gain insights into trends.

Interested customers can sign up for Amazon Managed Blockchain preview here.

And those with applications that need an immutable and verifiable ledger database can try out Amazon QLDB here.


29 Nov 13:01

U.S. Army Chooses Microsoft’s HoloLens For $480 Million Contract

by Ian Hamilton

The United States Army awarded a $480 million contract to Microsoft that will equip military personnel with prototype versions of HoloLens intended to increase “lethality, mobility, and situational awareness.”

HoloLens is an all-in-one augmented reality headset from Microsoft which first shipped in 2016 for $3,000. Its robust tracking system constantly maps the world while overlaying digital objects into the central area of its wearer’s vision. While HoloLens isn’t great for immersive games like Rift, Vive or PSVR headsets that take you to another world, its wireless design, high quality tracking of the real world and high price mean the system is ideal for entirely different use cases. As seen above, it’s been used on the International Space Station and NASA used it to visualize rovers long before they make the trip to Mars. A few developers have also tried carving out a niche building on the headset by delivering applications to companies for internal use.

With the U.S. Army and its $480 million award for an “Integrated Visual Augmentation System,” the plan is to procure “approximately two thousand five hundred & fifty IVAS prototypes (to include hardware, software, and the associated interface control documentation) in four increments or ‘capability sets’.”

“Augmented reality technology will provide troops with more and better information to make decisions,” a statement from Microsoft reads. “This new work extends our longstanding, trusted relationship with the Department of Defense to this new area.”

A report from Bloomberg suggests the award could eventually lead the military to purchase more than 100,000 headsets from Microsoft.

You can check out documents on the Federal Business Opportunities website that outline the overall aims of the Army program. I’ve uploaded the “Statement of Objectives,” posted in August, which outlines the scope and aim of the program.

“Current and future battles will be fought with small distributed formations in urban and subterranean environments where current capabilities are not sufficient, a recognized training capability gap the Government has sought to fill since 2009. The IVAS will address this shortfall by providing increased sets and repetitions in complex environments,” the document reads. “Soldier lethality will be vastly improved through cognitive training and advanced sensors, enabling squads to be first to detect, decide, and engage.”


The post U.S. Army Chooses Microsoft’s HoloLens For $480 Million Contract appeared first on UploadVR.

26 Nov 20:46

Banuba raises $7M to supercharge any app or device with the ability to really see you

by Mike Butcher

Walking into the office of Viktor Prokopenya — which overlooks a central London park — you would perhaps be forgiven for missing the significance of this unassuming location, just south of Victoria Station in London. While giant firms battle globally to make augmented reality a “real industry,” this jovial businessman from Belarus is poised to launch a revolutionary new technology for just this space. This is the kind of technology some of the biggest companies in the world are snapping up right now, and yet, scuttling off to make me a coffee in the kitchen is someone who could be sitting on just such a company.

Regardless of whether its immediate future is obvious or not, AR has a future if the amount of investment pouring into the space is anything to go by.

In 2016, AR and VR attracted $2.3 billion worth of investment (a 300 percent jump from 2015), and the market is expected to reach $108 billion by 2021 — 25 percent of which will be aimed at the AR sector. But, according to numerous forecasts, AR will overtake VR in 5-10 years.

Apple is clearly making headway in its AR developments, having recently acquired AR lens company Akonia Holographics, and in releasing iOS 12 this month it has enabled developers to fully utilize ARKit 2, no doubt prompting the release of a new wave of camera-centric apps. This year Sequoia Capital China and SoftBank invested $50 million in AR camera app Snow. Samsung recently introduced its version of the AR cloud and a partnership with Wacom that turns Samsung’s S-Pen into an augmented reality magic wand.

The IBM/Unity partnership lets developers integrate Watson cloud services, such as visual recognition and speech to text, into their Unity applications.

So there is no question that AR is becoming increasingly important, given the sheer amount of funding and M&A activity.

Joining the field is Prokopenya’s “Banuba” project. Although you can download a Snapchat-like app called “Banuba” from the App Store right now, underlying it is a suite of tools; Prokopenya is the founding investor, and he is working closely with the founding team of AI/AR experts behind it to realize a very big vision.

The key to Banuba’s pitch is the idea that its technology could equip not only apps but even hardware devices with “vision.” This is a perfect marriage of both AI and AR. What if, for instance, Amazon’s Alexa couldn’t just hear you? What if it could see you and interpret your facial expressions or perhaps even your mood? That’s the tantalizing strategy at the heart of this growing company.

Better known for its consumer apps, which have effectively been testing its concepts in the consumer field for the last year, Banuba is about to move heavily into the world of developer tools with the release of its new Banuba 3.0 mobile SDK. (It is available to download now in the App Store for iOS devices and the Google Play Store for Android.) It has also now secured a further $7 million in funding from Larnabel Ventures, the fund of Russian entrepreneur Said Gutseriev, and Prokopenya’s VP Capital.

This move will take its total funding to $12 million. In the world of AR, this is like a Romulan warbird de-cloaking in a scene from Star Trek.

Banuba hopes that its SDK will enable brands and apps to utilise 3D Face AR inside their own apps, meaning users can benefit from cutting-edge face motion tracking, facial analysis, skin smoothing and tone adjustment. Banuba’s SDK also enables app developers to utilise background subtraction, which is similar to “green screen” technology regularly used in movies and TV shows, enabling end-users to create a range of AR scenarios. Thus, like magic, you can remove that unsightly office surrounding and place yourself on a beach in the Bahamas…

Because Banuba’s technology equips devices with “vision,” meaning they can “see” human faces in 3D and extract meaningful subject analysis based on neural networks, including age and gender, it can do things that other apps just cannot do. It can even monitor your heart rate via spectral analysis of the time-varying color tones in your face.
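
The heart-rate claim refers to a known family of techniques (remote photoplethysmography): skin color varies faintly with each pulse, so a frequency analysis of the average face color over time reveals the beat. Below is a generic sketch of that idea with a simulated signal, not Banuba’s pipeline; in a real app the signal array would be the per-frame mean green value of the detected face region.

```python
import numpy as np

fps, seconds = 30.0, 20
t = np.arange(int(fps * seconds)) / fps

# Simulated per-frame mean green value of the face: a faint 1.2 Hz (72 bpm)
# pulse buried in camera noise.
signal = 0.02 * np.sin(2 * np.pi * 1.2 * t) + np.random.normal(0, 0.05, t.size)

signal = signal - signal.mean()
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1 / fps)

# Only consider the plausible heart-rate band, roughly 42-180 bpm.
band = (freqs > 0.7) & (freqs < 3.0)
bpm = 60 * freqs[band][np.argmax(spectrum[band])]
print(f"estimated heart rate: {bpm:.0f} bpm")
```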

It has already been incorporated into an app called Facemetrix, which can track a child’s eyes to ascertain whether they are reading something on a phone or tablet or not. Thanks to this technology, it is possible to not just “track” a person’s gaze, but also to control a smartphone’s function with a gaze. To that end, the SDK can detect micro-movements of the eye with subpixel accuracy in real time, and also detects certain points of the eye. The idea behind this is to “Gamify education,” rewarding a child with games and entertainment apps if the Facemetrix app has duly checked that they really did read the e-book they told their parents they’d read.

If that makes you think of a parallel with a certain Black Mirror episode where a young girl is prevented from seeing certain things via a brain implant, then you wouldn’t be a million miles away. At least this is a more benign version…

Banuba’s SDK also includes “Avatar AR,” empowering developers to get creative with digital communication by giving users the ability to interact with — and create personalized — avatars using any iOS or Android device. Prokopenya says: “We are in the midst of a critical transformation between our existing smartphones and future of AR devices, such as advanced glasses and lenses. Camera-centric apps have never been more important because of this.” He says that while developers using ARKit and ARCore are able to build experiences primarily for top-of-the-range smartphones, Banuba’s SDK can work on even low-range smartphones.

The SDK will also feature Avatar AR, which allows users to interact with fun avatars or create personalised ones for all iOS and Android devices. Why should users of Apple’s iPhone X be the only people to enjoy Animoji?

Banuba is also likely to take advantage of the news that Facebook recently announced it was testing AR ads in its newsfeed, following trials for businesses to show off products within Messenger.

Banuba’s technology won’t simply be for fun apps, however. Within two years, the company has filed 25 patent applications with the U.S. patent office, and six of those were processed in record time compared with the average. Its R&D center, staffed by 50 people and based in Minsk, is focused on developing a portfolio of technologies.

Interestingly, Belarus has become famous for AI and facial recognition technologies.

For instance, cast your mind back to early 2016, when Facebook bought Masquerade, a Minsk-based developer of a video filter app, MSQRD, which at one point was one of the most popular apps in the App Store. And in 2017, another Belarusian company, AIMatter, was acquired by Google, only months after raising $2 million. It too took an SDK approach, releasing a platform for real-time photo and video editing on mobile, dubbed Fabby. This was built upon a neural network-based AI platform. But Prokopenya has much bolder plans for Banuba.

In early 2017, he and Banuba launched a “technology-for-equity” program to enroll app developers and publishers across the world. This signed up Inventain, another startup from Belarus, to develop AR-based mobile games.

Prokopenya says the technologies associated with AR will be “leveraged by virtually every kind of app. Any app can recognize its user through the camera: male or female, age, ethnicity, level of stress, etc.” He says the app could then respond to the user in any number of ways. Literally, your apps could be watching you.

So, for instance, a fitness app could see how much weight you’d lost just by using the Banuba SDK to look at your face. Games apps could personalize the game based on what it knows about your face, such as reading your facial cues.

Back in his London office, overlooking a small park, Prokopenya waxes lyrical about the “incredible concentration of diversity, energy and opportunity” of London. “Living in London is fantastic,” he says. “The only thing I am upset about, however, is the uncertainty surrounding Brexit and what it might mean for business in the U.K. in the future.”

London may be great (and will always be), but sitting on his desk is a laptop with direct links back to Minsk, a place where the facial recognition technologies of the future are only now just emerging.

21 Nov 07:36

How Marketers Are Using Chatbots To Increase Sales!

by Aparna Sharma

Exploring Just How Marketers Are Using Personalized Chatbots To Drive Business, Lower Costs & Make Customers Happier

Chatbots for Marketers

In today’s day and age, everything is going digital. Nearly everything is becoming automated or streamlined, and businesses are increasingly looking for new ways to implement cutting-edge and innovative technology like artificial intelligence, virtual reality, and augmented reality.

For instance, in recent years, AI-equipped chatbots have become the talk of the town when it comes to modern business. Not only do these bots allow businesses to connect with their customers in new, exciting, engaging and interactive ways, they also help them streamline some of their customer service processes.

Furthermore, chatbots have become so popular among businesses that they have even made their way into the marketer’s arsenal of strategies. How? Read more below!

Chatbots Offer Personalized Content Automation

Chatbots run on artificial intelligence technology, so they have the smarts needed to compile data, analyze it, and then make informed decisions based on it.

When interacting with customers, this technology allows a chatbot to recommend content based on previous conversations, purchase histories and other buying trends, totally unique to the individual customer.


In today’s day and age, the goal of every marketing specialist is to figure out a way to personalize a product, service, or an experience for every unique customer. Now with chatbots, marketers can do just that!

Picture this: you message a chatbot for an online retailer inquiring about the availability of a specific product. The chatbot may answer your question first, but then it could recommend a similar product that customers also look at when purchasing the product you had originally inquired about. Now, this is something we see all throughout online retail, right? But here’s the difference — that chatbot can now message you upon your next visit with a bunch of different recommendations based on your previous conversations.

Not only can the chatbot recommend products for customers, but it can also then talk about those products in real time. Customers can interact with the chatbot about different products, ask about potential differences between multiple products, and even ask for recommendations based on recent reviews! Essentially, each individual customer now has their own personal chatbot shopper, completely tailored to their individual tastes, interests and preferences. Pretty amazing, isn’t it? You can easily see why chatbots are a marketing specialist’s dream!
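
As a toy sketch of what that history-based recommendation can look like under the hood: score products by how often they co-occur with what the customer already bought, then fold the suggestions into the bot’s reply. The product names, data and function names here are made up for illustration; a production bot would query a live catalog and a real model.

```python
from collections import Counter

# Toy purchase history: which products were bought together in past orders.
past_orders = [
    {"running shoes", "sports socks"},
    {"running shoes", "water bottle"},
    {"yoga mat", "water bottle"},
]

def recommend(user_items, k=2):
    """Suggest products that most often co-occur with what the user already owns."""
    scores = Counter()
    for order in past_orders:
        if order & user_items:                 # this order shares something with the user
            scores.update(order - user_items)  # so its other items get a vote
    return [item for item, _ in scores.most_common(k)]

def handle_message(user_items, product_asked_about):
    """A tiny slice of a bot handler: answer the question, then personalize."""
    reply = f"Yes, {product_asked_about} is in stock."
    suggestions = [s for s in recommend(user_items) if s != product_asked_about]
    if suggestions:
        reply += " Based on your past orders, you might also like: " + ", ".join(suggestions) + "."
    return reply

print(handle_message({"running shoes"}, "sports socks"))
```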

Digital Solution For A Digital World

At the end of the day, everything comes down to results. For businesses, this is no different. Marketing specialists are constantly tasked with finding ways to drive business through digital means. Chatbots serve as the perfect vehicle for this to happen. And as the world becomes increasingly geared toward digital technology, you can be sure that businesses will adapt along with it.

Driving results means connecting with customers and clients on more than just the surface level. Customers expect a personalized experience now more than ever, and chatbots are exactly what’s going to get them there!


Imagine the Bot as If It Was A Funnel

The best way to explain just how marketers are using bots to drive results is to imagine a funnel. There’s the top of the funnel, where ingredients (in our case, users) are poured in. Then there’s the middle of the funnel, where the mixture of ingredients sorts itself out. Then there’s the bottom of the funnel, where everything comes out in a slow, flowing stream.

Here is a look at the typical website funnel vs a chatbot funnel.


In the marketer’s eyes, once users are first introduced, the key is to make sure they are taken care of well enough to keep them coming back for more (the middle of the funnel). Chatbots are really good at finding out what users’ interests are and then automatically sending them the right messaging, thereby increasing their desire. Once their desire is increased, their odds of buying go way up!

After the user purchases, the bot can then be used to send them order updates, handle logistics like shipping and tracking, and even up-sell similar products!

Bots can manage the entire customer journey from discovery, to desire, to purchase, to delivery, to purchasing again.

Now, here’s where the bot comes in.

Top Of The Funnel

This is precisely where a bot can help to draw customers into your business, goods or services. Bots can pull users in through email campaigns, bot marketing, homepage engagements when users visit your website, and even popups on relevant websites. Whatever you can do to ensure that your bot engages potential users early on will ultimately help to draw customers to the top of the funnel.

Middle Of The Funnel

The middle of the funnel covers purchases, transactions, the exchange of services, and so on. This is where the actual business happens, and the best way your bot can help is by making sure everything runs smoothly. For instance, your bot can provide support by reaching out to customers and making sure they're satisfied with their product or service. It can help users diagnose problems as they arise, and it can help design a personalized solution unique to each user.

This is the real spot where your bot can shine!

Bottom Of The Funnel

After your users have completed their purchase, abandoned their cart without a sale, or are seeking shipment and tracking information, your bot can be programmed to reach out to them personally with details related to their specific purchase, their unique order, or items that they have viewed through your website. It can help to recommend other products, offer them a coupon to convince them to pick up that once-abandoned cart, and send out detailed tracking information so that your users can be sure of exactly when they can expect to receive their order.

The key here is that your bot needs to make it clear it doesn't simply forget about users who complete a sale or leave products behind without purchasing them. Remember, in an increasingly digitized world, the most important thing a business can do is create a completely personalized user experience, and bots are no exception!

If you need a plug and play Chatbot Solution for your E-Commerce store check out GoBeyond.

How to turn your Shopify Store into a Chatbot using Chatfuel in 5 Minutes

How to Add a Chatbot to your Website in 7min

Need a website bot? If you’re looking to connect with your customers or clients on a more personal level, give .BOT a shot! .BOT is a domain registry service that allows companies to easily register domains for their bots and integrate them into their current workflow.

We recommend using the .BOT domain to get it all started. Here is the full tutorial on how to add a chatbot to your site in 7min.

How to Set Up Amazon .BOT Domain for Your Chatbot


How Marketers Are Using Chatbots To Increase Sales! was originally published in Chatbots Life on Medium, where people are continuing the conversation by highlighting and responding to this story.

19 Nov 07:06

30 illustrations paying a final tribute to Stan Lee: the "father" of the Marvel superheroes

by Thomas R.

Monday, November 12 was a dark day for comics and superhero fans. Stan Lee, the famous American comic book writer and editor, passed away at the age of 95 from acute pneumonia. An emblematic figure at Marvel, Stan Lee co-created the franchise's most iconic heroes, including Spider-Man, Iron Man, Hulk, Thor, and Black Panther.

For many, Stan Lee is considered the father of the Marvel superheroes, and his passing naturally touches entire generations who grew up with, and identified with, these heroes.

As you might expect, many artists have paid tribute to a figure as emblematic of the comics world as Stan Lee. And at Creapills, we couldn't help but share the ones that touched us. That's why we invite you to discover below 30 of the most beautiful creative tributes to Stan Lee.

Tribute #1

Credits: Pink Phuc

Tribute #2

Credits: chan

Tribute #3

Credits: junchiu

Tribute #4

Credits: auro-isa

Tribute #5

Credits: jonathanserrot

Tribute #6

Credits: THQNordic

Tribute #7

Credits: 9gag

Tribute #8

Credits: beecher-arts

Tribute #9

Credits: marcellobarenghi

Tribute #10

Credits: YairDrew092

Tribute #11

Credits: timelordvictorious

Tribute #12

Credits: Gary Joe Clement

Tribute #13

Credits: Camilla Derrico

Tribute #14

Credits: Steve Breen

Tribute #15

Credits: Delsdrawings

Tribute #16

Credits: J. Scott Campbell

Tribute #17

Credits: Jeff Victorart

Tribute #18

Credits: Lina Prime

Tribute #19

Credits: GurkiratSingh1

Tribute #20

Credits: phantagrafie

Tribute #21

Credits: hayamiyuuchan

Tribute #22

Credits: MayankG29747254

Tribute #23

Credits: eu_amandaiara

Tribute #24

Credits: The Tonus

Tribute #25

Credits: cheng.hank

Tribute #26

Credits: chillguydraws

Tribute #27

Credits: AlifNurAmien1

Tribute #28

Credits: Mark Raats

Tribute #29

Credits: eighthsun

Tribute #30

Credits: mullerpereira

Source: boredpanda.com

This article, 30 illustrations paying a final tribute to Stan Lee: the "father" of the Marvel superheroes, comes from the Creapills blog, the go-to media outlet for creative ideas and marketing innovation.

18 Nov 22:01

Supercon: Designing Your Own Diffractive Optics

by Brian Benchoff

Kelly Peng is an electrical and optical engineer, and founder of Kura AR. She’s built a fusion reactor, a Raman spectrometer, a DIY structured light camera, a linear particle accelerator, and emotional classifiers for likes and dislikes. In short, we have someone who can do anything, and she came in to talk about one of the dark arts (pun obviously intended): optics.

The entire idea of Kura AR is to build an immersive augmented reality experience, and when it comes to AR glasses, there are two ways of doing it. You could go the Google Glass route and use a small OLED and lenses, but these displays aren’t very bright. Alternatively, you could use a diffractive waveguide, like the Hololens. This is a lot more difficult to manufacture, but the payoff will be a much larger field of view and a much more immersive experience.

The lens that Kelly is using in her AR headset is basically a diffraction grating: a series of parallel lines on a piece of plastic. These diffraction gratings redirect light, but the angle depends on the wavelength. Therefore, for a full-color system, you need three layers: one for red, one for blue, and another for green. The trick is how to manufacture this. Kelly took a Hololens lens apart and looked at it with an electron microscope; it appears to be made via fancy, and expensive, photolithography.
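
To see why the grating has to be wavelength-specific, here is a quick Python sketch of the standard grating equation with purely illustrative numbers; the groove spacing is an assumption for demonstration, not Kura's actual geometry.

import math

# Back-of-the-envelope numbers only. The grating equation d * sin(theta) = m * lambda
# shows why each color needs its own layer: the same grating bends each
# wavelength by a different angle.
groove_spacing_nm = 1000.0                      # assumed spacing d between grating lines
wavelengths_nm = {"red": 630, "green": 530, "blue": 460}

for color, wavelength in wavelengths_nm.items():
    theta = math.degrees(math.asin(wavelength / groove_spacing_nm))  # first order, m = 1
    print(f"{color}: first-order diffraction at {theta:.1f} degrees")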

There is another way, though. The feature sizes on this diffraction grating aren't too small, and this could conceivably be done through injection molding. With a lot of coding, simulation, and testing, Kelly realized this was manufacturable with fairly standard injection molding processes, would cost only about $60,000 upfront, and would produce a part for one dollar. That's much better than whatever process goes into the Hololens, and an amazing technical feat that brings the future of AR closer than ever before.

This talk gets deep into diffractive optics. It’s jam-packed with the kind of technical detail you’ll need to know if you’re going to hack together your own AR / VR system. In short, it’s the kind of real-world technical talk that we love. Sit back with some popcorn and your notepad.

The HackadayPrize2018 is Sponsored by:
14 Nov 21:30

Use Case For Augmented Reality In Design

by Suzanne Scacca

Augmented reality has been on marketers’ minds for years now — and there’s a good reason for it. Augmented reality (or AR) is a technology that layers computer-generated images on top of the real world. With the pervasiveness of the mobile device around the globe, the majority of consumers have instant access to AR-friendly devices. All they need is a smartphone connected to the Internet, a high-resolution screen, and a camera viewfinder. It’s then up to you as a marketer or developer to create digital animations to superimpose on top of their world.

This reality-bending technology is consistently named as one of the hot development and design trends of the year. But how many businesses and marketers are actually making use of it?

As with other cutting-edge technologies, many have been reluctant to adopt AR into their digital marketing strategy.

Part of it is due to the upfront cost of using and implementing AR. There’s also the learning curve to think about when it comes to designing new kinds of interactions for users. Hesitation may also come from marketers and designers because they’re unsure of how to use this technology.

Augmented reality has some really interesting use cases that you should start exploring for your mobile app. The following post will provide you with examples of what’s being done in the AR space now and hopefully inspire your own efforts to bring this game-changing tech to your mobile app in the near future.


Augmented Reality: A Game-Changer You Can’t Ignore

Unlike virtual reality, which requires users to purchase pricey headsets in order to be immersed in an altered experience, augmented reality is a more feasible option for developers and marketers. All your users need is a device with a camera that allows them to engage with the external world, instead of blocking it out entirely.

And that’s essentially the crux of why AR will be so important for mobile app companies.

This is a technology that enables mobile app users to view the world through your "filter." You're not asking them to get lost in another reality altogether. Instead, you want to merge their world with your own. And this is something websites have largely been unable to accomplish, since most web interactions lack this level of interactivity.

Let’s take e-commerce websites, for example. Although e-commerce sales increase year after year, people still flock to brick-and-mortar stores in droves (especially for the holiday season). Why? Well, part of it has to do with the fact that they can get their hands on products, test things out and talk to people in real time as they ponder a purchase. Online, it’s a gamble.

As you can imagine, AR in a mobile app can change all that. Augmented reality allows for more meaningful engagements between your mobile app (and brand) and your user. That’s not all though. Augmented reality that connects to geolocation features could make users’ lives significantly easier and safer too. And there’s always the entertainment application of it.

If you’re struggling with retention rates for your app, developing a useful and interactive AR experience could be the key to winning more loyal users in the coming year.

Inspiring Examples Of Augmented Reality

To determine what kind of augmented reality makes the most sense for your website or app, look to examples of companies that have already adopted and succeeded in using this technology.

As Google suggests:

“Augmented reality will be a valuable addition to a lot of existing web pages. For example, it can help people learn on education sites and allow potential buyers to visualize objects in their home while shopping.”

But those aren’t the only applications of AR in mobile apps, which is why I think many mobile app developers and marketers have shied away from it thus far. There are some really interesting examples of this out there though, and I’d like to introduce you to them in the hopes it’ll inspire your own efforts in 2019 and beyond.

Social Media AR

For many of us, augmented reality is already part of our everyday lives, whether we’re the ones using it or we’re viewing content created by others using it. What am I talking about? Social media, of course.

There are three platforms, in particular, that make use of this technology right now.

Snapchat was the first:

Trying out a silly filter on Snapchat (Source: Snapchat)

Snapchat could have included a basic camera integration so that users could take and send photos and videos of themselves to others. But it’s taken it a step further with face mapping software that allows users to apply different “filters” to themselves. Unlike traditional filters which alter the gradients or saturation of a photo, however, these filters are often animated and move as the user moves.

Instagram is another social media platform that has adopted this tech:

Instagram filters go beyond making a face look cute. (Source: Instagram)

Instagram’s Stories allow users to apply augmented filters that “stick” to the face or screen. As with Snapchat, there are some filters that animate when users open their mouths, raise their eyebrows or make other movements with their faces.

One other social media channel that’s gotten into this — that isn’t really a social media platform at all — is Facebook’s Messenger service:

Users can have fun while sending photos or video chatting on Messenger. (Source: Messenger)

Seeing as how users have flocked to AR filters on Snapchat and Instagram, it makes sense that Facebook would want to get in on the game with its mobile property.

Use Case

Your mobile app doesn’t have to be a major social network in order to reap the benefits of image and video filters.

If your app provides a networking or communication component — in-app chat with other users, photo uploads to profiles and so on — you could easily adopt similar AR filters to make the experience more modern and memorable for your users.

Video Objects AR

It’s not just your users’ faces that can be mapped and altered through the use of augmented reality. Spaces can be mapped as well.

While I will go on to talk about pragmatic applications of space mapping and AR shortly, I do want to address another way in which it can be used.

Take a look at 3DBrush:

3D objects in 3DBrush
Adding 3D objects to video with 3DBrush. (Source: 3DBrush)

At first glance, it might appear to be just another mobile app that enables users to draw on their photos or videos. But what’s interesting about this is the 3D and “sticky” aspects of it. Users can draw shapes of all sizes, colors and complexities within a 3D space. Those elements then stick to the environment. No matter where the users’ cameras move, the objects hold in place.
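Under the hood, that "stickiness" usually comes down to a point stored once in world coordinates and re-projected through the camera's current pose every frame. Here is a minimal Python sketch of that idea; the intrinsics, pose values, and anchor position are invented numbers, not 3DBrush's actual code.

import numpy as np

# A toy sketch of why an "anchored" object appears to stay put: the object is
# stored in world coordinates, and every frame it is re-projected through the
# camera's current pose. All numbers here are made up for illustration.
anchor_world = np.array([0.2, 0.0, 1.5, 1.0])    # point pinned 1.5 m in front of the world origin

def project(world_point, world_to_camera, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
    """Project a homogeneous world point into pixel coordinates."""
    cam = world_to_camera @ world_point           # world frame -> camera frame
    return (fx * cam[0] / cam[2] + cx, fy * cam[1] / cam[2] + cy)

pose_t0 = np.eye(4)                               # camera at the origin
pose_t1 = np.eye(4)
pose_t1[0, 3] = -0.1                              # camera has moved 10 cm to the right

print(project(anchor_world, pose_t0))             # anchor's pixel position in frame 1
print(project(anchor_world, pose_t1))             # shifts on screen, but the world point never moved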

LeoApp AR is another app that plays with space in a fun way:

LeoApp maps a flat surface for object placement. (Source: LeoApp AR)

As you can see here, I’m attempting to map this gorilla onto my desk, but any flat surface will do.

Dancing gorilla projection
A gorilla dances on my desk, thanks to LeoApp AR. (Source: LeoApp AR)

I now have a dancing gorilla making moves all over my workspace. This isn't the only kind of animation you can put into place, and it's not the only size either. There are other holographic animations that can be sized to fit your actual physical space, for example if you wanted to chill out side by side with them or have them accompany you as you give a presentation.

Use Case

The examples I’ve presented above aren’t the full representation of what can be done with these mobile apps. While users could use these for social networking purposes (alongside other AR filters), I think an even better use of this would be to liven up professional video.

Video plays such a big part in marketing and will continue to do so in the future. It’s also something we can all readily do now with our smartphones; no special equipment is needed.

As such, I think that adding 3D messages or objects into a branded video might be a really cool use case for this technology. Rather than tailor your mobile app to consumers who are already enjoying the benefits of AR on social media, this could be marketed to businesses that want to shake things up for their brand.

Gaming AR

Thanks to all the hubbub surrounding Pokémon Go a few years back, gaming is one of the better known examples of augmented reality in mobile apps today.

My dog hides in the bushes from Pokemon. (Source: Pokémon Go)

The app is still alive and well and that may be because we’re not hearing as many stories about people becoming seriously injured (or even dying) from playing it anymore.

This is something that should be taken into close consideration before developing an AR mobile app. When you ask users to take part in augmented reality outside the safety of a confined space, there's no way to control what they do afterwards. And that could do some serious damage to your brand if users get injured while playing or just generally wreak havoc out in public (like all those Pokémon Go players who were banned from restaurants).

This is probably why we see AR more used in games like AR Sports Basketball these days.

Play basketball anywhere
Users can map a basketball hoop onto any flat surface with AR Sports Basketball. (Source: AR Sports Basketball)

The app maps a flat surface — be it a smaller version on a desk or a larger version placed on your floor — and allows users to shoot hoops. It’s a great way to distract and entertain oneself or even challenge friends, family or colleagues to a game of HORSE.

Use Case

You could, of course, build an entire mobile app around an AR game as these two examples have shown.

You could also think of ways to gamify other mobile app experiences with AR. I imagine this could be used for something like a restaurant app. For example, a pizza restaurant wants to get more users to install the app and to order food from them. With a big sporting event like the Super Bowl coming up, a “Play” tab is added to the app, letting users throw pizzas down the field. It would certainly be a fun distraction while waiting for their real pizzas to arrive.

Bottom line: get creative with this. AR games aren’t just for gaming apps.

Home Improvement AR

As you’ve already seen, augmented reality enables us to map physical spaces and stick interactive objects to them. In the case of home improvement, this technology is being used to help consumers make purchasing decisions from the comfort of their home (or at their job or on their commute to work, etc.)

IKEA is one such brand that’s capitalized on this opportunity.

Place IKEA products around your home or office. (Source: IKEA)

To start, here is my attempt at shopping for a new desk for my workspace. I selected the product I was interested in and then I placed it into my office. Specifically, I put the accurately sized 3D desk projection in front of my current desk, so I could get a sense for how the two differ and how this new one would fit.

While product specifications online are all well and good, consumers still struggle with making purchases since they can’t truly envision how those products will (physically) fit into their lives. The IKEA Place app is aiming to change all of that.

Take a photo with the IKEA app and search for related products. (Source: IKEA)

The IKEA app is also improving the shopping experience with the feature above.

Users open their camera and point it at any object they find in the real world. Maybe they were impressed by a bookshelf they saw at a hotel they stayed in or they really liked some patio chairs their friends had. All they have to do is snap a picture and let IKEA pair them with products that match the visual description.

IKEA pairs app users with relevant product results. (Source: IKEA)

As you can see, IKEA has given me a number of options not just for the chair I was interested in, but also a full table set.

Use Case

If you have or want to build a mobile app that sells products to B2C or B2B consumers and these products need to fit well into their physical environments, think about what a functionality like this would do for your mobile app sales. You could save time having to schedule on-site appointments or conduct lengthy phone calls whereby salespeople try to convince them that the products, equipment or furniture will fit. Instead, you let the consumers try it for themselves.

Self-Improvement AR

It’s not just the physical spaces of consumers that could use improvement. Your mobile app users want to better themselves as well. In the past, they’d either have to go somewhere in person to try on the new look or they’d have to gamble with an online purchase. Thanks to AR, that isn’t the case anymore.

L’Oreal has an app called Style My Hair:

Try out a new realistic hair color with the L'Oreal app. (Source: Style My Hair)

In the past, these hair color tryouts used to look really bad. You'd upload a photo of your face and the website would slap very fake-looking hair onto your head. It would give users an idea of how the color or style worked with their skin tone, eye shape and so on, but it wasn't always spot-on, which made the experience quite unhelpful.

As you can see here, not only does this app replace my usually mousy-brown hair color with a cool new blond shade, but it stays with me as I turn my head around:

L'Oreal applies the new hair color any which way users turn. (Source: Style My Hair)

Sephora is another beauty company that’s taking advantage of AR mapping technology.

Try on beauty products with the Sephora app. (Source: Sephora)

Here is an example of me feeling not so sure about the makeup palette I’ve chosen. But that’s the beauty of this app. Rather than force customers to buy a bunch of expensive makeup they think will look great or to try and figure out how to apply it on their own, this AR app does all the work.

Use Case

Anyone remember the movie The Craft? I totally felt like that using this app.

The Craft magic
The Craft hair-changing clip definitely inspired this example. (Source: The Craft)

If your app sells self-improvement or beauty products, or simply advises users on next steps they should take, think about how AR could transform that experience. You want your users to be confident when making big changes — whether it be how they wear their makeup for date night or the next tattoo they put on their body. This could be what convinces them to take the leap.

Geo AR

Finally, I want to talk about how AR has and is about to transform users’ experiences in the real world.

Now, I've already mentioned Pokémon Go and how it utilizes the GPS of a user's mobile device. This is what enables players to chase those little critters anywhere they go: restaurants, stores, local parks, on vacation, etc.

But what if we look outside the box a bit? Geo-related AR doesn’t just help users discover things in their physical surroundings. It could simply be used as a way to improve the experience of walking about in the real world.

Think about the last time you traveled to a foreign destination. You may have used a translation guidebook to look up phrases you didn’t know. You might have also asked your voice assistant to translate something for you. But think about how great it would be if you didn’t have to do all that work to understand what’s right in front of you. A road sign. A menu. A magazine article.

The Google Translate app is attempting to bridge this divide for us:

Google Translate uses the camera to find foreign text. (Source: Google Translate)

In this example, I’ve scanned an English phrase I wrote out: “Where is the bathroom?” Once I selected the language I wanted to translate from and to, as well as indicated which text I wanted to focus on, Google Translate attempted to provide a translation:

Google Translate provides a translation of photographed text. (Source: Google Translate)

It’s not 100% accurate — which may be due to my sloppy handwriting — but it would certainly get the job done for users who need a quick way to translate text on the go.
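
For a sense of what the pipeline behind a feature like this involves, here is a rough Python sketch of the camera-text half of it. It assumes the Tesseract OCR engine plus the pytesseract and Pillow packages, and it stubs out the translation step rather than guessing at Google's internal API.

from PIL import Image
import pytesseract

# A rough sketch of the OCR half of a "point your camera at text" flow.
# The translate() function is a stub standing in for whatever translation
# service you prefer; it is not Google Translate's actual implementation.
def translate(text: str, target: str = "en") -> str:
    return text  # placeholder: call your translation service of choice here

photo = Image.open("sign.jpg")                        # a frame captured from the camera
foreign_text = pytesseract.image_to_string(photo, lang="fra")
print(translate(foreign_text, target="en"))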

Use Case

There are other mobile apps that are beginning to make use of this geo-related AR.

For instance, there’s one called Find My Car that I took for a test spin. I don’t think the technology is fully ready yet as it couldn’t accurately “pin” my car’s location, but it’s heading in the right direction. In the future, I expect to see more directional apps — especially, Google and Apple Maps — use AR to improve directional awareness and guidance for users.

Wrapping Up

There are challenges in using AR, that’s for sure. The cost of developing AR is one. Finding the perfect application of AR that’s unique to your brand and truly improves the mobile app user experience is another. There’s also the fact it requires users to download a mobile app, so there’s a lot of work to be done to motivate them to do so.

Gimmicks just won't work — especially if you expect users to download your app and make use of it (remember: retention rates aren't just about downloads). You have to make the augmented reality feature something that's worth engaging with. The first place to start is with your data. As Jordan Thomson wrote:

“AR is a lot more dependent on customer activity than VR, which is far older technology and is perhaps most synonymous with gaming. Designers should make use of big data and analytics to understand their customers’ wants and needs.”

I’d also advise you to spend some time in the apps above. Get a sense for how the technology works and discover what makes it so appealing on a personal level. Compare it to your own mobile app’s goals and see if there’s a way to take AR from just being an idea you’re tossing around to a reality.

Smashing Editorial (ra, yk, il)
14 Nov 20:25

Intel announces the Neural Compute Stick 2

by Pierre Lecourt

It was in Beijing, at the Intel AI Devcon conference, that the brand chose to unveil this Neural Compute Stick 2, which is obviously not a consumer product. The little stick is as compact as ever and promises artificial intelligence, image recognition, and image analysis uses thanks to the latest version of the chipmaker's silicon.


On board the Neural Compute Stick 2 is the very latest Myriad X solution, born of the Movidius acquisition. It brings the same capabilities as the previous version, with the option of using several sticks in parallel to speed up computation, while also making prototyping easier inside any kind of device.
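
For developers, the stick typically shows up as just another inference target. As an illustration, here is a minimal Python sketch using OpenCV's DNN module with the OpenVINO backend; the model file names are placeholders for any network converted to OpenVINO's IR format, and this is not Intel's official sample code.

import cv2

# A minimal sketch of pointing an existing OpenCV DNN pipeline at the stick.
net = cv2.dnn.readNet("model.xml", "model.bin")                 # placeholder IR model files
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)  # route inference through OpenVINO
net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)              # execute on the Neural Compute Stick

frame = cv2.imread("frame.jpg")
net.setInput(cv2.dnn.blobFromImage(frame, size=(300, 300)))
detections = net.forward()
print(detections.shape)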


The most striking example is building drones that can analyze objects captured by one or more cameras in order to, say, avoid obstacles, follow people, or count items.

This kind of use is booming, and many outlets can be imagined for image recognition systems. One can picture what such a setup could do for autonomous delivery services, packing and order-preparation robots, video surveillance, or automated traffic-enforcement systems, for example. Those are generally the first things that come to mind. But the uses are far more varied:


Imagine systems able to detect the presence of certain bacteria in water simply by analyzing samples under a microscope webcam, easily and without calling in a specialist. All of it autonomously, without relying on a remote server, with nothing more than an algorithm trained locally.


Or systems able to alert a dermatologist to a skin anomaly that could develop into cancer, from a simple visual examination.


Or a bot able to analyze images online and detect any child-abuse content, in order to alert hosts and authorities about the stored files…

Intel keeps up its momentum: by offering a compact and affordable device, the brand lets many developers add this kind of capability to all sorts of builds. The second step will of course be shipping Myriad X chips in consumer and industrial products. That is probably where Intel will make real money: when a carmaker, a camera manufacturer, or a robot designer decides to build these chips into its new products at scale.

Intel Movidius Neural Compute Stick: offline intelligence

Source: Intel

Intel announces the Neural Compute Stick 2 © MiniMachines.net. 2018

08 Nov 21:49

Amazon Echo Show vs. Lenovo Smart Display

by Tyler Lacoma

The Amazon Echo Show and Lenovo Smart Display are two popular smart displays, but which one is right for you? Learn about the displays, speakers, capabilities, and other important features before you decide which to buy.

The post Amazon Echo Show vs. Lenovo Smart Display appeared first on Digital Trends.

06 Nov 20:52

Google launches Cloud Scheduler, a managed cron service

by Frederic Lardinois

Google Cloud is getting a managed cron service for running batch jobs. Cloud Scheduler, as the new service is called, provides all the functionality of the kind of standard command-line cron service you probably love to hate, but with the reliability and ease of use of running a managed service in the cloud.

The targets for Cloud Scheduler jobs can be any HTTP/S endpoints and Google’s own Cloud Pub/Sub topics and App Engine applications. Developers can manage these jobs through a UI in the Google Cloud Console, a command-line interface and through an API.

“Job schedulers like cron are a mainstay of any developer’s arsenal, helping run scheduled tasks and automating system maintenance,” Google product manager Vinod Ramachandran notes in today’s announcement. “But job schedulers have the same challenges as other traditional IT services: the need to manage the underlying infrastructure, operational overhead of manually restarting failed jobs and lack of visibility into a job’s status.”

As Ramachandran also notes, Cloud Scheduler, which is currently in beta, guarantees the delivery of a job to the target, which ensures that important jobs are indeed started and if you’re sending the job to AppEngine or Pub/Sub, those services will also return a success code — or an error code, if things go awry. The company stresses that Cloud Scheduler also makes it easy to automate retries when things go wrong.
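
On the receiving end, an HTTP target just needs to report success or failure through its status code. Here is a minimal sketch of such an endpoint, assuming Flask; the route name and the helper function are invented for illustration, not part of Cloud Scheduler itself.

from flask import Flask

app = Flask(__name__)

def generate_report():
    print("crunching numbers...")      # your actual batch work goes here (hypothetical helper)

# A 2xx response tells the scheduler the run succeeded; a non-2xx response
# marks it failed so the configured retry policy can kick in.
@app.route("/tasks/nightly-report", methods=["GET", "POST"])
def nightly_report():
    try:
        generate_report()
        return "done", 200             # success code recorded by Cloud Scheduler
    except Exception:
        return "failed", 500           # signals failure and triggers retries

if __name__ == "__main__":
    app.run(port=8080)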

Google is obviously not the first company to hit upon this concept. There are a few startups that also offer a similar service, and Google’s competitors like Microsoft also offer comparable tools.

Google provides developers with a free quota of three (3) jobs per month. Additional jobs cost $0.10 per month.

05 Nov 20:47

Having a Bad Day? An Adorable Video Shows AI Learning to Get Dressed

by Dan Robitzski

Rise and Shine

Most animators would agree: making a cataclysmic explosion destroy a planet is easy, but human figures and delicate interactions are hard.

That’s why engineers from The Georgia Institute of Technology and Google Brain teamed up to build a cute little AI agent — an AI algorithm embodied in a simulated world — that learned to dress itself using realistic fabric textures and physics.

Blessed

The AI agent takes the form of a wobbling, cartoonish little friend with an expressionless demeanor.

During its morning routine, our little buddy punches new armholes through its shirts, gets bopped around by perturbations, dislocates its shoulder, and has an automatic gown-enrober smoosh up against its face. What a day!

Great Job!

Beyond a fun video, this simulation shows that AI systems can learn to interact with the physical world, or at least a realistic simulation of it, all on their own.

This is thanks to reinforcement learning, a type of AI algorithm where the agent learns to accomplish tasks by seeking out programmed rewards.

In this case, our little friend was programmed to seek out the warm satisfaction of a job well done, and we’re very proud.
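
For readers new to the idea, here is a tiny Python sketch of that reward loop on a made-up five-step task; the toy environment is an assumption for demonstration, not the researchers' cloth-physics simulation. The agent tries actions, receives rewards, and gradually nudges its value estimates toward the behavior that pays off.

import random

# Tabular Q-learning on a toy chain: reach the last state by choosing action 1.
n_states, n_actions = 5, 2
q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):
    state = 0
    for _ in range(50):                                   # cap steps per episode
        if random.random() < epsilon:
            action = random.randrange(n_actions)          # explore
        else:
            action = max(range(n_actions), key=lambda a: q[state][a])  # exploit
        next_state = min(state + action, n_states - 1)    # action 1 moves forward
        reward = 1.0 if next_state == n_states - 1 else 0.0
        q[state][action] += alpha * (reward + gamma * max(q[next_state]) - q[state][action])
        state = next_state
        if state == n_states - 1:
            break

print([round(max(row), 2) for row in q])                  # values grow toward the goal state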

READ MORE: Using machine learning to teach robots to get dressed [BoingBoing]

More on cutesy tech: You Can’t Make This Stuff Up: Amazon Warehouse Robots Slipped On Popcorn Butter

30 Oct 12:04

Espaciel: the reflector that boosts your rooms' natural light by 50%

by Claire L.

Whatever your home and lifestyle, and despite the revolution that was Thomas Edison's invention of the light bulb in 1879, you have to admit that nothing is more pleasant than natural daylight. Yet inventions that bring light straight from the sky into our rooms are still very recent advances.

There is of course Velux, which designed the first roof-integrated windows in the 1960s, or Solatube, the Californian company that in the 1990s came up with a tube running through the roof to bring natural light into rooms (no window required). But as with Velux, these installations are expensive and require a lot of construction work… until a French startup arrived in 2013 with a bright idea.

A reflector to increase your rooms' brightness by 50% ☀️

The idea behind Espaciel (that's its name) is simply to place light reflectors outside, in front of your windows, to amplify the daylight entering your home. To better understand how Espaciel's reflectors work, here is a short explainer video.

As you will have gathered, the genius of this invention is that it requires no construction work to set up. But these reflectors have other advantages beyond being quick to install on your own. Made from durable materials and aluminum, they turn out to be 30% more reflective than a mirror while being 5 times lighter. The end result? Your rooms get 50% more light; for comparison, it's as if they had moved up two floors!

These same reflectors work every day of the year because they use the brightness of the sky, not the sun, to improve the well-being of your interior; the effects of natural light on body and mind no longer need proving. They don't dazzle, they're unbreakable… and to top it all off, they're made in France!

A small revolution, then, for brightening the rooms of your apartment or house. The company has thought of every configuration: after starting with a reflector that attaches to the window, it developed other products, such as the balcony and garden reflectors which, mounted on a stand, can be placed wherever you like to make the most of your home's light. It's also a smart way to soak up as much light as possible after the switch to winter time, to boost your vitality and fight the winter blues.

To find out everything about Espaciel and order a reflector (between 79 and 259 euros), head to espaciel.com.

Window reflector

Credits: Espaciel

Balcony reflector

Credits: Espaciel

Terrace reflector

Credits: Espaciel

Designed by: Espaciel
Source: espaciel.com

This article, Espaciel: the reflector that boosts your rooms' natural light by 50%, comes from the Creapills blog, the go-to media outlet for creative ideas and marketing innovation.

30 Oct 10:27

Say ‘Hi’ to Nybble, an open-source robotic kitten

by John Biggs

If you’ve ever wanted to own your own open-source cat, this cute Indiegogo project might be for you. The project, based on something called the Open Cat, is a laser-cut cat that walks and “learns” and can even connect to a Raspberry Pi. Out of the box a complex motion controller allows the kitten to perform lifelike behaviors like balancing, walking and nuzzling.

“Nybble’s motion is driven by an Arduino compatible micro-controller. It stores instinctive ‘muscle memory’ to move around,” wrote its creator, Rongzhong Li. “An optional AI chip, such as Raspberry Pi can be mounted on top of Nybble’s back, to help Nybble with perception and decision. You can program in your favorite language, and direct Nybble walk around simply by sending short commands, such as ‘walk’ or ‘turn left.'”

The cat is surprisingly cute and the life-like movements make it look far more sophisticated than your average toy. You can get a single Nybble for $200 and the team aims to ship in April 2019. You can also just build your own cat for free if you have access to a laser cutter and a few other tools, but the kit itself includes a motion board and complete instructions, which makes the case for paying for a new Nybble pretty compelling. I, for one, welcome our robotic feline overlords.
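
As a hypothetical example of that "send short commands" workflow, here is a minimal Python sketch of a Raspberry Pi driving the cat over a serial link; the port, baud rate, and exact command strings are assumptions based on the quote above, not taken from the Nybble documentation.

import time
import serial  # pyserial

# A minimal sketch: the port name, baud rate, and command strings are assumed.
cat = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1)

def send(command: str) -> None:
    """Send one short command and give the motion controller time to act."""
    cat.write((command + "\n").encode("ascii"))
    time.sleep(2)

send("walk")
send("turn left")
send("walk")
cat.close()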

24 Oct 22:01

Facebook confirms it’s building augmented reality glasses

by Josh Constine

“Yeah! Well of course we’re working on it,” Facebook’s head of augmented reality Ficus Kirkpatrick told me when I asked him at TechCrunch’s AR/VR event in LA if Facebook was building AR glasses. “We are building hardware products. We’re going forward on this . . . We want to see those glasses come into reality, and I think we want to play our part in helping to bring them there.”

This is the clearest confirmation we’ve received yet from Facebook about its plans for AR glasses. The product could be Facebook’s opportunity to own a mainstream computing device on which its software could run after a decade of being beholden to smartphones built, controlled and taxed by Apple and Google.

This month, Facebook launched its first self-branded gadget out of its Building 8 lab, the Portal smart display, and now it’s revving up hardware efforts. For AR, Kirkpatrick told me, “We have no product to announce right now. But we have a lot of very talented people doing really, really compelling cutting-edge research that we hope plays a part in the future of headsets.”

There’s a war brewing here. AR startups like Magic Leap and Thalmic Labs are starting to release their first headsets and glasses. Microsoft is considered a leader thanks to its early HoloLens product, while Google Glass is still being developed for the enterprise. And Apple has acquired AR hardware developers like Akonia Holographics and Vrvana to accelerate development of its own headsets.

Mark Zuckerberg said at F8 2017 that AR glasses were 5 to 7 years away

Technological progress and competition seems to have sped up Facebook’s timetable. Back in April 2017, CEO Mark Zuckerberg said, “We all know where we want this to get eventually, we want glasses,” but explained that “we do not have the science or technology today to build the AR glasses that we want. We may in five years, or seven years.” He explained that “We can’t build the AR product that we want today, so building VR is the path to getting to those AR glasses.” The company’s Oculus division had talked extensively about the potential of AR glasses, yet similarly characterized them as far off.

But a few months later, a Facebook patent application for AR glasses was spotted by Business Insider that detailed using a "waveguide display with two-dimensional scanner" to project media onto the lenses. Cheddar's Alex Heath reports that Facebook is working on Project Sequoia, which uses projectors to display AR experiences on top of physical objects like a chess board on a table, or a person's likeness on something for teleconferencing. These indicate Facebook is moving past AR research.

Facebook AR glasses patent application

Last month, The Information spotted four Facebook job listings seeking engineers with experience building custom AR computer chips to join the Facebook Reality Lab (formerly known as Oculus research). And a week later, Oculus’ Chief Scientist Michael Abrash briefly mentioned amidst a half-hour technical keynote at the company’s VR conference that “No off the shelf display technology is good enough for AR, so we had no choice but to develop a new display system. And that system also has the potential to bring VR to a different level.”

But Kirkpatrick clarified that he sees Facebook’s AR efforts not just as a mixed reality feature of VR headsets. “I don’t think we converge to one single device . . . I don’t think we’re going to end up in a Ready Player One future where everyone is just hanging out in VR all the time,” he tells me. “I think we’re still going to have the lives that we have today where you stay at home and you have maybe an escapist, immersive experience or you use VR to transport yourself somewhere else. But I think those things like the people you connect with, the things you’re doing, the state of your apps and everything needs to be carried and portable on-the-go with you as well, and I think that’s going to look more like how we think about AR.”

Oculus Chief Scientist Michael Abrash makes predictions about the future of AR and VR at the Oculus Connect 5 conference

Oculus virtual reality headsets and Facebook augmented reality glasses could share an underlying software layer, though, which might speed up engineering efforts while making the interface more familiar for users. “I think that all this stuff will converge in some way maybe at the software level,” Kirkpatrick said.

The problem for Facebook AR is that it may run into the same privacy concerns that people had about putting a Portal camera inside their homes. While VR headsets generate a fictional world, AR must collect data about your real-world surroundings. That could raise fears about Facebook surveilling not just our homes but everything we do, and using that data to power ad targeting and content recommendations. This brand tax haunts Facebook’s every move.

Startups with a cleaner slate like Magic Leap and giants with a better track record on privacy like Apple could have an easier time getting users to put a camera on their heads. Facebook would likely need a best-in-class gadget that does much that others can’t in order to convince people it deserves to augment their reality.

You can watch our full interview with Facebook’s director of camera and head of augmented reality engineering Ficus Kirkpatrick from our TechCrunch Sessions: AR/VR event in LA:

17 Oct 18:54

Introducing GitHub Actions

by Sarah Drasner

It's a common situation: you create a site and it's ready to go. It's all on GitHub. But you're not really done. You need to set up deployment. You need to set up a process that runs your tests for you, so you're not manually running commands all the time. Ideally, every time you push to master, everything runs for you: the tests, the deployment... all in one place.

Previously, there were only a few options that could help with that. You could piece together other services, set them up, and integrate them with GitHub. You could also write post-commit hooks, which help too.

But now, enter GitHub Actions.

Actions are small bits of code that can be run off of various GitHub events, the most common of which is pushing to master. But it's not necessarily limited to that. They're all directly integrated with GitHub, meaning you no longer need a middleware service or have to write a solution yourself. And they already have many options for you to choose from. For example, you can publish straight to npm and deploy to a variety of cloud services (Azure, AWS, Google Cloud, Zeit... you name it), just to name a few.

But actions are more than deploy and publish. That’s what’s so cool about them. They’re containers all the way down, so you could quite literally do pretty much anything — the possibilities are endless! You could use them to minify and concatenate CSS and JavaScript, send you information when people create issues in your repo, and more... the sky's the limit.

You don't need to configure or create the containers yourself, either. Actions let you point to someone else's repo, an existing Dockerfile, or a path, and the action will behave accordingly. This opens up a whole new world of possibilities for open source and its ecosystems.

Setting up your first action

There are two ways you can set up an action: through the workflow GUI or by writing and committing the file by hand. We’ll start with the GUI because it’s so easy to understand, then move on to writing it by hand because that offers the most control.

First, we’ll sign up for the beta by clicking on the big blue button here. It might take a little bit for them to bring you into the beta, so hang tight.

A screenshot of the GitHub Actions beta site showing a large blue button to click to join the beta.
The GitHub Actions beta site.

Now let’s create a repo. I made a small demo repo with a tiny Node.js sample site. I can already notice that I have a new tab on my repo, called Actions:

A screenshot of the sample repo showing the Actions tab in the menu.

If I click on the Actions tab, this screen shows:


I click "Create a New Workflow," and then I’m shown the screen below. This tells me a few things. First, I’m creating a hidden folder called .github, and within it, I’m creating a file called main.workflow. If you were to create a workflow from scratch (which we’ll get into), you’d need to do the same.

new workflow

Now, we see in this GUI that we’re kicking off a new workflow. If we draw a line from this to our first action, a sidebar comes up with a ton of options.

show all of the action options in the sidebar

There are actions in here for npm, Filters, Google Cloud, Azure, Zeit, AWS, Docker Tags, Docker Registry, and Heroku. As mentioned earlier, you’re not limited to these options — it's capable of so much more!

I work for Azure, so I’ll use that as an example, but each action provides you with the same options, which we'll walk through together.

shows options for azure in the sidebar

At the top where you see the heading "GitHub Action for Azure," there’s a "View source" link. That will take you directly to the repo that's used to run this action. This is really nice because you can also submit a pull request to improve any of these, and have the flexibility to change what action you’re using if you’d like, with the "uses" option in the Actions panel.

Here's a rundown of the options we're provided:

  • Label: This is the name of the Action, as you’d assume. This name is referenced by the Workflow in the resolves array — that is what's creating the connection between them. This piece is abstracted away for you in the GUI, but you'll see in the next section that, if you're working in code, you'll need to keep the references the same to have the chaining work.
  • Runs allows you to override the entry point. This is great because if you’d like to run something like git in a container, you can!
  • Args: This is what you’d expect — it allows you to pass arguments to the container.
  • secrets and env: These are both really important because this is how you’ll use passwords and protect data without committing them directly to the repo. If you’re using something that needs one token to deploy, you’d probably use a secret here to pass that in.

Many of these actions have readmes that tell you what you need. The setup for "secrets" and "env" usually looks something like this:

action "deploy" {
  uses = ...
  secrets = [
    "THIS_IS_WHAT_YOU_NEED_TO_NAME_THE_SECRET",
  ]
}

You can also string multiple actions together in this GUI. It's very easy to make things work one action at a time, or in parallel. This means you can have nicely running async code simply by chaining things together in the interface.

Writing an action in code

So, what if none of the actions shown here are quite what we need? Luckily, writing actions is really pretty fun! I wrote an action to deploy a Node.js web app to Azure because that will let me deploy any time I push to the repo's master branch. This was super fun because now I can reuse it for the rest of my web apps. Happy Sarah!

Create the app services account

If you're using other services, this part will change, but you do need an existing service on whatever platform you're deploying to.

First you'll need to get your free Azure account. I like using the Azure CLI, so if you don’t already have that installed, you’d run:

brew update && brew install azure-cli

Then, we’ll log in to Azure by running:

az login

Now, we'll create a Service Principal by running:

az ad sp create-for-rbac --name ServicePrincipalName --password PASSWORD

It will pass us this bit of output, which we'll use in creating our action:

{
  "appId": "APP_ID",
  "displayName": "ServicePrincipalName",
  "name": "http://ServicePrincipalName",
  "password": ...,
  "tenant": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
}

What's in an action?

Here is a base example of a workflow and an action so that you can see the bones of what it’s made of:

workflow "Name of Workflow" {
  on = "push"
  resolves = ["deploy"]
}

action "deploy" {
  uses = "actions/someaction"
  secrets = [
    "TOKEN",
  ]
}

We can see that we kick off the workflow, and specify that we want it to run on push (on = "push"). There are many other options you can use as well, the full list is here.

The resolves line beneath it resolves = ["deploy"] is an array of the actions that will be chained following the workflow. This doesn't specify the order, but rather, is a full list of everything. You can see that we called the action following "deploy" — these strings need to match, that's how they are referencing one another.

Next, we'll look at that action block. The first uses line is really interesting: right out of the gate, you can use any of the predefined actions we talked about earlier (here's a list of all of them). But you can also use another person's repo, or even files hosted on the Docker site. For example, if we wanted to execute git inside a container, we would use this one. I could do so with: uses = "docker://alpine/git:latest". (Shout out to Matt Colyer for pointing me in the right direction for the URL.)

We may need some secrets or environment variables defined here and we would use them like this:

action "Deploy Webapp" {
  uses = ...
  args = "run some code here and use a $ENV_VARIABLE_NAME"
  secrets = ["SECRET_NAME"]
  env = {
    ENV_VARIABLE_NAME = "myEnvVariable"
  }
}

Creating a custom action

What we're going to do with our custom action is take the commands we usually run to deploy a web app to Azure, and write them in such a way that we can just pass in a few values, so that the action executes it all for us. The files look more complicated than they are; really, we're taking that first base Azure action you saw in the GUI and building on top of it.

In entrypoint.sh:

#!/bin/sh

set -e

echo "Login"
az login --service-principal --username "${SERVICE_PRINCIPAL}" --password "${SERVICE_PASS}" --tenant "${TENANT_ID}"

echo "Creating resource group ${APPID}-group"
az group create -n ${APPID}-group -l westcentralus

echo "Creating app service plan ${APPID}-plan"
az appservice plan create -g ${APPID}-group -n ${APPID}-plan --sku FREE

echo "Creating webapp ${APPID}"
az webapp create -g ${APPID}-group -p ${APPID}-plan -n ${APPID} --deployment-local-git

echo "Getting username/password for deployment"
DEPLOYUSER=`az webapp deployment list-publishing-profiles -n ${APPID} -g ${APPID}-group --query '[0].userName' -o tsv`
DEPLOYPASS=`az webapp deployment list-publishing-profiles -n ${APPID} -g ${APPID}-group --query '[0].userPWD' -o tsv`

git remote add azure https://${DEPLOYUSER}:${DEPLOYPASS}@${APPID}.scm.azurewebsites.net/${APPID}.git

git push azure master

A couple of interesting things to note about this file:

  • set -e in a shell script will make sure that if anything blows up the rest of the file doesn't keep evaluating.
  • The lines following "Getting username/password" look a little tricky — really what they're doing is extracting the username and password from Azure's publishing profiles. We can then use it for the following line of code where we add the remote.
  • You might also note that in those lines we passed in -o tsv; this is something we did to format the output so we could pass it directly into an environment variable, as tsv strips out excess headers, etc.

Now we can work on our main.workflow file!

workflow "New workflow" {
  on = "push"
  resolves = ["Deploy to Azure"]
}

action "Deploy to Azure" {
  uses = "./.github/azdeploy"
  secrets = ["SERVICE_PASS"]
  env = {
    SERVICE_PRINCIPAL="http://sdrasApp",
    TENANT_ID="72f988bf-86f1-41af-91ab-2d7cd011db47",
    APPID="sdrasMoonshine"
  }
}

The workflow piece should look familiar to you — it's kicking off on push and resolves to the action, called "Deploy to Azure."

uses is pointing to a path within the directory, which is where we housed the other file. We need to add a secret so we can store our password for the app. We called this SERVICE_PASS, and we'll configure it by going here and adding it, in settings:

adding a secret in settings

Finally, we have all of the environment variables we'll need to run the commands. We got all of these from the earlier section where we created our App Services Account. The tenant from earlier becomes TENANT_ID, name becomes the SERVICE_PRINCIPAL, and the APPID is actually whatever you'd like to name it :)

You can use this action too! All of the code is open source at this repo. Just bear in mind that since we created the main.workflow manually, you will have to also edit the env variables manually within the main.workflow file; once you stop using the GUI, it doesn't work the same way anymore.

Here you can see everything deploying nicely, turning green, and we have our wonderful "Hello World" app that redeploys whenever we push to master 🎉

successful workflow showing green
Hello World app screenshot

Game changing

GitHub Actions aren't only about websites, though you can see how handy they are for them. It's a whole new way of thinking about how we deal with infrastructure, events, and even hosting. Consider Docker in this model.

Normally when you create a Dockerfile, you would have to write the Dockerfile, use Docker to build the image, and then push the image up somewhere so that it’s hosted for other people to download. In this paradigm, you can point it at a git repo with an existing Docker file in it, or something that's hosted on Docker directly.

You also don't need to host the image anywhere as GitHub will build it for you on the fly. This keeps everything within the GitHub ecosystem, which is huge for open source, and allows for forking and sharing so much more readily. You can also put the Dockerfile directly in your action which means you don’t have to maintain a separate repo for those Dockerfiles.

All in all, it's pretty exciting. Partially because of the flexibility: on the one hand you can choose to have a lot of abstraction and create the workflow you need with a GUI and existing action, and on the other you can write the code yourself, building and fine-tuning anything you want within a container, and even chain multiple reusable custom actions together. All in the same place you're hosting your code.

The post Introducing GitHub Actions appeared first on CSS-Tricks.