Gorgeous tributes to the original Star Wars trilogy.
By artist Matt Ferguson.
Nice clip about Dan Harmon's variation on Joseph Campbell's hero's journey, which he condenses into a circular storytelling theory:
Snip from the AV Club:
The theory boils down to two sentences: 1) A character is in a zone of comfort, 2) but they want something. 3) They enter an unfamiliar situation, 4) adapt to it, 5) get what they wanted, 6) pay a heavy price for it, 7) then return to their familiar situation, 8) having changed. Harmon plots those eight points along the quadrants of a circle, but then overlays that circle with great dualities like life and death, consciousness and unconsciousness, and order and chaos, and finds within that very literal geometry storytelling needs, like internal and external conflict.
On the other hand, without adventurers, the entire brigand economy would collapse.
“Space Seed” by Martin Ansin.
“The capacity to be alone is the capacity to love. It may look paradoxical to you, but it’s not. It is an existential truth: only those people who are capable of being alone are capable of love, of sharing, of going into the deepest core of another person–without possessing the other, without becoming dependent on the other, without reducing the other to a thing, and without becoming addicted to the other. They allow the other absolute freedom, because they know that if the other leaves, they will be as happy as they are now. Their happiness cannot be taken by the other, because it is not given by the other.”
When someone says that they don’t have time to be proactive because they are too busy fighting fires. (HT @SQLSOldier)
Neural-network fuzzies at the Skolkovo Institute of Science and Technology have trained an artificial intelligence to correct gaze directions in images: DeepWarp: Photorealistic Image Resynthesis for Gaze Manipulation. It's reminiscent of Tom White's Smile Vector and, just like his work, a (for now) very elaborate gimmick that (for now) is easier to pull off in Photoshop and that (for now) only works on stills. (via CreativeAI)
In this work, we consider the task of generating highly-realistic images of a given face with a redirected gaze. We treat this problem as a specific instance of conditional image generation, and suggest a new deep architecture that can handle this task very well as revealed by numerical comparison with prior art and a user study. Our deep architecture performs coarse-to-fine warping with an additional intensity correction of individual pixels.
All these operations are performed in a feed-forward manner, and the parameters associated with different operations are learned jointly in the end-to-end fashion. After learning, the resulting neural network can synthesize images with manipulated gaze, while the redirection angle can be selected arbitrarily from a certain range and provided as an input to the network.
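The warping step at the heart of that architecture can be illustrated with a tiny, self-contained sketch. This is plain NumPy, not the paper's actual code: the flow field here is hand-made, whereas DeepWarp predicts it with a network (coarse-to-fine) and additionally applies a per-pixel intensity correction on top of the resampled image.

```python
import numpy as np

def bilinear_warp(image, flow):
    """Resample `image` at positions displaced by `flow` (H x W x 2, in pixels).

    A network-predicted flow plus differentiable resampling like this is the
    basic warping module in gaze-redirection architectures such as DeepWarp.
    """
    h, w = image.shape[:2]
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Source sampling coordinates (float), clipped to the image borders.
    sx = np.clip(xs + flow[..., 0], 0, w - 1)
    sy = np.clip(ys + flow[..., 1], 0, h - 1)
    x0, y0 = np.floor(sx).astype(int), np.floor(sy).astype(int)
    x1, y1 = np.minimum(x0 + 1, w - 1), np.minimum(y0 + 1, h - 1)
    wx, wy = sx - x0, sy - y0
    # Bilinear interpolation of the four neighbouring pixels.
    top = image[y0, x0] * (1 - wx) + image[y0, x1] * wx
    bot = image[y1, x0] * (1 - wx) + image[y1, x1] * wx
    return top * (1 - wy) + bot * wy

# Toy example: an 8x8 horizontal gradient, with a constant flow that
# samples one pixel to the right -- so the content shifts one pixel left,
# the way an eye region would be nudged toward a new gaze direction.
img = np.tile(np.arange(8, dtype=float), (8, 1))
flow = np.zeros((8, 8, 2))
flow[..., 0] = 1.0
warped = bilinear_warp(img, flow)
```

Everything here is index arithmetic and interpolation, so the whole operation stays feed-forward and differentiable with respect to the flow, which is what lets the real system learn its parameters end to end.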
[update] Haha, shortly after this posting went up, this slid through my timeline: facial-expression and gaze-direction tracking for VR.
Since there are no datasets for gaze directions, they simply clamped a few test subjects into a Clockwork Orange contraption for their work and had them stare at a moving dot:
There are no publicly available datasets suitable for the purpose of the gaze correction task with continuously varying redirection angle. Therefore, we collect our own dataset (Figure 4). To minimize head movement, a person places her head on a special stand and follows with her gaze a moving point on the screen in front of the stand. While the point is moving, we record several images with eyes looking in different fixed directions (about 200 for one video sequence) using a webcam mounted in the middle of the screen. For each person we record 2–10 sequences, changing the head pose and light conditions between different sequences. Training pairs are collected by taking two images with different gaze directions from one sequence. We manually exclude bad shots, where a person is blinking or where she is not changing gaze direction monotonically as anticipated. Most of the experiments were done on the dataset of 33 persons and 98 sequences.
Here's a non-algorithmic alternative with Kinski and Nosferatu from the great Ensalada Tumblr:
Music tips for working:
Soundtrack of Oblivion (that Tom Cruise movie from a few years ago)
Soundtrack of Deus Ex: Human Revolution (2011 video game)
Beyond that, I also like listening to the soundtracks of the Mass Effect series, also very nice. The soundtrack of the TV series Dark Matter also sounded surprisingly close to Mass Effect to me. The fact that these days you can find video game soundtracks on YouTube and no longer have to hack them out of the game files yourself has genuinely raised my quality of life.
You can't make this shit up: Google patent: Glue would stick pedestrian to self-driving car after collision. (via NewAesthetics)
In a world with self-driving cars, Google envisions the inevitable: accidents involving pedestrians. But the firm is exploring an unusual solution. Think flypaper. The company received a patent Tuesday describing a way to reduce pedestrian injuries in an accident with a robotic vehicle. The impact of the crash, Google suggests, would expose a coating that glues the person to the front of the car.
"The adhesive layer may be a very sticky material and operate in a manner similar to flypaper, or double-sided duct tape," the patent said.
Drink up and be somebody: after one, two, and three glasses of wine.
You’re welcome, time travelers of the future.