Shared posts

01 Jun 05:14

Render Bender | YouTube Dance FeudSecond episode from Super...

Render Bender | YouTube Dance Feud

Second episode from Super Deluxe of their frantic 3D art compilation show features examples employing motion captured human motion (with a side portion of YouTube drama):

This week on Render Bender: A celebrity YouTuber goes down in flames.

Animated by Sam Rolfes & Andy Rolfes
Co-Directed by Tim Saccenti
Edited by Matt Posey
Starring Jessica Timlin

Music by:
Shit Robot:

Featured artists:
Andrew Benson
Kevin RamserAndrew Thomas Huang
Esteban DiáconoAntoni TudiscoHelmut Breineder
Oliver Latta


25 May 02:34

When Algorithms Surprise Us[Above] Why walk when you can flop?...

When Algorithms Surprise Us

[Above] Why walk when you can flop? In one example, a simulated robot was supposed to evolve to travel as quickly as possible. But rather than evolve legs, it simply assembled itself into a tall tower, then fell over. Some of these robots even learned to turn their falling motion into a somersault, adding extra distance. […]

Shooting the moon: In one of the more chilling examples, there was an algorithm that was supposed to figure out how to apply a minimum force to a plane landing on an aircraft carrier. Instead, it discovered that if it applied a *huge* force, it would overflow the program’s memory and would register instead as a very *small* force. The pilot would die but, hey, perfect score.

Destructive problem-solving

Something as apparently benign as a list-sorting algorithm could also solve problems in rather innocently sinister ways.

Well, it’s not unsorted: For example, there was an algorithm that was supposed to sort a list of numbers. Instead, it learned to delete the list, so that it was no longer technically unsorted.

Solving the Kobayashi Maru test: Another algorithm was supposed to minimize the difference between its own answers and the correct answers. It found where the answers were stored and deleted them, so it would get a perfect score.

10 May 06:47

Progress is Inevitable

by Dorothy

So much doom :T


10 May 06:22




07 May 22:56

On and On

by Dorothy

I’ll take many


03 May 03:59

A game to better understand the wisdom (and madness) of crowds

by Nathan Yau

This is really worth the time!

You’ve probably heard of the wisdom of crowds. The general idea, popularized by James Surowiecki’s book, is that a large group of non-experts can solve problems collectively better than a single expert. As you can imagine, there are a lot of subtleties and complexities to this idea. Nicky Case helps you understand with a game.

Draw networks, run simulations, and learn in the process. The game takes about a half an hour, so set aside some time to play it through.

Tags: crowds, game, groupthink, Nicky Case

01 May 22:57

Cross-cultural digital instruction

by Tyler Cowen

Well, that’s interesting.

Comparative ethnographic analysis of three middle schools that vary by student class and race reveals that students’ similar digital skills are differently transformed by teachers into cultural capital for achievement. Teachers effectively discipline students’ digital play but in different ways. At a school serving working-class Latino youth, students are told their digital expressions are irrelevant to learning; at a school with mostly middle-class Asian American youth, students’ digital expressions are seen as threats to their ability to succeed academically; and at a private school with mainly wealthy white youth, students’ digital skills are positioned as essential to school success. Insofar as digital competency represents a kind of cultural capital, the minority and working-class students also have that capital. But theirs is not translated into teacher-supported opportunities for achievement.

Here is the AJS piece, by Matthew H. Rafalow.  For the pointer I thank Kevin Lewis.

The post Cross-cultural digital instruction appeared first on Marginal REVOLUTION.

26 Apr 15:44

Here’s what you get when you cross dinosaurs and flowers with deep learning

by Nathan Yau


Neural networks have shown usefulness with a number of things, but here is an especially practical use case. Chris Rodley used neural networks to create a hybrid of a dinosaur book and a flower book. The world may never be the same again.

Tags: dinosaurs, flowers, neural network

18 Apr 05:30



Almost all of my thoughts on jewellery, summarised

17 Apr 18:59

A cluster of tetrapods is seen near the High Island Reservoir in...



A cluster of tetrapods is seen near the High Island Reservoir in Hong Kong. These concrete structures are used to reinforce shoreline defenses and prevent coastal erosion by breaking up incoming waves. The specific shape of the tetrapod allows water to flow around rather than against the concrete and reduces displacement by interlocking.


22°21'41.0"N, 114°22'31.5"E

Source imagery: A Sense of Huber

16 Apr 15:13

It’s All in Your Head: 17 Optical Illusions Prove You Can’t Trust Your Brain

by SA Rogers
[ By SA Rogers in Art & Drawing & Digital. ]

If there’s one thing all these illusions can show you, it’s that you can’t trust your own brain. It’ll blot out peripheral colors right before your eyes to help you focus on a fixed point, turn static images into animations and completely fool you about the colors and shapes you’re looking at.

Troxler’s Effect: Colors Disappear When You Stare

Focus on the center of this image, and before long, the colors will start to disappear, revealing either a blank white space or a gray square. Blink, and the colors are back again.

Try it again with the second image: stare at the black cross in the center. Eventually, the pink dots will disappear altogether and only the green moving circle will remain. Troxler’s effect illustrates the efficient way in which our brains ignore peripheral information around a fixation point.

What Color Are These Hearts?

What color are these hearts? That probably sounds like a stupid question – obviously, some of them are orange and some are purple, right? But when the bars around them are removed, their true color will be revealed. Watch the video to see how it works.

None of These Lines are Actually Zig-Zags

This image appears to show alternating lines of curves and zig-zags. In reality, the lines are exactly the same, but the way the light and dark elements are placed accentuate either the curves of the angles to our brains, making them seem like different shapes altogether. The illusion was created by Kohske Takahashi of Chukyo University in Japan and published in the journal i-Perception.

“In the present study, the change of luminance contrast resulted in the percepts of segmentation, which would disrupt the curvature detection. We propose that the underlying mechanisms for the gentle curve perception and those of obtuse corner perception are competing with each other in an imbalanced way and the percepts of corner might be dominant in the visual system.”

White’s Illusion: There’s Only One Shade of Grey Here

The way our eyes adjust to brightness make the very same shade of gray appear either darker or lighter depending on whether it’s surrounded by black or white, as shown in this image. This is known as White’s Illusion, while the Munker-White Illusion is a similar phenomenon that occurs with alternating bars in different colors.

Next Page - Click Below to Read More:
Its All In Your Head 17 Optical Illusions Prove You Cant Trust Your Brain

Painted People: 31 Works of Art on Human Canvas

Human bodies become exotic animals and crashed cars, or blend almost seamlessly into intricate backgrounds, with careful application of body paint and a bit of acrobatics. These 31 works of art ...

Trick of the Light: 12 Shining Works of Holographic Art

Holograms suggest a depth and dimension that isn't really there, an illusion of matter in space that can range from a low-tech flickering image in a children's book to ghostly computer-generated ...

Surreal Snippets: Gifs Elevated to Mind-Bending Art Form

Gifs are taking over the internet, and that's not always a good thing. Entire conversations are carried out in the form of animated images, replacing words with snippets of out-of-context pop ...

Share on Facebook

[ By SA Rogers in Art & Drawing & Digital. ]

[ WebUrbanist | Archives | Galleries | Privacy | TOS ]

05 Apr 04:10

Slow Machinery.Here in HD

Slow Machinery.

Here in HD

18 Mar 02:39

Epic eye-roll

by Victor Mair

This is really interesting.

Everybody's talking about the eye-roll of the century, the eye-roll that has gone wildly viral in China.  It's undoubtedly the most exciting thing that happened at the Two Sessions of the National People's Congress (NPC) that began on March 5 and will most likely end soon.  It was a foregone conclusion that President Xi Jinping would be crowned de facto Emperor for Life and that his "thought" would be enshrined in the constitution.  What was not expected was a brief but epochal roll of the eyes on the part of one female reporter, Liang Xiangyi 梁相宜 (dressed in blue — I'll call her Ms. Blue or [Ms.] Liang), when another female reporter, Zhang Huijun 张慧君 (dressed in red — I'll call her Ms. Red or [Ms.] Zhang), went on too long and too effusively with her fawning question to a high-ranking CCP official.

You see, everything at the NPC is supposed to be scripted and orchestrated.  There aren't supposed to be any surprises.  Yet, as you can see for yourself, Ms. Liang could not hide her true emotions, which are painfully evident at 0:36 in the following 0:44 video — with an increasingly dramatic buildup to the moment of her monumental recoil:

So what was Zhang Huijun saying that caused Liang Xiangyi to roll her eyes with revulsion?  Here's a transcription and translation of Zhang's logorrheic question (I've bolded the crucial part):

Wǒ shì Měiguó quánměi diànshì[tái] zhíxíng tái zhǎng Zhāng Huìjūn. Wǒ de wèntí shì:  yǐ guǎn zīběn wéi zhǔ zhuǎnbiàn, guózī jiānguǎn zhínéng shì dāngxià dàjiā dōu pǔbiàn guānzhù de yīgè huàtí. Nàme zuòwéi guózī wěi zhǔrèn, 2018 nián zài zhè yī lǐngyù nín jiāng huì tuīchū nǎxiē xīn de jǔcuò? Jīnnián zhèng zhí gǎigé kāifàng 40 zhōunián, wǒmen guójiā yào jìnyībù kuòdà duìwài kāifàng. Xí zǒng shūjì chàngdǎo 'Yīdài Yīlù' de chàngyì, guóqǐ duì 'Yīdài Yīlù' yánxiàn guójiā de tóuzī lìdù jiā dà. Nàme guóyǒu qǐyè dì hǎiwài zīchǎn jiàng rúhé dédào yǒuxiào de jiānguǎn yǐ fángzhǐ guóyǒu qǐyè zīběn de liúshī? Wǒmen tuīchūle nǎxiē jiānguǎn jīzhì? Jiānguǎn de xiàoguǒ yòu rúhé? Qǐng nín wèi dàjiā zuò yīxià jièshào. Xièxiè.


I am Zhang Huijun, executive director of American Multimedia Television USA (AMTV). My question is this:  in the changeover where control of assets / capital is the priority, the supervision of state-owned assets function is immediately a topic that everyone is generally concerned about. So, as the Director of the State-owned Assets Supervision and Administration Commission of the State Council (SASAC), what new measures will you introduce in this sphere during 2018? This year coincides with the 40th anniversary of reform and opening up. Our country must further expand its opening to the outside world. General Secretary Xi advocated the One Belt One Road initiative, and state-owned enterprises invested more heavily in countries along the Belt and Road. So, how can the overseas assets of state-owned enterprises be effectively supervised to prevent the loss of state-owned enterprise capital? What regulatory mechanisms have we introduced? What about the effect of supervision? Please give us a brief introduction. Thank you.

The phrase that triggered Ms. Blue's sensational eye-roll leading to a head-roll is marked in bold.  Is it a particularly cringe-worthy phrase, or was Ms. Blue's double roll simply a response to the overall length and banality of Ms. Red's question?

Here's my analysis.  I don't think that "the loss of state-owned enterprise capital" was particularly egregious in and of itself.  The immediate trigger that set off Ms. Blue's explosive eye-roll cum head-roll was the sheer ostentatiousness and repetitiousness of "the loss of state-owned enterprise capital" coupled with "how can the overseas assets of state-owned enterprises be effectively supervised" in the first half of the same sentence.  More than that, though, what primed Ms. Blue for her spectacular eye-cum-head-roll was the cumulative effect of so much pretentious, yet somehow empty, verbiage on the part of Ms. Red right from the start.

You can see that Ms. Blue was uncomfortable almost from the very beginning of Ms. Red's long question.  Ms. Red was a known, and not well-liked, quantity before this incendiary encounter at the NPC.  Indeed, she used to work for China Central Television (CCTV) and was fond of boasting of her stellar appearance and overall quality. She calls herself "Liǎnghuì qìzhí jiě 两会气质姐" ("Miss Elegant of the Two Sessions").

Ms. Blue works (probably "worked" as of two days ago) for Shànghǎi dì yī cáijīng 上海第一财经 (China Business Network).  In the Chinese Wikipedia article about CBN, Ms. Liang is listed as a famous reporter for the network.  Ms. Liang was fully aware that Ms. Red was a fake foreigner in the employ of the Chinese government and that her softball questions were worthless for stimulating useful discussion with the official to whom they were addressed.  Ms. Zhang's allegiance is obvious from her reference to "our country".  Moreover, Ms. Liang clearly felt extremely uncomfortable with the unprofessional, zuòzuo 做作 ("affected") manner of Ms. Zhang.

Here are some observations about Ms. Red's long-winded question from mainland graduate students:

I cannot understand it if I only listen to it once.

The first sentence she said was hard to hear clearly since she paused at a weird place. I had to listen to it several times.  [VHM:  I also had to listen to that sentence many times before I could begin to make any sense of it, and I'm still not sure that I fully grasp what she was trying to say.  Not to mention that most of the time she speaks quickly, as though rattling off a prepared, mostly memorized statement.]

I can understand, but she seems quite verbose. The punctuation is a little strange because she often stops in the middle of a sentence.

Zhang's over-articulation and over-composure may well have betrayed the possibility that she is reciting a prepared text. Given her fake position, I think she is in fact a “tuōr 托儿" ("shill; plant; stooge; tool" — one syllable Pekingese word) of the Chinese government.

I have watched this video before. To be honest, I have no idea what Miss Red wants to say in the first and second time.

Incidentally, Ms. Blue and the students from the mainland who find Ms. Red's obsequious babbling to be less than readily comprehensible are not alone.  The gentleman whose head is sandwiched between Red and Blue seems to be concentrating very hard to make sense of what Ms. Red is saying, though some wags say he is squinting at her cleavage.

This (Liang Xiangyi's unforgettable demonstration of reflexive disgust) may not be "the face that launched a thousand ships", but it might just be "the eye-roll that toppled a communist state".

Media reports

"A Reporter Rolled Her Eyes, and China’s Internet Broke", by Paul Mozur, NYT (3/13/18)

"In China, a reporter’s dramatic eye-roll went viral. Then searches of it were censored."  By Simon Denyer, WaPo (3/13/18)

"One Woman Rolls Her Eyes and Captivates a Nation:  The big take-away from the National People’s Congress in China: a disdainful look that flashed through the internet", formerly titled:  "As far as eye-rolls go, this one may be a ‘10’", by Te-Ping Chen and Chun Han Wong, WSJ (3/14/18), by Tom Phillips (3/14/18)

"The Eye Roll That Upstaged Xi Jinping", by Rob Schmitz, CNN (3/14/18)

"Chinese reporter's spectacular eye-roll sparks viral memes and censorship:  Liang Xiangyi showed theatrical disdain for a colleague’s soft-ball question to a minister at a press conference", by Tom Phillips, The Guardian (3/14/18)

"Reporter's viral eye roll causes trouble with Chinese censors ", by Steven Jiang, CNN (3/15/18)

"Minitrue: Do Not Hype Two Sessions Reporter’s Eyeroll", by Samuel Wade, China Digital Times (3/13/18): The following censorship instructions, issued to the media by government authorities, have been leaked and distributed online. The name of the issuing body has been omitted to protect the source.

Urgent notice: all media personnel are prohibited from discussing the Two Sessions blue-clothed reporter incident on social media. Anything already posted must be deleted. Without exception, websites must not hype the episode. (March 13) [Chinese]

[h.t. Ben Zimmer and John Lagerwey; thanks to Mark Liberman, Jinyi Cai, Zeyao Wu, Fangyi Cheng, and Jing Wen]


18 Mar 01:20

Walmart just filed a patent for robot bees amid ongoing battle with Amazon

by Stephen Johnson

Sign of the [end] times

Amid an ongoing battle over the retail and grocery delivery market, Walmart has filed a patent for robotic bees that would pollinate crops just like the real insects.

Read More
18 Mar 01:18

The Entry for WebMD On WebMD


Haha. I’ve caught this a few times...

WebMD Overview
WebMD is a common disease in North America that is transmitted through irrational fear. It can cause anywhere between 1 to 9,803,493 symptoms in three or four mouse clicks. All WebMD infections in both adults and children result in the medically coined “symptom orgy,” otherwise best described as “symptoms on top of symptoms on top of symptoms.”

How Do I Know if I Have WebMD?
WebMD is easy to diagnose because:

  • You’ll be 100% convinced you suffer from 100% of the cancers
  • You believe that a tickle in your throat is a clear sign of death
  • Symptoms seem more likely the harder they are to pronounce
  • You just knew it this whole time that you were dying on the inside. You just did.

WebMD symptoms in literate humans:

  • Abnormal but somehow totally rational paranoia
  • Sudden onset of exact pain/symptom you happen to be reading about
  • Subscribing to the WebMD newsletter
  • Realizing you still have so much of the world to see
  • Realizing you have to see the world before you lose your eyesight
  • Watching Bill Murray in What About Bob? and thinking, “wow, what a well-adjusted, rational character”

How is WebMD Diagnosed?
There are a few different tests you can use to diagnose WebMD. The most proven method is to log on to your browser, click the history tab, and if under RECENTLY CLOSED and RECENTLY VISITED are nothing but WebMD pages, except for one that says “How to Write Your Own Will,” you suffer from WebMD. Other signs of severe WebMD include unexpectedly high phone bills from calling friends in a panic; loss of said friends; vast knowledge of euthanasia laws; and a rabid interest in suing the author of the phrase, “An apple a day keeps the doctor away.”

How is WebMD treated?
An apple a day… kidding. You’re doomed.

What Happens If I Don’t Get My WebMD Treated?
You pray to God for the first time ever and promise that you’ll be a better person if he or she lets this one slide, when deep down you know you’re totally lying. But then you realize the odds of him or her knowing that you’re lying are pretty good since it is fucking God after all, and oh my god this is how it ends!

How do I prevent WebMD?
You can’t. Keep scrolling and browsing, and over time, you will literally be sucked in through your screen and thus become part of After all, that’s exactly what all those images are on its website; they used to be actual people, but they have been turned into testimonial images for each and every one of the entries. So don’t bother fighting it. This is your fate.

13 Mar 15:05

Gary Turner on Twitter: “Not shitting you, but I actually...



Gary Turner on Twitter: “Not shitting you, but I actually just rescued a robot in distress on the street after someone had tipped it over. This is one of those weird dreams, right.…”

10 Mar 04:20

SkyKnit: How an AI Took Over an Adult Knitting Community

by Alexis C. Madrigal

This kind of makes me want to take up knitting?

Janelle Shane is a humorist who creates and mines her material from neural networks, the form of machine learning that has come to dominate the field of artificial intelligence over the last half-decade.

Perhaps you’ve seen the candy-heart slogans she generated for Valentine’s Day: DEAR ME, MY MY, LOVE BOT, CUTE KISS, MY BEAR, and LOVE BUN.

Or her new paint-color names: Parp Green, Shy Bather, Farty Red, and Bull Cream.

Or her neural-net-generated Halloween costumes: Punk Tree, Disco Monster, Spartan Gandalf, Starfleet Shark, and A Masked Box.

Her latest project, still ongoing, pushes the joke into a new, physical realm. Prodded by a knitter on the knitting forum Ravelry, Shane trained a type of neural network on a series of over 500 sets of knitting instructions. Then, she generated new instructions, which members of the Ravelry community have actually attempted to knit.

“The knitting project has been a particularly fun one so far just because it ended up being a dialogue between this computer program and these knitters that went over my head in a lot of ways,” Shane told me. “The computer would spit out a whole bunch of instructions that I couldn’t read and the knitters would say, this is the funniest thing I’ve ever read.”

The human-machine collaboration created configurations of yarn that you probably wouldn’t give to your in-laws for Christmas, but they were interesting. The user citikas was the first to post a try at one of the earliest patterns, “reverss shawl.” It was strange, but it did have some charisma.

Shane nicknamed the whole effort “Project Hilarious Disaster.” The community called it SkyKnit.

The first yarn product of SkyKnit, by the Ravelry user citikas (Ravelry / citikas)

The idea of using neural networks to do computer things has been around for decades. But it took until the last 10 years or so for the right mix of techniques, data sets, chips, and computing power to transform neural networks into deployable technical tools. There are many different kinds suited to different sorts of tasks. Some translate between different languages for Google. Others automatically label pictures. Still others are part of what powers Facebook’s News Feed software. In the tech world, they are now everywhere.

The different networks all attempt to model the data they’ve been fed by tuning a vast, funky flowchart. After you’ve created a statistical model that describes your real data, you can also roll the dice and generate new, never-before-seen data of the same kind.

How this works—like, the math behind it—is very hard to visualize because values inside the model can have hundreds of dimensions and we are humble three-dimensional creatures moving through time. But as the neural-network enthusiast Robin Sloan puts it, “So what? It turns out imaginary spaces are useful even if you can’t, in fact, imagine them.”

Out of that ferment, a new kind of art has emerged. Its practitioners use neural networks not to attain practical results, but to see what’s lurking in the these vast, opaque systems. What did the machines learn about the world as they attempted to understand the data they’d been fed? Famously, Google released DeepDream, which produced trippy visualizations that also demonstrated how that type of neural network processed the textures and objects in its source imagery.

Google’s David Ha has been working with drawings. Sloan is working with sentences. Allison Parrish makes poetry. Ross Goodwin has tried several writerly forms.

But all these experiments are happening inside the symbolic space of the computer. In that world, a letter is just a defined character. It is not dark ink on white paper in a certain shape. A picture is an arrangement of pixels, not oil on canvas.

And that’s what makes the knitting project so fascinating. The outputs of the software had to be rendered in yarn.

Knitting instructions are a bit like code. There are standard maneuvers, repetitive components, and lots of calculation involved. “My husband says knitting is just maths. It’s maths done with string and sticks. You have this many stitches,” said the Ravelry user Woolbeast in the thread about the project. “You do these things in these places that many times, and you have a design, or a shape.”

In practice, knitting patterns contain a lot of abbreviations like k and p, for knit and purl (the two standard types of stitches), st for stitches, yo for yarn over, or sl1 for “slip one stitch purl-wise.” The patterns tend to take a form like this:

row 1: sl1, kfb, k1 (4 sts) o
row 2: sl1, kfb, k to end of row (5 sts)

The neural network knows nothing of how these letters correspond to words like knit or the actual real-world action of knitting. It is just taking the literal text of patterns, and using them as strings of characters in its model of the data. Then, it’s spitting out new strings of characters, which are the patterns people tried to knit.

The project began on December 13 of last year, when a Ravelry user, JohannaB, suggested to Shane that her neural net could be taught to write knitting patterns. The community responded enthusiastically, like the user agadbois, who proclaimed, “I will absolutely teach a computer to knit!!! Or at least help one design a scarf (or whatever godforsaken mangled bit of fabric will come out of this).”

Over the next couple of weeks, they crept toward a data set they could use to build the model. First, they were able to access a fairly standardized set of patterns from, a service run by the knitter J. C. Briar.

Then, Shane began to add submissions crowdsourced from Ravelry’s users. The latter data was messy and filled with oddities and even some NSFW knitted objects. When I expressed surprise at the ribaldry evident in the thread (Knitters! Who knew?), one Ravelry user wanted it noted that the particular forum on which the discussion took place (LSG) has a special role on the site. “LSG (lazy, stupid, and godless) is an 18+ group designed to be swearing-friendly,” the user LTHook told me. “The main forums are family-friendly, and the database tags mature patterns so people can tailor their browsing.”

Thus, the neural network was being fed all kinds of things from this particular LSG community. “A few notable new additions: Opus the Octopus, Dice Bag of Doom, Doctor Who TARDIS Dishcloth, and something merely called ‘The Impaler,’” Shane wrote on the forum. “The number of patterns with tentacles is now alarmingly high,” she said in another post.

When they hit 500 entries, Shane began training the neural network, and slowly feeding some of the new patterns back to the group. The instructions contained some text and some descriptions of rows that looked like actual patterns.

For example, here’s the first 4 rows from one set of instructions that the neural net generated and named “fishcock.”


row 1 (rs): *k3, k2tog, [yo] twice, ssk, repeat from * to last st, k1.
row 2: p1, *p2tog, yo, p2, repeat from * to last st, k1.
row 3: *[p1, k1] twice, repeat from * to last st, p1.
row 4: *p2, k1, p3, k1, repeat from * to last 2 sts, p2.

The network was able to deduce the concept of numbered rows, solely from the texts basically being composed of rows. The system was able to produce patterns that were just on the edge of knittability. But they required substantial “debugging,” as Shane put it.

One user, bevbh, described some of the errors as like “code that won’t compile.” For example, bevbh gave this scenario: “If you are knitting along and have 30 stitches in the row and the next row only gives you instructions for 25 stitches, you have to improvise what to do with your remaining five stitches.”

But many of the instructions that were generated were flawed in complicated ways. They required the test knitters to apply a lot of human skill and intelligence. For example, here is the user BellaG, narrating her interpretation of the fishcock instructions, which I would say is just on the edge of understandability, if you’re not a knitter:

“There’s not a number of stitches that will work for all rows, so I started with 15 (the repeat done twice, plus the end stitch). Rows two, four, five, and seven didn’t have enough stitches, so I just worked the pattern until I got to the end stitch and worked that as written,” she posted to the forum. “Double yarn-overs can’t be just knit or just purled on the recovery rows; you have to knit one and purl the other, so I did that when I got to the double yarn-overs on rows two and six.

The SkyKnit design “fishcock” as interpreted by the Ravelry user BellaG (Ravelry / BellaG)

This kind of “fixing” of the pattern is not unique to the neural-network-generated designs. It is merely an extreme version of a process that knitters have to follow for many kinds of patterns. “My wrestling with the [SkyKnit-generated] ‘tiny baby whale Soto’ pattern was different from other patterns not so much in what needed to be done, as the degree to which I needed to interpret and ‘read between the lines’ to fit it together,” the user GloriaHanlon told me.

An attempt to knit the pattern “tiny baby whale Soto” by the user GloriaHanlon (Ravelry / gloriahanlon)

Historically, knitting patterns have varied in the degree of detail they provided. New patterns are a little more foolproof. Old patterns do not suffer fools. “I agree that an analogy with 19th-century knitting patterns is quite fitting,” the user bevbh said. “Those patterns were often cryptic by our standards. Interpretation was expected.”

But a core problem in knitting the neural-network designs is that there was no actual intent behind the instructions. And that intent is a major part of how knitters come to understand a given pattern.

“When you start a knitting pattern, you know what it is that you’re trying to make (sock, sweater, blanket square) and the pattern often comes with a picture of the finished object, which allows you to see the details. You go into it knowing what the designer’s intention is,” BellaG explained to me. “With the neural-network patterns, there’s no picture, and it doesn’t know what the finished object is supposed to be, which means you don’t know what you’re going to get until you start knitting it. And that affects how you adjust to the pattern ‘mistakes’: The neural network knows the stitch names, but it doesn’t understand what the stitches do. It doesn’t know that a k2tog is knitting two stitches together (a decrease) and a yo is a yarn-over (a lacy increase), so it doesn’t know to keep the stitch counts consistent, or to deliberately change them to make a particular shape.”

Of course, that is what makes neural-network-inspired creativity so beguiling. The computers don’t understand the limitations of our fields, so they often create or ask the impossible. And in so doing, they might just reveal some new way of making or thinking, acting as a bridge into the future of these art forms.

“I like to imagine that some of the techniques and stitch patterns used today [were] invented via a similar process of trying to decipher instructions written by knitters long since passed, on the back of an envelope, in faded ink, assuming all sorts of cultural knowledge that may or may not be available,” the user GloriaHanlon concluded.

The creations of SkyKnit are fully cyborg artifacts, mixing human whimsy and intelligence with machine processing and ignorance. And the misapprehensions are, to a large extent, the point.

05 Mar 23:30

Earth and the moon, separated by 249,000 miles (400,000 km)....

Earth and the moon, separated by 249,000 miles (400,000 km). Sometimes it takes an image like this to really keep things in perspective. /// This photograph was captured by NASA’s OSIRIS-REx spacecraft on September 22, 2017 when it was 804,000 miles (~ 1.3 million km) from Earth and 735,000 miles (~ 1.19 million kilometers) from the Moon. The spacecraft is currently en route to the asteroid Bennu where it will land, collect samples, and then return back to Earth on September 24, 2023. /// Image courtesy of NASA


05 Mar 21:55





15 Feb 23:13

Fantasy map generator

by Nathan Yau

There is literally generatively infinite amount of room in the world for procedurally generated fantasy maps!

This is fun. It’s a fantasy map generator with the following rules:

Project goal is a procedurally generated map for my Medieval Dynasty simulator. Map should be interactive, scalable, fast and plausible. There should be enought space to place at least 500 manors within 7 regions. The imagined area is about 200.000 km2.

Just click and there’s a new map generated on the fly.

Martin O’Leary’s generator is still my favorite, but I think there is plenty of room in the world for procedurally generated fantasy maps.

Tags: fantasy, generator

13 Feb 16:44

yGAN, fluid simulation, tiling.  also, cats.Video from Jason...


Nightmarish cat content :/

yGAN, fluid simulation, tiling.  also, cats.

Video from Jason Salavon demonstrates his ongoing research in generating videos using neural networks with tiled images (such as cat faces) - the end effect is that of a dynamic mosaic:

Jason also had a hand in the GenMo project which is a browser based version for single images, which you can try here

13 Feb 05:07

List: Five More Products Made Just for Women



“Bosses at Doritos have revealed they are to launch a new ‘lady-friendly’ version of the snack which are quieter to eat and a lot less messy.” — New York Post, 2/5/18

- - -

We can’t fault women for loving coffee, but their sipping and slurping can be a distraction and reduce office productivity. Many women often avoid drinking beverages altogether if they know doing so will create a stir. But with the Sippy Cup Quiet Coffee Mug, ladies will be able to enjoy their liquids without garnering any unwanted attention. The design is modeled after sippy cups for children but decorated with inspirational quotes such as, YOU GO GIRL, WORK IT!, and RBG, AMIRITE?

- - -

A new garment to help take up less space in public places hopes to capitalize on the majority of women who fear infringing upon the footprint of those around them. “I just hate feeling like I’m taking up too much space,” says focus group member, Allison Anderson. “The Take Up Less Space Jacket would be perfect for me. I could wear it on the train, in the grocery store, at the movie theater — even at home when my husband is sprawled out on the couch leaving me barely any room to sit."

- - -

“We all know women hate the sound of their own voices,” says Sound of Silence CEO Chad Chadson. “Let’s face it, none of us are fans of women’s voices.” The Sound of Silence microphone aims to lessen the self-consciousness women feel when hearing themselves talk by making them sound more pleasing, whispery, and perhaps even completely silent, to their audiences. Chadson realizes he might run into trouble getting funding for his product, as many companies would rather women not have a microphone at all. But he hopes to sell people on the idea that as long as women think they are being heard, they will feel included.

- - -

This flu season has done quite a number on our sinuses, but women often avoid blowing their noses in public due to embarrassment. Muffle Your Nose Tissues stifle nose-blowing women by slowly suffocating them until they lose consciousness. Consumer Maggie Phillips says, “I hate to be a burden on anyone, so being unconscious while I’m sick sounds like a dream!”

- - -

This high-end device aims to solve the problem of women being seen or heard anywhere. Its lightweight frame makes it easy to carry, while the innovative soundproof material makes virtually anything and everything a woman does while using it completely unnoticeable. Susan Monaghan, a self-proclaimed gadget buff, is excited about the Go Anywhere Portable Soundproof Container. “If I could walk into a room and have no one notice that I’m there, well then that would be… kind of like how it already is.”

07 Feb 09:50


05 Feb 04:03

Check out this remarkable view of New York City captured by the...

Check out this remarkable view of New York City captured by the DigitalGlobe Worldview-3 satellite at an extremely low-angle. We’re pumped to announce that we just added this shot to our Printshop along with four others. You can check out what we added here: particular shot is made possible due to the focal length of the camera in this satellite that is roughly 32 times longer than that of a standard DSLR camera. Within the full expansive Overview, many of the city’s landmarks are clearly visible, including the Statue of Liberty, both JFK and LaGuardia airports, the Freedom Tower, the skyscrapers of Midtown, Central Park, and the George Washington Bridge.

03 Feb 02:52

will knight on Twitter: ““Human Uber,” developed in Japan,...


Are we already living in a dystopia?

will knight on Twitter: ““Human Uber,” developed in Japan, provides a way to attend events remotely using another person’s body. “It’s surprisingly natural” says its inventor, Jin Rekimoto of Sony #emtechasia…

02 Feb 23:30

For New Scientist. Order my book of cartoons ‘Baking with Kafka’...

For New Scientist.
Order my book of cartoons ‘Baking with Kafka’ here:

26 Jan 02:59

Saturday Morning Breakfast Cereal - Hubris


I was SO irritated by the geographic location of the penguins!

Click here to go see the bonus panel!

If you're more irritated about the geographical location of the penguins than the fact that the penguins can talk, I have nothing to say to you.

New comic!
Today's News:
21 Jan 00:10

Marbles on a Kinetic Block Track Perfectly Synchronized with Tchaikovsky’s “Waltz of the Flowers”

by S. Abbas Raza

This is tremendously satisfying!

16 Jan 19:58

newdarkage:Andy Coravos on Twitter: “@instagram strategically...



15 Jan 11:05

Check out this incredible shot of La Sagrada Familia in...



Check out this incredible shot of La Sagrada Familia in Barcelona, Spain. Designed by Catalan architect Antoni Gaudí, the church has been perpetually under construction since 1882. This view also captures the Eixample District - the area of the city that contains apartments with communal courtyards, a strict grid pattern, and octagonal intersections.

See more on Instagram here:

Image captured from helicopter by Ian Harper