Shared posts

27 Jun 05:17

A Dozen Words for Misunderstood: Language and Thoughts

Few fields of study suffer from a more complete public misunderstanding than linguistics. It isn’t uncommon for a linguist to be asked, on meeting a non-linguist, how many languages he or she speaks, or to hear the exclamation, “Oh dear, I must watch my grammar!” Linguists study languages and their structures, but speaking many of them isn’t a job requirement, nor is being a professional grammar scold. A slightly rarer misimpression is usually held by those with just enough knowledge to be dangerous. These people think they flatter a linguist when they say how important linguistics is, “because what we think depends on the words we use to think it.”

This last belief is the bugbear that’s been eating John McWhorter’s trash, and that he hopes to kill off once and for all with his latest book, The Language Hoax: Why the World Looks the Same in Any Language. McWhorter’s writing appears frequently in the liberal New Republic and the conservative City Journal, often on the subject of race and politics. (McWhorter subscribes to a number of political heterodoxies.) But before he went into punditry, McWhorter trained as a linguist and contributed to the study of creolization, the process by which two or more languages coalesce into a full-featured third language.

The belief in question—that the languages we speak shape the thoughts we think—is known in linguistics as the Sapir-Whorf hypothesis, and among the linguistic establishment, Whorfianism has fallen on very bad times indeed. The hypothesis’ namesakes, Edward Sapir and Benjamin Whorf, have been dead for 70 years, and in my own linguistics classes I rarely heard them invoked except to be ridiculed, like biologists of yore who thought maggots grew spontaneously from rotting meat, or historians who thought the world began 6,000 years ago. What Whorfianism claims, in its strongest form, is that our thoughts are limited and shaped by the specific words and grammar we use. Mayans don’t just speak Mayan; they think Mayan, and therefore they think differently from English speakers. According to Sapir-Whorf, a person’s view of the world is refracted through her language, like a pair of spectacles (not necessarily well-prescribed) superglued to her face.

Whorf came up with his version of the hypothesis through his study of the language of the Hopi Indians. Hopi, he believed, lacks tense markers, like the “-ed” in “I walked to the store,” or words meaning “before” and “after.” In English we can’t say a sentence about walking to the store without saying when the walking happened. Whorf turned out to be wrong about Hopi time-words and tense-markers, McWhorter notes: Hopi has them. But Whorf viewed Hopi’s supposed lack of them as a sign that the Hopi see the world with less reference to time than we do, and that they are a culturally “timeless” people, living in communion with eternity while we English speakers are slaves to tense markers and clocks.

Perhaps the most famous invocation of Sapir-Whorf is the claim that because Eskimos have dozens of words for snow, they have a mental apparatus that equips them differently—and, one assumes, better—than, say, Arabs, to perceive snow. (I once watched the wintry film Fargo with an Egyptian who called everything from snowflakes to windshield-ice talg—the same word she used for the ice cube in her drink.) To get a hint of why nearly all modern linguists might reject this claim, consider the panoply of snow-words in English (sleet, slush, flurry, whiteout, drift, etc.), and the commonsense question of why we would ever think to attribute Eskimos’ sophisticated and nuanced understanding of snow to their language, rather than the other way around. (“Can you think of any other reason why Eskimos might pay attention to snow?” Harvard’s Steven Pinker once asked.)

McWhorter’s first order of business in his book is to show that the Sapir-Whorf hypothesis is testable, and that the cognitive differences between speakers of different languages turn out to be infinitesimal. A Sapir-Whorfian might expect a speaker of a language with a sophisticated inventory of words for colors—not just Newton’s seven basic hues, but hundreds of names for the gradations in between—to be better at differentiating colors. Unlike English speakers, for example, Russians have distinct words for dark and light blue. But if you show a Russian the two shades of blue, he differentiates them just 124 milliseconds faster—not even the blink of an eye—than an English speaker does. Some languages are particularly forlorn in their inventory of color words; the Herero people of southwest Africa use the same word for green and blue. But they have no difficulty distinguishing “the color of a leaf and the color of the sky,” McWhorter notes. “Living on the land as they do would seem to have made it rather difficult to avoid noticing it at least now and then.”

What’s amazing about Sapir-Whorf, given the fairly negligible differences that linguists have thus far detected between language speakers, is that educated people from many other disciplines—philosophy, cultural studies, and literature—claim that their own thoughts are shaped by language in exactly the way Sapir-Whorf proposed, and that linguists now consider risible.

If you look for folk-Whorfianism you’ll see it all over. McWhorter rightly calls its appeal “almost narcotic.” George Orwell’s essay “Politics and the English Language” (1946), which generations of writers have accepted as a model of clarity and common sense, takes as its premise something awfully like Whorfianism: “the slovenliness of our language makes it easier for us to have foolish thoughts.” The novelist Anthony Burgess, who wrote two delightful books on linguistics, lapsed into Whorfianism when explaining why he wanted the protagonist of A Clockwork Orange to borrow the Russian ruka (“rooker,” in the book) as his slang for “hand.” “Russian makes no distinction,” Burgess wrote, “between … hand and arm, which are alike ruka. This limitation would turn my horrible young narrator into a clockwork toy with inarticulate limbs.” A character in a novel by John Crowley rhapsodizes about how his Puerto Rican girlfriend’s Spanish enhances her sexiness by surrounding her with gendered nouns and making the world “a constant congress of male and female, boy and girl.” (What must life be like for the Nasioi speakers of New Guinea, whose language has over 200 genders? Like living in the Castro District of San Francisco, one imagines.)

So The Language Hoax is the work not of John McWhorter, pundit, but of John McWhorter, zombie killer: slayer of an undead theory. He is a talented guide to linguistics, one of a number of gifted popularizers of the field, lay and professional, including Burgess (stronger on philology and phonetics than on language and mind), Pinker, Geoffrey Pullum, Robert Lane Greene, and William Safire. McWhorter’s writing is just a bit less graceful than that of these others, in part because of his chatty habit of including bafflingly irrelevant detail (we learn that one of his teachers looked like a sad Tom Petty, and that McWhorter thinks Nivea cream smells heavenly).

But on the substance, McWhorter is exhaustive, fair-minded, and convincing. He is withering toward most of Sapir-Whorf’s lay acolytes, yet also generous to the academic linguists and psychologists who have led a minor neo-Whorfian revival. These researchers, such as Lera Boroditsky of the University of California-San Diego, have found minuscule ways—far smaller than anything proposed by the original Whorfians—in which language does affect thought. McWhorter praises Boroditsky and others for elegant and sound experimental design, but says they have resurrected the hypothesis in such a diluted form that their work only serves to show that language barely affects thought at all, as measured cognitively.

Even academic neo-Whorfians haven’t found evidence for a further claim of Whorfians, that languages express “cultural needs”—that the intricacies of grammar reflect cultural facts about the groups who speak it. Whorf’s original hypothesis about the Hopi—that they are “timeless” because their language doesn’t mark tense—is but one example. McWhorter offers another, this time of a language that marks more, not less, than we do in English. Tuyuca, an Amazonian language, marks sentences with endings that show the provenance of the information the speaker is giving. So just as in English we have no choice but to include a marker on our verbs to say when something happened (“he walked to the store”), Tuyuca speakers are forced to add a marker that says how they know the information: I hear (-gí), I see (-í), they say (-yigï). Having these “evidential markers” would, in a Whorfian world, make Tuyuca speakers more attuned to the accuracy and provenance of information—perhaps making them more skeptical, or better on the witness stand.

Again, McWhorter shows that evidential markers just don’t correlate with these broader cultural traits. Korean has them, and researchers have found that Korean children are no better at attributing information to its sources than English-speaking children, whose language lacks them. Of European languages, Bulgarian is perhaps the only major one that has them, and no one seriously thinks Bulgarians are the supreme skeptics of Europe. And plenty of cultures (the ancient Greeks, the French philosophes) seem to have been plenty skeptical without the assistance of Tuyuca-like suffixes. Indeed, this Whorfian attempt at a compliment to the Tuyucans for their skepticism comes across as a slander against the speakers of languages that have no evidential markers. Are those languages less skeptical? Do languages that mark very few things make their speakers ignorant? Chinese syntax is particularly bare in this regard, and its speakers regularly say sentences of the form “he go store,” without explicit marking of time, evidence, or much else. Yet no one considers Chinese speakers especially lacking in discernment. McWhorter encourages us to see linguistic structures as nothing more than products of “chance,” and to accept that we are all “mentally alike.”

McWhorter’s book should convince any doubters that strong-form Whorfianism is a bad idea whose time will never stop coming. So why does this bad idea, apparently without evidence, keep getting re-discovered and confidently touted? It is, after all, not the sort of thing one would expect people to want to believe. It implies that we are prisoners of our dictionaries, and that our words distort our worlds in ways we cannot escape. The belief in a strong version of Sapir-Whorf means that other people are in a sense unknowable, and that some have virtues and skills (skepticism, color perception) that are unattainable to us. Call it a belief in the inevitable inequality of language speakers.

One reason for its persistence, McWhorter proposes, is that we are humble. Social and cognitive science has repeatedly shown that our minds are particularly bad at knowing their limits. A Whorfian linguistic constraint would be just one more such distortion, like the visual blind-spot that is on the retina of every human who has ever lived, but that most of us never learn about unless we take a neuroscience class. It also appeals to our sense that human diversity is greater than we once thought. McWhorter’s anti-Whorfian sentence “all humans are mentally alike” is, for me anyway, not an immediately appealing one. Whorfianism encourages the belief that every language is a beautiful and unique snowflake—or some other snow-entity, thinkable only by Eskimos. But it’s telling, McWhorter notes, that Whorfians tend only to celebrate languages for their differences (making Tuyuca speakers out to be skeptics, say), and never vilify them (making Chinese out to be credulous).

I would propose another attraction: Sapir-Whorf lets us off the hook. The theory suggests that some thoughts or ways of seeing the world are simply not possible for us—and that can be comforting, particularly if it means we’re limited in the same way that thousands or millions of other speakers of our language are limited. Less comforting is the post-Whorfian reality: that our potential thoughts are not limited by constraints of language, but by our own deeper inabilities to imagine, perceive, and feel. For those who thought they could blame words, that is an unsettling thought in any language.

This post originally appeared in the May/June 2014 issue of Pacific Standard as “A Dozen Words for Misunderstood.”

17 Jul 20:49

Snowden: "If I end up in chains in Guantanamo, I can live with that"

Edward Snowden has repeatedly stated that he won't be returning to US soil anytime soon. The NSA whistleblower is adamant that President Obama's administration would never grant him a fair trial for leaking classified intel that led to startling disclosures about government surveillance and mass data collection. But he also seems to be at peace with the worst-case outcome, one that would see the former government contractor captured and jailed for espionage. "If I end up in chains in Guantanamo, I can live with that," Snowden said in a recent interview with Guardian editor-in-chief Alan Rusbridger and reporter Ewen MacAskill. The pair spoke with Snowden for more than seven hours.

Obviously Snowden's statement is a bit extreme; the well-spoken computer specialist, who was granted asylum in Russia, has never refrained from theatrics. But Snowden's point, that a heavy-handed trial and just one judge could quickly throw him behind bars for life, is a concern he's frequently raised ever since fleeing the US. "I’m much happier here in Russia than I would be facing an unfair trial in which I can’t even present a public interest defense to a jury of my peers," he said. "We’ve asked the government again and again to provide for a fair trial, and they’ve declined."

During the interview, Snowden also touched on "bullshit" accusations that he's in cahoots with the Russian government, whether he's being watched at his temporary living quarters ("I think it’s reasonable to assume that I am under surveillance") and George Orwell's 1984. "Contrary to popular belief, I don't think we're in a 1984 universe," Snowden said, being careful to stress that the potential is always there. "We should not bind ourselves to the limits of the author's imagination. Times have shown that the world is much more unpredictable and dangerous than that."

Above all else though, he had a message for professionals — journalists, doctors, accountants, etc. — tasked with protecting source and client confidentiality: beef up security, and use encryption. "What last year's revelations showed us was irrefutable evidence that unencrypted communications on the internet are no longer safe." The Guardian will publish its full interview with Snowden on Friday, but the video interview posted today is absolutely worth a watch.

15 Jul 21:30

Dungeons & Dragons Has Influenced a Generation of Writers

When he was an immigrant boy growing up in New Jersey, the writer Junot Díaz said he felt marginalized. But that feeling was dispelled somewhat in 1981 when he was in sixth grade. He and his buddies, adventuring pals with roots in distant realms — Egypt, Ireland, Cuba and the Dominican Republic — became “totally sucked in,” he said, by a “completely radical concept: role-playing,” in the form of Dungeons & Dragons.

Playing D&D and spinning tales of heroic quests, “we welfare kids could travel,” Mr. Díaz, 45, said in an email interview, “have adventures, succeed, be powerful, triumph, fail and be in ways that would have been impossible in the larger real world.”

“For nerds like us, D&D hit like an extra horizon,” he added. The game functioned as “a sort of storytelling apprenticeship.”

Now the much-played and much-mocked Dungeons & Dragons, the first commercially available role-playing game, has turned 40. In D&D players gather around a table, not a video screen. Together they use low-tech tools like hand-drawn maps and miniature figurines to tell stories of brave and cunning protagonists such as elfish wizards and dwarfish warriors who explore dungeons and battle orcs, trolls and mind flayers. Sacks of dice and vast rule books determine the outcome of the game’s ongoing, free-form story.

For certain writers, especially those raised in the 1970s and ’80s, all that time spent in basements has paid off. D&D helped jump-start their creative lives. As Mr. Díaz said, “It’s been a formative narrative media for all sorts of writers.”

The league of ex-gamer writers also includes the “weird fiction” author China Miéville (“The City & the City”); Brent Hartinger (author of “Geography Club,” a novel about gay and bisexual teenagers); the sci-fi and young adult author Cory Doctorow; the poet and fiction writer Sherman Alexie; the comedian Stephen Colbert; George R. R. Martin, author of the “A Song of Ice and Fire” series (who still enjoys role-playing games). Others who have been influenced are television and film storytellers and entertainers like Robin Williams, Matt Groening (“The Simpsons”), Dan Harmon (“Community”) and Chris Weitz (“American Pie”).

With the release of the rebooted Dungeons & Dragons Starter Set on Tuesday, and more advanced D&D rule books throughout the summer, another generation of once-and-future wordsmiths may find inspiration in the scribbled dungeon map and the secret behind Queen of the Demonweb Pits.

Mr. Díaz, who teaches writing at the Massachusetts Institute of Technology, said his first novel, the Pulitzer Prize-winning “The Brief Wondrous Life of Oscar Wao,” was written “in honor of my gaming years.” Oscar, its protagonist, is “a role-playing-game fanatic.” Wanting to become the Dominican J. R. R. Tolkien, he cranks out “10, 15, 20 pages a day” of fantasy-inspired fiction.

Though Mr. Díaz never became a fantasy writer, he attributes his literary success, in part, to his “early years profoundly embedded and invested in fantastic narratives.” From D&D, he said, he “learned a lot of important essentials about storytelling, about giving the reader enough room to play.”

And, he said, he was typically his group’s Dungeon Master, the game’s quasi-narrator, rules referee and fate giver.

The Dungeon Master must create a believable world with a back story, adventures the players might encounter and options for plot twists. That requires skills as varied as those of a theater director, a researcher and a psychologist — all integral to writing. (Mr. Díaz said his boyhood gaming group was “more like an improv group with some dice.”)

Sharyn McCrumb, 66, who writes the Ballad Novels series set in Appalachia, was similarly influenced, and in her comic novel “Bimbos of the Death Sun” D&D even helps solve a murder.

“I always, always wanted to be the Dungeon Master because that’s where the creativity lies — in thinking up places, characters and situations,” Ms. McCrumb said. “If done well, a game can be a novel in itself.”

What makes a D&D story different from novels and other narratives is its improvisational and responsive nature. Plotlines are decided as a group. As a D&D player, “you have to convince other players that your version of the story is interesting and valid,” said Jennifer Grouling, an assistant professor of English at Ball State University who studied D&D players for her book, “The Creation of Narrative in Tabletop Role-Playing Games.”

If a Dungeon Master creates “a boring world with an uninteresting plot,” she said, players can go in a completely different direction; likewise, the referee can veto a player’s action. “I think D&D can help build the skills to work collaboratively and to write collaboratively,” she added. (Mr. Díaz called this the “social collaborative component” of D&D.)

Ms. Grouling also cited “a sense of control over stories” as a primary reason people like role-playing games. “D&D is completely in the imagination and the rules are flexible — you don’t have the same limitations” of fiction, or even of a programmed video game, she said. A novel is ultimately a finished thing, written, edited and published, its story set in stone. In D&D, the plot is always fluid; anything can happen.

The playwright and screenwriter David Lindsay-Abaire, 44, who wrote the Pulitzer Prize-winning play “Rabbit Hole,” said D&D “harkens back to an incredibly primitive mode of storytelling,” one that was both “immersive and interactive.” The Dungeon Master resembles “the tribal storyteller who gathers everyone around the fire to tell stories about heroes and gods and monsters,” he said. “It’s a live, communal event, where anything can happen in the moment.”

Mr. Lindsay-Abaire said planning D&D adventures was “some of the very first writing that I did.” And the game taught him not just about plot but also about character development.

Playing D&D has also benefited nonfiction writers. “Serving as Dungeon Master helped me develop a knack for taking the existing elements laid out by the game and weaving them into a coherent narrative,” said Scott Stossel, editor of The Atlantic and author of “My Age of Anxiety: Fear, Hope, Dread, and the Search for Peace of Mind.” “And yet you were constrained by the rules of the D&D universe, which in journalism translates into being constrained by the available, knowable facts.”

Mr. Lindsay-Abaire agreed that fictional worlds need rules. “For a story to be satisfying, an audience needs to understand how the world works,” he said. “ ‘The Hunger Games’ is a perfect example of: ‘O.K., these are the rules of this world, now go! Go play in that world.’ ”

Over and over again, Ms. Grouling said, tabletop role players in her survey compared their gaming experience to “starring in their own movies or writing their own novels.”

As for Mr. Díaz, “Once girls entered the equation in a serious way,” he said, “gaming went right out the window.” But he said he still misses D&D’s arcane pleasures and feels its legacy is still with him: “I’m not sure I would have been able to transition from reader to writer so easily if it had not been for gaming.”

14 Jul 05:14

Where Online Services Go When They Die

The author, then 11 years old, using Prodigy for the first time on Christmas Day, 1992 (Benj Edwards)

Michael Doino approached the late hours of October 1, 1999, with a lingering sense of dread.  It was finally time, after 11 years, to pull the plug on Prodigy Classic, a commercial online service he had helped shepherd from a plucky upstart into a nationwide giant.

"It was very bittersweet, very sad," recalls Doino, a veteran project manager at the company. "I had been there before the Prodigy service went live."

Some time before midnight, Doino logged into the main Prodigy Classic server and, as instructed, uploaded a file to redirect Prodigy Classic users to the company's newer Prodigy Internet service.  At that moment, the written record of a massive, unique online culture, including millions of messages and tens of thousands of hand-drawn pieces of digital art, seemingly vanished into thin air.

Doino, front and center, 1999 (Prodigy)

It had nowhere to go but away. That data was never on the Internet; it existed in a proprietary format on a proprietary network, far out of reach of the technological layman. It was then shuffled around, forgotten, and perhaps overwritten by a series of indifferent corporate overlords.

Fifteen years later, a Prodigy enthusiast named Jim Carpenter has found an ingenious way to bring some of that data back from the dead. With a little bit of Python code and some old Prodigy software at hand, Carpenter, working alone, recently managed to partially reverse-engineer the Prodigy client and eke out some Prodigy content that was formerly thought to have been lost forever.

"Honestly, I wasn't a huge fan of Prodigy," says Carpenter, a 38 year-old freelance programmer based in Massachusetts, recalling his time on the service around the turn of the 1990s. "I had already been using the Internet for a couple of years and Prodigy seemed so closed in. But I still used Prodigy every single day. It was the graphics."

It was Carpenter's drive to see those graphics once again that got him fiddling with Prodigy clients in late 2012. "Finding decent color screenshots of Prodigy is nearly impossible," says Carpenter.

He knew the sign-on screen was stored on the hard drive, so he began to wonder what else he might find in the client software. Using a hex editor, Carpenter fiddled with the client software until he found even more graphical data. "As far as I knew, the only thing I might be able to get is a screenshot of the set-up options dialog."

And he did.  But what he found next blew his mind.

* * *

When any sizable online service disappears, a piece of our civilization's cultural fabric goes with it. In this case, the missing cultural repository is Prodigy, a consumer-oriented online service that launched in 1988 as a partnership between Sears and IBM. Users accessed it by dialing into regional servers with a personal computer and a modem over traditional telephone lines. Once connected, they could trade emails, participate in online message board discussions, read the daily news, shop for mail-order items, check the weather, stocks and sports scores, play games, and more.

Prodigy even devoted a portion of the user's screen to graphical banner ads. It was very much like a microcosm of the modern Internet—if the entire World Wide Web were published by a single company. Over its 11-year lifespan, a generation of Americans grew up with Prodigy as part of their shared cultural heritage. In an earlier era, we might have pointed to another common cultural experience—say, Buster Keaton films—as a frame of reference for an entire generation. Everybody saw them, everybody referenced them. And while Prodigy was nowhere near as popular as Buster Keaton among the general public, hundreds of thousands of people with a computer and a modem in the early 1990s tried Prodigy at least once. What those early online explorers saw when they logged in was, to them, glorious: colors, fonts, illustrations, and a point-and-click interface—features which, at Prodigy's launch in 1988, were entirely new. Prior to Prodigy, competitors like CompuServe and GEnie forced users to type obtuse commands to get any meaningful result (and that result also happened to be a screen full of lifeless text).

Prodigy gained its distinctive flair from a now-forgotten graphical protocol called North American Presentation Level Protocol Syntax, or NAPLPS for short. NAPLPS was a product of the brief Teletext era of the late 1970s, when TV networks sought to piggyback extra digital information such as weather forecasts or sports scores using something called the "vertical blanking interval" of a TV broadcast signal. The vertical blanking interval could only hold a small amount of data, so engineers devised a way to present digital color graphics and text in the most economical way possible. NAPLPS did this by reducing an image into a set of mathematical instructions (i.e. "draw an oval at this location and fill it with blue") instead of storing data on every pixel in a bitmap image like JPEG or GIF files do today.
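
To get a feel for the economics, here is a minimal Python sketch contrasting the two approaches. The opcode and record layout below are invented for illustration; they are not the real NAPLPS encoding:

    import struct

    # A toy "drawing instruction" in the spirit of NAPLPS. The opcode and
    # field layout are hypothetical: one command byte, then the oval's
    # center, its radii, and an RGB fill color.
    FILL_OVAL = 0x01
    instruction = struct.pack(">B4H3B", FILL_OVAL, 120, 80, 40, 25, 0, 0, 255)
    print(len(instruction), "bytes as a drawing instruction")  # 12 bytes

    # The same 240x160 region stored as raw 8-bit RGB pixels:
    print(240 * 160 * 3, "bytes as an uncompressed bitmap")  # 115,200 bytes

Over the dial-up lines of the era, a difference like that decided whether a screen appeared in a blink or crawled in over many minutes.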

Screenshot of Where in the World is Carmen San Diego? on Prodigy circa 1988 (Prodigy)

The NAPLPS method required a custom piece of hardware or software, commonly called a "terminal" or "client," on the receiving end to receive the drawing instructions and to translate them into an image or page layout on the user's screen. Teletext never caught on in the US (although it did flourish in Europe), nor did Videotex, the two-way interactive version of the concept that required remote computers accessed by modem and corresponding terminals hooked to TV sets.

* * *

Coming on the heels of Videotex mania, which swept the Western world in the late 1970s and early 1980s, Sears, CBS, and IBM joined together in 1984 to craft a Videotex service of their own. They called their partnership Trintex: "Tri" for the three companies, and "tex" for Videotex. The plan, as conceived from a corporate standpoint, was almost naively simple: the world's largest retailer (Sears) would provide online shopping. The world's largest media conglomerate (CBS) would provide content and information, and the world's largest computer company (IBM) would provide the underlying technology.

How the trio got there, however, would turn out to be far more complicated. A very expensive technological effort (which, among other minor hiccups, required creating a nationwide proprietary telecommunications network with hundreds of nodes) would end up inadvertently crafting a consumer online world for the everyman that eerily presaged the Internet we know today—if in a Bizarro Superman sort of way.

Looking back, Prodigy's technology felt like a centralized, parallel universe Internet where technologies looked very, very similar to what we know now but were in fact fundamentally different—like lifting the hoods of two identical-looking cars and finding a diesel engine under one and a gasoline engine under the other. They both get you there, but in different ways. Even so, the similarities were close enough that patents, legal precedents, and online techniques forged from the Trintex and Prodigy partnerships still loom over the Internet in ways that few in the public understand.

To put the Trintex partnership in modern terms, it was as if Wal-Mart, Comcast, and Apple were to team up today and rewrite the rules of media distribution and general retail commerce. It's a terrifying prospect. But the online landscape back then was raw and rough, undefined and relatively new, so few feared a partnership from such a trinity of giants in 1984.

And, as giants are wont to do, it took them four years to bring the service to market. Along the way, CBS dropped out, and Sears and IBM were left to fend for themselves. The remaining pair changed the name from "Trintex" to "Prodigy" not only to reflect the loss of a third partner, but also to reposition the company with a mass-market name that would appeal to the general public.

After launch in 1988, Prodigy soared in the consumer online space until it was handily surpassed in subscribership by AOL in the early-to-mid 1990s. Then, of course, it was wholly trampled in the late 1990s by that hungry, all-consuming digital maw called the Internet.

* * *

At the time it was finally shuttered, Prodigy was an absolute dinosaur technologically. Built from systems that were state-of-the-art at the dawn of the 1980s, and existing on top of a complex and proprietary network infrastructure that was always separate from the Internet, Prodigy existed in spite of itself.

Eager to pivot entirely to its burgeoning ISP business, Prodigy's parent sought a convenient escape route from Prodigy Classic in the late 1990s. Subscribership to its Classic service had dwindled to 208,000—down from 1 million a few years earlier—and the infrastructure was costly to maintain. Conveniently citing the "Y2K problem," Prodigy's CEO, Samer Salameh, announced a Classic shutdown in early 1999.

After that shutdown, loyal Prodigy customers, who had hung on to the bitter end, were suspicious about the stated reasons for the closing. And they were mad. Fifteen years later, we can now confirm that their suspicions were correct: "As far as I know, Prodigy Classic being shut down was not influenced by Y2K issues," recalls Doino, the Prodigy employee who actually pulled the plug on the service in 1999.

But even an insurmountable tide of customer goodwill cannot stop one of the most sacred laws of the free market: that unprofitable products, even if they happen to be one-of-a-kind repositories of digital human culture, eventually meet their end at the hands of a corporation that needs to make money to survive. When Prodigy Classic shut down, its servers entirely shifted over to servicing the ISP portion of Prodigy's business. The network of regional servers—called Prodigy Local Sites—was dismantled. The actual Prodigy Classic data became neglected, and its whereabouts are uncertain, although I'm trying my best to track it down.

Even if Prodigy's archives are found, various former Prodigy employees say the data is trapped behind a technological minefield of obsolete storage formats, protocols, programming languages, and computer systems. And, naturally, each one must be present and working in tandem to have any hope of ever accessing the information. In other words, to resurrect some of Prodigy, you'd have to make all of it work again.

"Numerous attorneys fighting patent cases have asked me about this," says Les Briney, Prodigy's former executive director of technology and architecture.  (Briney is describing IBM's existing portfolio of lucrative Prodigy-related patents, which apply to many parts of the modern Web.)  "My estimate is in the million dollar range to do this."

But plenty of things seem costly-to-impossible until you actually do them. Just ask Carpenter, the programmer who stumbled upon something stunning when he was tinkering with Prodigy client software. He’d uncovered some of the old graphical images he was after, but then he found an unexpected trove leading into the past: "Then I discovered STAGE.DAT."

Prodigy in 1992, as captured on a home video by the author's father. (Benj Edwards)

STAGE.DAT, as it turns out, was one of the Prodigy client's two cache files. These two files, CACHE.DAT and STAGE.DAT, stored both temporary and frequently used data on the user's machine to speed up page load times. (This same STAGE.DAT got Prodigy in trouble in the early 1990s when users discovered that it could contain fragments of data culled from their PCs. As it turns out, Prodigy's client was filling in "empty" portions of STAGE.DAT with random snippets of system memory. Users were convinced Prodigy was spying on them, uploading this data to its servers (it wasn't); Prodigy denied this and released a tool for the paranoid to zero out their STAGE.DAT files.)

Prodigy's entire architecture, in fact, was based on this caching system, with a central server at the hub in Yorktown Heights, New York, and hundreds of regional caching servers spread throughout the US. That way, server load was distributed, and loading times were minimized from the user's standpoint. Data would propagate from the top down. Whenever a Prodigy user called up a page, a portion of that data was ultimately downloaded to his machine, much in the way web browsers cache HTML and image data today.

So here's the key to Carpenter's discovery: Whenever a user last dialed into Prodigy before it shut down in 1999, the data saved to STAGE.DAT was frozen in time like a mosquito stuck in digital amber. Carpenter found a way to tap into that amber and extract the data. His series of Python programs reads through a previously used STAGE.DAT file, generates a list of pointers to the pages or object data contained within, then directs the Prodigy client to display them one at a time so he can take screenshots.
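
In outline, the first stage of that pipeline might look like the sketch below. The real STAGE.DAT layout is proprietary and undocumented, so the magic value and record header assumed here are stand-ins, not the actual format:

    import struct

    def find_objects(path, magic=b"\xAB\xCD"):  # magic value is invented
        """Yield (offset, object id, length) for records found in the cache.

        Assumes a hypothetical 8-byte record header: 2-byte magic,
        4-byte object id, 2-byte length.
        """
        data = open(path, "rb").read()
        pos = data.find(magic)
        while pos != -1:
            obj_id, length = struct.unpack_from(">IH", data, pos + 2)
            yield pos, obj_id, length
            pos = data.find(magic, pos + 1)

    # Each pointer is then handed to the real Prodigy client, the only
    # software that knows how to render the object it points to.
    for offset, obj_id, length in find_objects("STAGE.DAT"):
        print(f"object {obj_id:#010x} at offset {offset}, {length} bytes")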

The fact that Carpenter has to go through such a Rube Goldbergian series of steps to get at these images is a result of Prodigy's complexity. As it turns out, the service didn't use the vanilla NAPLPS standard to render its graphics. Prodigy's graphically rich screens—most of which were hand-drawn by Prodigy's staff of artists—existed as a proprietary, object-oriented superset of NAPLPS. Different portions of any given Prodigy page, including graphics, text, and interactive elements, existed as separate "objects" that the Prodigy client assembled onto the user's display.

Like the caching system, this object-oriented behavior was born out of both technological necessity and a desire to reduce business costs, explains Robert Filepp, one of the original engineers who designed Prodigy's backend technology. Prior to the birth of Prodigy, AT&T held a limited Videotex subscription service trial in New Jersey. During the trial, says Filepp, "[AT&T] had a game called 'Make a Monster' in which users could take out body parts and decide to put them together into different pieces." But the monsters didn't actually have separate body parts, data-wise. "The artists had to create every possible combination and permutation of content, one per page."

At the end of the trial, AT&T had almost as many people developing content as they did subscribers, since each and every new piece of information introduced on the service required an entirely different screenful of NAPLPS artwork. To avoid this problem, David Waks, Prodigy's former head of research and development—who could also be called the godfather of Prodigy's architecture—conceived of the object-oriented approach where different portions of the screen could be updated independently of each other.  But this was only one part of his greater plan.

In sketching out the specifications for Prodigy, Waks envisioned a world in which, instead of dumb terminals hooked up to TV sets, users of a Videotex service would use custom client software running on personal computers—a dramatic leap forward, conceptually. In order for the same interactive Videotex content to be displayed on multiple incompatible platforms, Waks imagined a sort of virtual machine environment that would execute an embedded programming language while also displaying resolution-independent vector graphics to that machine's best ability. So, not only to save artist manpower, but to save bandwidth, it was best to break each page into chunks that could be shuffled around to regional servers and cached, then fed into graphical templates hosted on users' machines.

Screenshot of a weather forecast on Prodigy circa 1988 (Prodigy)

With Waks' approach, later adopted by Prodigy with some modifications, every page became a package of objects: some of the objects could be programs, some of them could be text, and some of them could be graphics. It was up to the client to reassemble them in the correct manner. Even today, the vintage Prodigy "Reception System" client software (as Prodigy called it) is the key to unlocking the service's graphically rich pages. With 250,000 lines of C++ code created and modified by 30 engineers over the course of a decade, the Reception System is very difficult to reverse engineer and duplicate. Carpenter, spry as he is, isn't up to that challenge yet. But his gears are turning.
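
Conceptually, though, the assembly job is easy to caricature. In the sketch below, the object types and fields are simplified inventions, nothing like the real Reception System's internals:

    # A page as a package of typed objects; the client dispatches on type
    # to build up the screen.
    def assemble_page(objects):
        screen = []
        for obj in objects:
            if obj["type"] == "text":
                screen.append(f"[text at {obj['pos']}] {obj['body']}")
            elif obj["type"] == "graphic":
                screen.append(f"[vector art at {obj['pos']}] {obj['body']}")
            elif obj["type"] == "program":
                screen.append(f"[interactive element: {obj['body']}]")
        return "\n".join(screen)

    page = [
        {"type": "graphic", "pos": (0, 0), "body": "weather map"},
        {"type": "text", "pos": (0, 12), "body": "Partly cloudy, high 72"},
        {"type": "program", "body": "five-day forecast button"},
    ]
    print(assemble_page(page))

Because each object is addressed and cached independently, updating one piece of a page never requires resending the rest, which is exactly the economy that motivated the design.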

"Some day I'd like to create something to emulate the Prodigy backend and serve up requested objects to the client," says Carpenter, who plans to use the existing Prodigy client as the interface. "Perhaps I could bring Prodigy back online like hobbyists have brought QuantumLink back."

We may see a day, thanks to the efforts of devoted hobbyists like Carpenter, when anyone on the Internet can view original Prodigy content on the web. Imagine a JavaScript-based Prodigy Reception System in the vein of JavaScript MESS that runs in any browser and renders original Prodigy pages in all their original, dynamic glory. After all, each Prodigy page is not just a static screen, but a collection of potentially interactive objects.

A few limited simulations of Prodigy already exist, like this MadMaze recreation, hosted on my website. But to appreciate the full cultural value of what Prodigy brought to millions of users for over a decade, the public deserves to have continued access to the real thing.

To do so, Carpenter needs more data, and he's turning to the Internet for help.

"I need everything in people's old C:\PRODIGY directory," he says. "The whole thing zipped up, because a STAGE.DAT is meant to work with a specific version of a Reception System. The objects within it may use features only found in certain versions of the RS."

The Prodigy log-in screen as it looked in 1991, pulled from a STAGE.DAT file by Jim Carpenter. (Prodigy)

The key, as he mentioned, is to find the files from an existing Prodigy installation. Note that the Prodigy client installation disks themselves are not sufficient because they were never used to connect to the Prodigy service, and thus they do not contain downloaded content in the cache files. (If you have anything you think would help, email me.)

Carpenter also needs technical specifications and source code for the Reception System, which may be floating out there somewhere. Needless to say, finding working Prodigy installations from over 20 years ago is tricky. Luckily, I found one of my working Prodigy installations and gave it to Carpenter last year.  He managed to extract many of the Prodigy on-screen illustrations that you see accompanying this article.

* * *

As we invest more of our lives into the electronic realm, corporate decisions to shut down online services without recourse are beginning to resemble digital acts of Nero burning Rome—cultural history and entire communities are trashed in the process.

"From the beginning, online services have visualized what they do as some sort of product, but this is wrong," says Jason Scott, an archivist at the Internet Archive. "They have become critical public spaces and libraries of community speech. In some cases, you can have hundreds of thousands or even millions of people represented in this data."

We, as members of the public, have to be vigilant about how our commercially driven society treats our shared cultural heritage. As we've seen from previous shutterings of services like Geocities, the world of business can seem painfully callous and indifferent to the needs of history, and that seems unlikely to change any time soon. "I am skeptical that these businesses will spontaneously decide to think in terms of archives and long-term preservation," says Scott. "It's just not in their DNA. Legacy is a luxury to the modern business."

Ultimately, it's up to us, the public, to save what we can.

13 Jul 18:23

Lessons From Brazil's War on Poverty

Brazil is a giant when it comes to soccer. In the late 1990s, it was a giant in another area, this one much less desirable: Brazil had one of the highest levels of income inequality in the world, as home to some of the world’s poorest people, while its richest competed with the wealthiest in the United States and elsewhere.

In 2001, Brazil’s Gini coefficient — the most common (but not necessarily most attractive) measure of inequality — hovered around 0.60, a very high figure by any standard. (A Gini coefficient of 0 represents perfect equality where everyone earns the same income, and 1 represents complete inequality where all the country’s income accrues to a single person.) By comparison, the U.S. — not exactly a bastion of equality — had a Gini coefficient of 0.4 in 2000.
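
For the curious, the coefficient is easy to compute from a list of incomes. Here is a minimal sketch using the standard rank-weighted formula (the incomes in the example are invented):

    def gini(incomes):
        """Gini coefficient: 0 when all incomes are equal, approaching 1
        as all income accrues to a single person."""
        xs = sorted(incomes)
        n, total = len(xs), sum(xs)
        # Weight each income by its rank among the sorted incomes.
        rank_sum = sum((i + 1) * x for i, x in enumerate(xs))
        return 2 * rank_sum / (n * total) - (n + 1) / n

    print(gini([100, 200, 400, 800, 8000]))  # heavily skewed: about 0.69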

But from 2001 to 2007, income inequality in Brazil started to decline at an unprecedented rate: The Gini coefficient fell from above 0.60 to below 0.55, reaching its lowest level in more than 30 years. The incomes of the poorest tenth of Brazilians grew by 7 percent per year, nearly three times the national average of 2.5 percent. In less than a decade, Brazil had managed to cut the proportion of its population living in extreme poverty in half.

This sharp decline coincided with the introduction of Brazil’s first cash transfer programs in 2001. Created to reduce poverty in the short run, these programs also provided incentives to households to invest in their children’s education, health and nutrition. Brazil was following on the success of Mexico, which a couple of years earlier had introduced PROGRESA, perhaps the world’s best-known and most influential conditional cash transfer program. Brazil consolidated its programs into one program, called Bolsa Familia, in 2003.

Bolsa Familia targeted households whose per capita monthly income was less than 120 reais (a yearly income of $828). The government paid these households between 20 and 182 reais per month (between $132 and $1,248 a year) if they met certain conditions: Children under the age of 17 had to regularly attend school; pregnant women had to visit clinics for prenatal and antenatal care; and parents needed to make sure their children were fully immunized by age 5 and received growth check-ups until age 6. It also provided a small allocation to extremely poor households with no strings attached. By 2010, Bolsa Familia had grown into one of the world’s largest conditional cash transfer programs, providing 40 billion reais (about $24 billion) to nearly 50 million people, about a quarter of Brazil’s population.

So what role did Bolsa Familia play in the decline of inequality in Brazil since 2000? With such a large transfer of money from taxpayers to Brazil’s poorest, you’d imagine there must have been some impact, but how much of one? Identifying the causal effects of large, nationwide government programs is challenging. Many factors can affect the distribution of income over time. Shifting demographics, the changing nature of work, and women’s participation in the labor force can all affect income inequality. If you wanted to truly isolate Bolsa Familia’s effect, you could theoretically conduct an experiment — not unlike the trials that pharmaceutical companies routinely do to test a drug’s effectiveness — where you’d randomly assign some communities and not others to the cash transfer program, and then compare inequality between them.

However, this type of social experiment is hard, if not impossible, for governments to conduct for a long period of time. For example, Mexico did randomly assign some eligible communities to PROGRESA while withholding the benefits from other (equally eligible) communities at the start, but this pilot phase lasted only 18 months, after which the program was rolled out to all eligible areas. An 18-month period might have been sufficient to evaluate the effects of the program on children’s school attendance and women’s visits to health clinics, but it was too short a period to evaluate the program’s longer-term impacts on poverty and inequality. In any case, researcher Gala Diaz Langou says that leaving some areas out of the program was not politically feasible in Brazil, so there was no such experimentation with Bolsa Familia.

So if you can’t do a randomized trial, what can you do to assess the program’s effect on Brazil’s drop in income inequality? Economists often try to understand changes in income inequality by quantifying all the elements that affect the distribution of income, such as the proportion of adults who work, the number of hours they work, their hourly wages, whether they have income from other assets, and whether they’re receiving money from the government. Once income is broken down by source at a given point in time, researchers can try to isolate the role of each source in changes in the distribution of income by keeping that factor constant over time and allowing all the remaining factors to vary. While this approach doesn’t identify the causal effect of any one factor on changes in a country’s Gini coefficient, it’s still a useful accounting exercise — helpful in focusing on the main factors associated with the changes in the distribution of incomes.
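
To make the accounting exercise concrete, here is a toy version of it, reusing the gini function sketched earlier. Every figure is invented; the point is only the mechanics of freezing one income source while the others vary:

    # Five households' incomes, split by source, at two points in time.
    labor_2001 = [100, 200, 400, 800, 1600]
    transfers_2001 = [20, 10, 0, 0, 0]
    labor_2007 = [150, 260, 480, 900, 1700]
    transfers_2007 = [80, 40, 5, 0, 0]

    def totals(labor, transfers):
        return [l + t for l, t in zip(labor, transfers)]

    baseline = gini(totals(labor_2001, transfers_2001))
    actual = gini(totals(labor_2007, transfers_2007)) - baseline

    # Counterfactual: labor income evolves as it did, but transfers are
    # frozen at their 2001 levels.
    frozen = gini(totals(labor_2007, transfers_2001)) - baseline

    # Share of the observed Gini decline associated with the transfers.
    print(f"{1 - frozen / actual:.0%}")  # about 45% in this toy example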

Using this approach, two studies — a 2010 paper on Brazil (by Ricardo Barros and co-authors from Brazil’s Institute of Applied Economic Research) and a 2013 paper on a number of countries in Latin America including Brazil (by the World Bank’s Joao Pedro Azevedo and co-authors) — have separately found that government transfers accounted for about 40 percent of the decline in inequality in Brazil, with expansions in pensions and Bolsa Familia (and a related program for people with disabilities) contributing roughly equally to the decline in income inequality. However, of these government transfers, Bolsa Familia was by far the most important component in raising the income levels of Brazil’s poorest households: Between 2001 and 2007, the share of people receiving these conditional cash transfer payments increased by more than 10 percentage points, from 6.5 percent to 16.9 percent. This accounted for the entire increase in the share of households that received non-labor income (i.e. income from sources outside of working a job).

Hence, available estimates suggest that Bolsa Familia contributed about 15 to 20 percent of the decline in income inequality during the decade starting in 2000. These effects were most likely achieved by putting money directly into the pockets of poor households. Because the money is tied to parents’ investing more in their children’s health and education, advocates of the program hope these cash transfers will not only reduce poverty in real time, but keep the next generation out of poverty as well. And it appears Bolsa Familia may also have had some success in this respect: Paul Glewwe of the University of Minnesota and Ana Lucia Kassouf of the University of Sao Paulo found in 2012 that the program has led to improvements in children’s school enrollment and advancement, which could translate into higher incomes for them as adults and further reductions in poverty and inequality.

But if Bolsa Familia only accounted for 15 to 20 percent of the drop in income inequality in Brazil, what contributed the most? The same two studies agree that rising wages among the poor were the main driver of the decline in inequality in Brazil. While their methodologies differ slightly, the studies show that changes in labor income accounted for 55 to 60 percent of the drop in income inequality.

And why did wages for the poor rise? Even before Bolsa Familia, the Brazilian government adopted policies that expanded access to education: Between 1995 and 2005, the average schooling among workers increased by almost two years. At the same time, the hourly wages for a worker with a given level of education rose much faster among the poor than the rest of the population, likely due to the increased demand for low-skilled labor that accompanied the commodity price booms experienced in Brazil, and Latin America more generally, according to research by Leonardo Gasparini of the National University of La Plata in Argentina and co-authors. So, a combination of public policy (expansion of access to education and government transfers to the poor) and favorable market factors (rising wages for low-skilled workers) led to substantial declines in inequality in Brazil.

Income inequality in Brazil and Latin America remains high. Barros and his co-authors estimate that almost two more decades of similar progress are needed to bring income inequality in Brazil down to the world average. Expanding cash transfer programs like Bolsa Familia might be tough for the government, particularly in periods of tighter budgets. However, experimentation with these programs’ design (in Brazil and elsewhere) — for example, expanding Bolsa Familia benefits instead of pursuing continued increases in pensions for older Brazilians — can allow governments to maximize impacts while keeping a lid on program budgets.

11 Jul 03:27

A Breakthrough in Our Understanding of How Intelligence Evolves

Pedro

From TheOldReader.

It's hard to study intelligence in humans — our cultures are incredibly complex, and what counts as "smart" is defined as much by our societies as it is by our genes. So some researchers have turned to chimpanzees to understand what actually gives rise to intelligence in the brain.

A century of scientific investigation into human intelligence has revealed that genes do indeed play an important role in its development, but that cultural and experiential factors can also exert a great amount of control. Intelligence, or IQ – the score on one of several popular, standardized measurements of intelligence – can be modified by an individual's socioeconomic status, for example. A child's early life experiences, such as abusive or neglectful parenting, also impact intelligence. As with most things in psychology, understanding the complex dance of genes and environment becomes confusing, especially since genes and environment can themselves become correlated. One popular way to eliminate those complications is to study animals.

Early Approaches to Animal Intelligence

Animal intelligence has long been of interest to psychologists, ethologists, and anthropologists, but until fairly recently the overwhelming majority of studies have taken a behaviorist approach. It's an approach that largely eschews biology in favor of experience. Think: Skinner and Watson.

It has only been in the last couple decades that animal behavior researchers have begun to think about the socio-biological factors that contribute to animal behavior and cognition, and even more recently that researchers have begun to think seriously about individual differences among animals when it comes to intelligence and cognition.

Now that measurements for non-human intelligence have started to catch up with those that have been developed for understanding human intelligence, researchers are in a position to turn to animals in order to better understand the development of intelligence. Just how important are genes? How much can experience really push things around?

Genes + Environment = Intelligence

Researchers William D. Hopkins, Jamie L. Russell, and Jennifer Schaeffer from Georgia State University and the Yerkes National Primate Research Center in Atlanta turned to chimpanzees. They administered the Primate Cognition Test Battery to 99 adult chimpanzees, ranging in age from 9 to 54, for whom they also had genetic data and information on how the individuals were related to one another.

The battery comprises thirteen tasks, which measure spatial cognition, numerical cognition, causality, and social cognition. The tasks are very straightforward. One of the tests for measuring spatial cognition, for example, involves a researcher hiding food in two of three cups. If the chimpanzee has good spatial memory, it should look for food in those two cups, rather than in the third, empty cup.

After combining the chimpanzees' performance on the IQ test with their genetic data, the researchers discovered that fifty percent of the variation in intelligence was due to genetic factors. When it came to specific environmental factors that the researchers considered, neither sex nor rearing history contributed significantly to the chimps' intelligence. That is, whether they were raised by humans or by their mothers did not significantly impact their intelligence once they became adults.

The researchers then broke the chimpanzees' performance down by skillset, and discovered that while all aspects of intelligence had some variation that could be attributed to genetics, variation in spatial and social communication skills was particularly influenced by genes. (Variation in the chimps' understanding of causality reflected fairly little genetic influence.)

Impressively, the researchers managed to re-test 86 of the original 99 chimpanzees after some time had passed. Not only did the overall measure of intelligence heritability hold up the second time, but the individual tasks sorted in a nearly identical pattern, with performance on tasks related to spatial and social cognition dividing into two clusters. Both the structure and heritability of intelligence held up across two different assessments of the same group of chimps. In humans, intelligence is thought to remain fairly stable over time. The same appears to be true for chimpanzees as well.

A Useful Comparison

Hopkins, Russell, and Schaeffer write:

From an evolutionary standpoint, the results reported here suggest that genetic factors play a significant role in determining individual variation in cognitive abilities, particularly for spatial cognition and communication skills. Presumably, these attributes would have conferred advantages to some individuals, perhaps in terms of enhanced foraging skills or increased social skills, leading to increased opportunities for access to food or mating.

That isn't a particularly surprising or novel statement on its own. We already knew that genes have an important job when it comes to intelligence and cognition. But what's useful is that we can assume chimpanzee intelligence isn't influenced by factors like socioeconomic status, the quality of their school districts, or any of the dozens of other variables, both obvious and subtle, that influence human development. That means we can examine the "genetic" side of their intelligence more easily.

With a basic understanding of the proportion of cognition that can be attributed to genetics, researchers now have a place to start if they want to use the evolution of chimpanzee smarts as a comparison point for our own.

[Current Biology]

Header photo: Thomas Lersch/Wikimedia Commons

04 Jul 14:36

Visualizing Algorithms

Pedro

<3

The power of the unaided mind is highly overrated… The real powers come from devising external aids that enhance cognitive abilities. Donald Norman

Algorithms are a fascinating use case for visualization. To visualize an algorithm, we don’t merely fit data to a chart; there is no primary dataset. Instead there are logical rules that describe behavior. This may be why algorithm visualizations are so unusual, as designers experiment with novel forms to better communicate. This is reason enough to study them.

But algorithms are also a reminder that visualization is more than a tool for finding patterns in data. Visualization leverages the human visual system to augment human intellect: we can use it to better understand these important abstract processes, and perhaps other things, too. This is an adaptation of my talk at Eyeo 2014. A video of the talk will be available soon. (Thanks, Eyeo folks!)

#Sampling

Before I can explain the first algorithm, I first need to explain the problem it addresses.

Van Gogh’s The Starry Night

Light — electromagnetic radiation — the light emanating from this screen, traveling through the air, focused by your lens and projected onto the retina — is a continuous signal. To be perceived, we must reduce light to discrete impulses by measuring its intensity and frequency distribution at different points in space.

This reduction process is called sampling, and it is essential to vision. You can think of it as a painter applying discrete strokes of color to form an image (particularly in Pointillism or Divisionism). Sampling is further a core concern of computer graphics; for example, to rasterize a 3D scene by raytracing, we must determine where to shoot rays. Even resizing an image requires sampling.

Sampling is made difficult by competing goals. On the one hand, samples should be evenly distributed so there are no gaps. But we must also avoid repeating, regular patterns, which cause aliasing. This is why you shouldn’t wear a finely-striped shirt on camera: the stripes resonate with the grid of pixels in the camera’s sensor and cause Moiré patterns.

Photo: retinalmicroscopy.com

This micrograph is of the human retina’s periphery. The larger cone cells detect color, while the smaller rod cells improve low-light vision.

The human retina has a beautiful solution to sampling in its placement of photoreceptor cells. The cells cover the retina densely and evenly (with the exception of the blind spot over the optic nerve), and yet the cells’ relative positions are irregular. This is called a Poisson-disc distribution because it maintains a minimum distance between cells, avoiding occlusion and thus wasted photoreceptors.

Unfortunately, creating a Poisson-disc distribution is hard. (More on that in a bit.) So here’s a simple approximation known as Mitchell’s best-candidate algorithm.

Best-candidate

You can see from these dots that best-candidate sampling produces a pleasing random distribution. It’s not without flaws: there are too many samples in some areas (oversampling), and not enough in other areas (undersampling). But it’s reasonably good, and just as important, easy to implement.

Here’s how it works:

Best-candidate

For each new sample, the best-candidate algorithm generates a fixed number of candidates, shown in gray. (Here, that number is 10.) Each candidate is chosen uniformly from the sampling area.

The best candidate, shown in red, is the one that is farthest away from all previous samples, shown in black. The distance from each candidate to the closest sample is shown by the associated line and circle: notice that there are no other samples inside the gray or red circles. After all candidates are created and distances measured, the best candidate becomes the new sample, and the remaining candidates are discarded.

Now here’s the code:

function sample() {
  var bestCandidate, bestDistance = 0;
  for (var i = 0; i < numCandidates; ++i) {
    var c = [Math.random() * width, Math.random() * height],
        d = distance(findClosest(samples, c), c);
    if (d > bestDistance) {
      bestDistance = d;
      bestCandidate = c;
    }
  }
  return bestCandidate;
}

As I explained the algorithm above, I will let the code stand on its own. (And the purpose of this essay is to let you study code through visualization, besides.) But I will clarify a few details:

The external numCandidates defines the number of candidates to create per sample. This parameter lets you trade off speed against quality. The lower the number of candidates, the faster it runs. Conversely, the higher the number of candidates, the better the sampling quality.

The distance function is simple geometry:

function distance(a, b) {
  var dx = a[0] - b[0],
      dy = a[1] - b[1];
  return Math.sqrt(dx * dx + dy * dy);
}
You can omit the sqrt here, if you want, since it’s a monotonic function and doesn’t change the determination of the best candidate.

The findClosest function returns the closest sample to the current candidate. This can be done by brute force, iterating over every existing sample. Or you can accelerate the search, say by using a quadtree. Brute force is simple to implement but very slow (quadratic time, in O-notation). The accelerated approach is much faster, but more work to implement.
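In case it helps to see it spelled out, here is a minimal brute-force findClosest, consistent with how sample() uses it above; this sketch is illustrative, not the essay’s actual implementation. A quadtree-accelerated version would prune most of these comparisons.

function findClosest(samples, candidate) {
  var closest, closestDistance = Infinity;
  for (var i = 0; i < samples.length; ++i) {
    var d = distance(samples[i], candidate);
    if (d < closestDistance) { // keep the nearest existing sample
      closestDistance = d;
      closest = samples[i];
    }
  }
  return closest;
}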

Speaking of trade-offs: when deciding whether to use an algorithm, we evaluate it not in a vacuum but against other approaches. And as a practical matter it is useful to weigh the complexity of implementation — how long it takes to implement, how difficult it is to maintain — against its performance and quality.

The simplest alternative is uniform random sampling:

function sample() {
  return [random() * width, random() * height];
}

It looks like this:

Uniform random

Uniform random is pretty terrible. There is both severe under- and oversampling: many samples are densely packed, even overlapping, while large areas are left empty. (Uniform random sampling also represents the lower bound of quality for the best-candidate algorithm, as when the number of candidates per sample is set to one.)

Dot patterns are one way of showing sample pattern quality, but not the only way. For example, we can attempt to simulate vision under different sampling strategies by coloring an image according to the color of the closest sample. This is, in effect, a Voronoi diagram of the samples where each cell is colored by the associated sample.
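A hedged sketch of that rendering idea, assuming the samples have already been assigned colors from the underlying image (the names here are illustrative, and a real implementation would compute a proper Voronoi diagram rather than brute-forcing every pixel):

function nearestSampleImage(samples, colors, width, height) {
  var pixels = new Array(width * height);
  for (var y = 0; y < height; ++y) {
    for (var x = 0; x < width; ++x) {
      var best = 0, bestDistance = Infinity;
      for (var i = 0; i < samples.length; ++i) {
        var dx = samples[i][0] - x,
            dy = samples[i][1] - y,
            d = dx * dx + dy * dy; // squared distance suffices for comparison
        if (d < bestDistance) bestDistance = d, best = i;
      }
      pixels[y * width + x] = colors[best]; // color of the closest sample
    }
  }
  return pixels;
}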

What does The Starry Night look like through 6,667 uniform random samples?

Uniform random

Hold down the mouse to compare to the original.

The lackluster quality of this approach is again apparent. The cells vary widely in size, as expected from the uneven sample distribution. Detail has been lost because densely-packed samples (small cells) are underutilized. Meanwhile, sparse samples (large cells) introduce noise by exaggerating rare colors, such as the pink star in the bottom-left.

Now observe best-candidate sampling:

Best-candidate

Hold down the mouse to compare to the original.

Much better! Cells are more consistently sized, though still randomly placed. Despite the quantity of samples (6,667) remaining constant, there is substantially more detail and less noise thanks to their even distribution. If you squint, you can almost make out the original brush strokes.

We can use Voronoi diagrams to study sample distributions more directly by coloring each cell according to its area. Darker cells are larger, indicating sparse sampling; lighter cells are smaller, indicating dense sampling. The optimal pattern has nearly-uniform color while retaining irregular sample positions. (A histogram showing cell area distribution would also be nice, but the Voronoi has the advantage that it shows sample position simultaneously.)

Here are the same 6,667 uniform random samples:

Uniform random

The black spots — large gaps between samples — would be localized deficiencies in vision due to undersampling. The same number of best-candidate samples exhibits much less variation in cell area, and thus more consistent coloration:

Best-candidate

Can we do better than best-candidate? Yes! Not only can we produce a better sample distribution with a different algorithm, but this algorithm is faster (linear time). It’s at least as easy to implement as best-candidate. And this algorithm even scales to arbitrary dimensions.

This wonder is called Bridson’s algorithm for Poisson-disc sampling, and it looks like this:

Poisson-disc

This algorithm functions visibly differently than the other two: it builds incrementally from existing samples, rather than scattering new samples randomly throughout the sample area. This gives its progression a quasi-biological appearance, like cells dividing in a petri dish. Notice, too, that no samples are too close to each other; this is the minimum-distance constraint that defines a Poisson-disc distribution, enforced by the algorithm.

Here’s how it works:

Poisson-disc

Red dots represent “active” samples. At each iteration, one is selected randomly from the set of all active samples. Then, some number of candidate samples (shown as hollow black dots) are randomly generated within an annulus surrounding the selected sample. The annulus extends from radius r to 2r, where r is the minimum-allowable distance between samples.

Candidate samples within distance r from an existing sample are rejected; this “exclusion zone” is shown in gray, along with a black line connecting the rejected candidate to the nearby existing sample. A grid accelerates the distance check for each candidate. The grid size r/√2 ensures each cell can contain at most one sample, and only a fixed number of neighboring cells need to be checked.

If a candidate is acceptable, it is added as a new sample, and a new active sample is randomly selected. If none of the candidates are acceptable, the selected active sample is marked as inactive (changing from red to black). When no samples remain active, the algorithm terminates.
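Here is a minimal sketch of the algorithm as just described. The names (k, active, grid) and the choice of k = 30 candidates per active sample are assumptions for illustration, not taken from the essay’s implementation:

function poissonDisc(width, height, r) {
  var k = 30, // candidates per active sample (assumed)
      cellSize = r / Math.SQRT2, // each grid cell holds at most one sample
      gridWidth = Math.ceil(width / cellSize),
      gridHeight = Math.ceil(height / cellSize),
      grid = new Array(gridWidth * gridHeight),
      samples = [],
      active = [];

  place([Math.random() * width, Math.random() * height]);

  while (active.length) {
    var j = Math.random() * active.length | 0, // pick a random active sample
        s = active[j],
        found = false;
    for (var i = 0; i < k && !found; ++i) {
      // generate a candidate in the annulus [r, 2r] around s
      var a = 2 * Math.PI * Math.random(),
          d = r * (1 + Math.random()),
          c = [s[0] + d * Math.cos(a), s[1] + d * Math.sin(a)];
      if (c[0] >= 0 && c[0] < width && c[1] >= 0 && c[1] < height && far(c)) {
        place(c);
        found = true;
      }
    }
    if (!found) active[j] = active[active.length - 1], active.pop(); // mark inactive
  }
  return samples;

  function place(p) {
    grid[(p[1] / cellSize | 0) * gridWidth + (p[0] / cellSize | 0)] = p;
    samples.push(p);
    active.push(p);
  }

  function far(p) { // is there no existing sample within distance r?
    var x = p[0] / cellSize | 0,
        y = p[1] / cellSize | 0;
    for (var gy = Math.max(y - 2, 0); gy < Math.min(y + 3, gridHeight); ++gy) {
      for (var gx = Math.max(x - 2, 0); gx < Math.min(x + 3, gridWidth); ++gx) {
        var q = grid[gy * gridWidth + gx];
        if (q) {
          var dx = q[0] - p[0], dy = q[1] - p[1];
          if (dx * dx + dy * dy < r * r) return false;
        }
      }
    }
    return true;
  }
}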

The area-as-color Voronoi diagram shows Poisson-disc sampling’s improvement over best-candidate, with no dark-blue or light-yellow cells:

Poisson-disc

The Starry Night under Poisson-disc sampling retains the greatest amount of detail and the least noise. It is reminiscent of a beautiful Roman mosaic:

Poisson-disc

Hold down the mouse to compare to the original.

Now that you’ve seen a few examples, let’s briefly consider why to visualize algorithms.

Entertaining - I find watching algorithms endlessly fascinating, even mesmerizing. Particularly so when randomness is involved. And while this may seem a weak justification, don’t underestimate the value of joy! Further, while these visualizations can be engaging even without understanding the underlying algorithm, grasping the importance of the algorithm can give a deeper appreciation.

Teaching - Did you find the code or the animation more helpful? What about pseudocode — that euphemism for code that won’t compile? While formal description has its place in unambiguous documentation, visualization can make intuitive understanding more accessible.

Debugging - Have you ever implemented an algorithm based on formal description? It can be hard! Being able to see what your code is doing can boost productivity. Visualization does not supplant the need for tests, but tests are useful primarily for detecting failure and not explaining it. Visualization can also discover unexpected behavior in your implementation, even when the output looks correct. (See Bret Victor’s Learnable Programming and Inventing on Principle for excellent related work.)

Learning - Even if you just want to learn for yourself, visualization can be a great way to gain deep understanding. Teaching is one of the most effective ways of learning, and implementing a visualization is like teaching yourself. I find it easier to remember an algorithm intuitively, having seen it, than to memorize code where I am bound to forget small but essential details.

#Shuffling

Shuffling is the process of rearranging an array of elements randomly. For example, you might shuffle a deck of cards before dealing a poker game. A good shuffling algorithm is unbiased, where every ordering is equally likely.

The Fisher–Yates shuffle is an optimal shuffling algorithm. Not only is it unbiased, but it runs in linear time, uses constant space, and is easy to implement.

function shuffle(array) {
  var n = array.length, t, i;
  while (n) {
    i = Math.random() * n-- | 0; // 0 ≤ i < n
    t = array[n];
    array[n] = array[i];
    array[i] = t;
  }
  return array;
}

Above is the code, and below is a visual explanation:

For a more detailed explanation of this algorithm, see my post on the Fisher–Yates shuffle.

Each line represents a number. Small numbers lean left and large numbers lean right. (Note that you can shuffle an array of anything — not just numbers — but this visual encoding is useful for showing the order of elements. It is inspired by Robert Sedgwick’s sorting visualizations in Algorithms in C.)

The algorithm splits the array into two parts: the right side of the array (in black) is the shuffled section, while the left side of the array (in gray) contains elements remaining to be shuffled. At each step it picks a random element from the left and moves it to the right, thereby expanding the shuffled section by one. The original order on the left does not need to be preserved, so to make room for the new element in the shuffled section, the algorithm can simply swap the element into place. Eventually all elements are shuffled, and the algorithm terminates.

If Fisher–Yates is a good algorithm, what does a bad algorithm look like? Here’s one:

// DON’T DO THIS!
function shuffle(array) {
  return array.sort(function(a, b) {
    return Math.random() - .5; // ಠ_ಠ
  });
}

This approach uses sorting to shuffle by specifying a random comparator function. A comparator defines the order of elements. It takes arguments a and b — two elements from the array to compare — and returns a value less than zero if a is less than b, a value greater than zero if a is greater than b, or zero if a and b are equal. The comparator is invoked repeatedly during sorting. If you don’t specify a comparator to array.sort, elements are ordered lexicographically.

Here the comparator returns a random number between -.5 and +.5. The assumption is that this defines a random order, so sorting will jumble the elements randomly and perform a good shuffle.

Unfortunately, this assumption is flawed. A random pairwise order (for any two elements) does not establish a random order for a set of elements. A comparator must obey transitivity: if a > b and b > c, then a > c. But the random comparator returns a random value, violating transitivity and causing the behavior of array.sort to be undefined! You might get lucky, or you might not.

How bad is it? We can try to answer this question by visualizing the output:

Another reason this algorithm is bad is that sorting takes O(n lg n) time, making it significantly slower than Fisher–Yates which takes O(n). But speed is less damning than bias.

This may look random, so you might be tempted to conclude that random comparator shuffle is adequate, and dismiss concerns of bias as pedantic. But looks can be misleading! There are many things that appear random to the human eye but are substantially non-random.

This deception demonstrates that visualization is not a magic wand. Showing a single run of the algorithm does not effectively assess the quality of its randomness. We must instead carefully design a visualization that addresses the specific question at hand: what is the algorithm’s bias?

To show bias, we must first define it. One definition is based on the probability that an array element at index i prior to shuffling will be at index j after shuffling. If the algorithm is unbiased, every element has equal probability of ending up at every index, and thus the probability for all i and j is the same: 1/n, where n is the number of elements.

Computing these probabilities analytically is difficult, since it depends on knowing the exact sorting algorithm used. But computing them empirically is easy: we simply shuffle thousands of times and count the number of occurrences of element i at index j. An effective display for this matrix of probabilities is a matrix diagram:

SHUFFLE BIAS
column = index before shuffle
row = index after shuffle
green = positive bias
red = negative bias

The column (horizontal position) of the matrix represents the index of the element prior to shuffling, while the row (vertical position) represents the index of the element after shuffling. Color encodes probability: green cells indicate positive bias, where the element occurs more frequently than we would expect for an unbiased algorithm; likewise red cells indicate negative bias, where it occurs less frequently than expected.
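Before reading the results, here is a hedged sketch of the counting procedure just described; the function name and parameters are illustrative, not the essay’s code:

function shuffleBias(n, trials, shuffle) {
  var matrix = [], i, j, t;
  for (j = 0; j < n; ++j) matrix.push(new Array(n).fill(0));
  for (t = 0; t < trials; ++t) {
    var array = [];
    for (i = 0; i < n; ++i) array.push(i);
    shuffle(array);
    // array[j] holds an element's original index; count it at its new index j
    for (j = 0; j < n; ++j) ++matrix[j][array[j]];
  }
  return matrix; // an unbiased shuffle tends toward trials / n in every cell
}

For example, shuffleBias(60, 10000, shuffle) produces the raw counts behind a matrix like the one above; cells above trials / n show positive bias, cells below it negative bias.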

Random comparator shuffle in Chrome, shown above, is surprisingly mediocre. Parts of the array are only weakly biased. However, it exhibits a strong positive bias below the diagonal, which indicates a tendency to push elements from index i to i+1 or i+2. There is also strange behavior for the first, middle and last row, which might be a consequence of Chrome using median-of-three quicksort.

The unbiased Fisher–Yates algorithm looks like this:

No patterns are visible in this matrix, other than a small amount of noise due to empirical measurement. (That noise could be reduced if desired by taking additional measurements.)

The behavior of random comparator shuffle is heavily dependent on your browser. Different browsers use different sorting algorithms, and different sorting algorithms behave very differently with (broken) random comparators. Here’s random comparator shuffle on Firefox:

For an interactive version of these matrix diagrams to test alternative shuffling strategies, see Will It Shuffle?

This is egregiously biased! The resulting array is often barely shuffled, as shown by the strong green diagonal in this matrix. This does not mean that Chrome’s sort is somehow “better” than Firefox’s; it simply means you should never use random comparator shuffle. Random comparators are fundamentally broken.

#Sorting

Sorting is the inverse of shuffling: it creates order from disorder, rather than vice versa. This makes sorting a harder problem with diverse solutions designed for different trade-offs and constraints.

One of the most well-known sorting algorithms is quicksort.

Quicksort

Quicksort first partitions the array into two parts by picking a pivot. The left part contains all elements less than the pivot, while the right part contains all elements greater than the pivot. After the array is partitioned, quicksort recurses into the left and right parts. When a part contains only a single element, recursion stops.

The partition operation makes a single pass over the active part of the array. Similar to how the Fisher–Yates shuffle incrementally builds the shuffled section by swapping elements, the partition operation builds the lesser (left) and greater (right) parts of the subarray incrementally. As each element is visited, if it is less than the pivot it is swapped into the lesser part; if it is greater than the pivot the partition operation moves on to the next element.

Here’s the code:

function quicksort(array, left, right) {
  if (left < right - 1) {
    var pivot = left + right >> 1;
    pivot = partition(array, left, right, pivot);
    quicksort(array, left, pivot);
    quicksort(array, pivot + 1, right);
  }
}

function partition(array, left, right, pivot) {
  var pivotValue = array[pivot];
  swap(array, pivot, --right);
  for (var i = left; i < right; ++i) {
    if (array[i] < pivotValue) {
      swap(array, i, left++);
    }
  }
  swap(array, left, right);
  return left;
}

There are many variations of quicksort. The one shown above is one of the simplest — and slowest. This variation is useful for teaching, but in practice more elaborate implementations are used for better performance.

A common improvement is “median-of-three” pivot selection, where the median of the first, middle and last elements is used as the pivot. This tends to choose a pivot closer to the true median, resulting in similarly-sized left and right parts and shallower recursion. Another optimization is switching from quicksort to insertion sort for small parts of the array, which can be faster due to the overhead of function calls. A particularly clever variation is Yaroslavskiy’s dual-pivot quicksort, which partitions the array into three parts rather than two. This is the default sorting algorithm in Java and Dart.
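As a rough illustration, median-of-three pivot selection could replace the midpoint pivot in the quicksort above with something like this (a sketch under the same exclusive-right convention, not the essay’s code):

function medianOfThree(array, left, right) {
  var middle = left + right >> 1, last = right - 1;
  if (array[left] < array[middle]) {
    if (array[middle] < array[last]) return middle; // left < middle < last
    return array[left] < array[last] ? last : left;
  } else {
    if (array[left] < array[last]) return left; // middle ≤ left < last
    return array[middle] < array[last] ? last : middle;
  }
}

The pivot line in quicksort would then read var pivot = medianOfThree(array, left, right);.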

The sort and shuffle animations above have the nice property that time is mapped to time: we can simply watch how the algorithm proceeds. But while intuitive, animation can be frustrating to watch, especially if we want to focus on an occasional weirdness in the algorithm’s behavior. Animations also rely heavily on our memory to observe patterns in behavior. While animations are improved by controls to pause and scrub time, static displays that show everything at once can be even more effective. The eye scans faster than the hand.

A simple way of turning an animation into a static display is to pick key frames from the animation and display those sequentially, like a comic strip. If we then remove redundant information across key frames, we use space more efficiently. A denser display may require more study to understand, but is faster to scan since the eye travels less.

Below, each row shows the state of the array prior to recursion. The first row is the initial state of the array, the second row is the array after the first partition operation, the third row is after the first partition’s left and right parts are again partitioned, etc. In effect, this is breadth-first quicksort, where the partition operation on both left and right proceeds in parallel.

Quicksort

As before, the pivots for each partition operation are highlighted in red. Notice that the pivots turn gray at the next level of recursion: after the partition operation completes, the associated pivot is in its final, sorted position. The total depth of the display — the maximum depth of recursion — gives a sense of how efficiently quicksort performed. It depends heavily on input and pivot choice.

Another static display of quicksort, less dense but perhaps easier to read, represents each element as a colored thread and shows each sequential swap. (This form is inspired by Aldo Cortesi’s sorting visualizations.) Smaller values are lighter, and larger values are darker.

At the start of each partition, the pivot is moved to the end (the right) of the active subarray.

Partitioning then proceeds from left to right. At each step, a new element is added either to the set of lesser values (in which case a swap occurs) or to the set of greater values (in which case no swap occurs).

When a swap occurs, the left-most value greater than the pivot is moved to the right, so as to make room on the left for the new lesser value. Thus, notice that in all swap operations, only values darker than the pivot move right, and only values lighter than the pivot move left.

When the partition operation has visited all elements in the array, the pivot is placed in its final position between the two parts. Then, the algorithm recurses into the left part, followed by the right part (far below). This visualization doesn’t show the state of the stack, so it can appear to jump around arbitrarily due to the nature of recursion. Still, you can typically see when a partition operation finishes due to the characteristic movement of the pivot to the end of the active subarray.

Quicksort

You’ve now seen three different visual representations of the same algorithm: an animation, a dense static display, and a sparse static display. Each form has strengths and weaknesses. Animations are fun to watch, but static visualizations allow close inspection without being rushed. Sparse displays may be easier to understand, but dense displays show the “macro” view of the algorithm’s behavior in addition to its details.

Before we move on, let’s contrast quicksort with another well-known sorting algorithm: mergesort.

function mergesort(array) {
  var n = array.length, a0 = array, a1 = new Array(n), t;
  for (var m = 1; m < n; m <<= 1) {
    for (var left = 0; left < n; left += m << 1) {
      var right = Math.min(left + m, n),
          end = Math.min(left + (m << 1), n);
      merge(a0, a1, left, right, end);
    }
    t = a0, a0 = a1, a1 = t; // alternate between the two arrays each pass
  }
  if (a0 !== array) for (var i = 0; i < n; ++i) array[i] = a0[i];
  return array;
}

function merge(a0, a1, left, right, end) {
  for (var i0 = left, i1 = right, j = left; j < end; ++j) {
    a1[j] = i0 >= right ? a0[i1++]
        : i1 >= end || a0[i0] <= a0[i1] ? a0[i0++]
        : a0[i1++];
  }
}

Again, above is the code and below is an animation:

Mergesort

As you’ve likely surmised from either the code or the animation, mergesort takes a very different approach to sorting than quicksort. Unlike quicksort, which operates in-place by performing swaps, mergesort requires an extra copy of the array. This extra space is used to merge sorted subarrays, combining the elements from pairs of subarrays while preserving order. Since mergesort performs copies instead of swaps, we must modify the animation accordingly (or risk misleading readers).

Mergesort works from the bottom-up. Initially, it merges subarrays of size one, since these are trivially sorted. Each adjacent subarray — at first, just a pair of elements — is merged into a sorted subarray of size two using the extra array. Then, each adjacent sorted subarray of size two is merged into a sorted subarray of size four. After each pass over the whole array, mergesort doubles the size of the sorted subarrays: eight, sixteen, and so on. Eventually, this doubling merges the entire array and the algorithm terminates.

Because mergesort performs repeated passes over the array rather than recursing like quicksort, and because each pass doubles the size of sorted subarrays regardless of input, it is easier to design a static display. We simply show the state of the array after each pass.

Mergesort

Let’s again take a moment to consider what we’ve seen. The goal here is to study the behavior of an algorithm rather than a specific dataset. Yet there is still data, necessarily — the data is derived from the execution of the algorithm. And this means we can use the type of derived data to classify algorithm visualizations.

Level 0 / black box - The simplest class just shows the output. This does not explain the algorithm’s operation, but it can still verify correctness. And by treating the algorithm as a black box, you can more easily compare outputs of different algorithms. Black box visualizations can also be combined with deeper analysis of output, such as the shuffle bias matrix diagram shown above.

Level 1 / gray box - Many algorithms (though not all) build up output incrementally. By visualizing the intermediate output as it develops, we start to see how the algorithm works. This explains more without introducing new abstraction, since the intermediate and final output share the same structure. Yet this type of visualization can raise more questions than it answers, since it offers no explanation as to why the algorithm does what it does.

Level 2 / white box - To answer “why” questions, white box visualizations expose the internal state of the algorithm in addition to its intermediate output. This type has the greatest potential to explain, but also the highest burden on the reader, as the meaning and purpose of internal state must be clearly described. There is a risk that the additional complexity will overwhelm the reader; layering information may make the graphic more accessible. Lastly, since internal state is highly-dependent on the specific algorithm, this type of visualization is often unsuitable for comparing algorithms.

There’s also the practical matter of implementing algorithm visualizations. Typically you can’t just run code as-is; you must instrument it to capture state for visualization. (View source on this page for examples.) You may even need to interleave execution with visualization, which is particularly challenging for recursive algorithms that capture state on the stack. Language parsers such as Esprima may facilitate algorithm visualization through code instrumentation, cleanly separating execution code from visualization code.
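As a tiny illustration of what such instrumentation might look like — a sketch, not the technique actually used on this page — the swap primitive called by the quicksort above could record every step for later replay:

var events = [];

function swap(array, i, j) {
  events.push({type: "swap", i: i, j: j}); // record the step for the animation
  var t = array[i];
  array[i] = array[j];
  array[j] = t;
}

A visualization can then replay the events array at its own pace, decoupled from the algorithm’s execution.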

#Maze Generation

The last problem we’ll look at is maze generation. All algorithms in this section generate a spanning tree of a two-dimensional rectangular grid. This means there are no loops and there is a unique path from the root in the bottom-left corner to every other cell in the maze.

I apologize for the esoteric subject — I don’t know enough to say why these algorithms are useful beyond simple games, and possibly something about electrical networks. But even so, they are fascinating from a visualization perspective because they solve the same, highly-constrained problem in wildly-different ways.

And they’re just fun to watch.

Random traversal

The random traversal algorithm initializes the first cell of the maze in the bottom-left corner. The algorithm then tracks all possible ways by which the maze could be extended (shown in red). At each step, one of these possible extensions is picked randomly, and the maze is extended as long as this does not reconnect it with another part of the maze.

Like Bridson’s Poisson-disc sampling algorithm, random traversal maintains a frontier and randomly selects from that frontier to expand. Both algorithms thus appear to grow organically, like a fungus.
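A minimal sketch of random traversal, assuming a width × height grid with cells numbered row by row and passages returned as pairs of cells (the names are illustrative):

function randomTraversal(width, height) {
  var visited = new Array(width * height).fill(false),
      passages = [], // carved edges, as [from, to] cell pairs
      frontier = []; // candidate edges out of the existing maze

  function expand(cell) {
    visited[cell] = true;
    var x = cell % width, y = cell / width | 0;
    if (x > 0) frontier.push([cell, cell - 1]);
    if (x < width - 1) frontier.push([cell, cell + 1]);
    if (y > 0) frontier.push([cell, cell - width]);
    if (y < height - 1) frontier.push([cell, cell + width]);
  }

  expand(0); // start in a corner
  while (frontier.length) {
    var i = Math.random() * frontier.length | 0,
        edge = frontier[i];
    frontier[i] = frontier[frontier.length - 1], frontier.pop(); // remove it
    if (!visited[edge[1]]) { // extend only if it doesn't reconnect the maze
      passages.push(edge);
      expand(edge[1]);
    }
  }
  return passages;
}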

Randomized depth-first traversal follows a very different pattern:

Randomized depth-first traversal

Rather than picking a new random passage each time, this algorithm always extends the deepest passage — the one with the longest path back to the root — in a random direction. Thus, randomized depth-first traversal only branches when the current path dead-ends into an earlier part of the maze. To continue, it backtracks until it can start a new branch. This snake-like exploration leads to mazes with significantly fewer branches and much longer, winding passages.
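A matching sketch of randomized depth-first traversal, under the same assumed grid representation; the only structural change is that the deepest cell is always extended first, via an explicit stack:

function randomizedDepthFirst(width, height) {
  var visited = new Array(width * height).fill(false),
      passages = [],
      stack = [0]; // the path back to the root
  visited[0] = true;
  while (stack.length) {
    var cell = stack[stack.length - 1], // always extend the deepest cell
        x = cell % width, y = cell / width | 0,
        neighbors = [];
    if (x > 0 && !visited[cell - 1]) neighbors.push(cell - 1);
    if (x < width - 1 && !visited[cell + 1]) neighbors.push(cell + 1);
    if (y > 0 && !visited[cell - width]) neighbors.push(cell - width);
    if (y < height - 1 && !visited[cell + width]) neighbors.push(cell + width);
    if (neighbors.length) {
      var next = neighbors[Math.random() * neighbors.length | 0];
      visited[next] = true;
      passages.push([cell, next]);
      stack.push(next);
    } else {
      stack.pop(); // dead end: backtrack
    }
  }
  return passages;
}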

Prim’s algorithm constructs a minimum spanning tree: a spanning tree of a weighted graph that has the lowest total edge weight. This algorithm can be used to construct a random spanning tree by initializing edge weights randomly:

Randomized Prim’s

At each step, Prim’s algorithm extends the maze using the lowest-weighted edge (potential direction) connected to the existing maze. If this edge would form a loop, it is discarded and the next-lowest-weighted edge is considered.

Prim’s algorithm is commonly implemented using a heap, which is an efficient data structure for prioritizing elements. When a new cell is added to the maze, connected edges (shown in red) are added to the heap. Despite edges being added in arbitrary order, the heap allows the lowest-weighted edge to be quickly removed.
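A sketch of the idea, again under the assumed grid representation. For brevity, a linear scan stands in for the heap described above, and weights are assigned as edges enter the frontier:

function randomizedPrims(width, height) {
  var visited = new Array(width * height).fill(false),
      passages = [],
      frontier = []; // weighted candidate edges

  function expand(cell) {
    visited[cell] = true;
    var x = cell % width, y = cell / width | 0;
    if (x > 0) frontier.push({weight: Math.random(), from: cell, to: cell - 1});
    if (x < width - 1) frontier.push({weight: Math.random(), from: cell, to: cell + 1});
    if (y > 0) frontier.push({weight: Math.random(), from: cell, to: cell - width});
    if (y < height - 1) frontier.push({weight: Math.random(), from: cell, to: cell + width});
  }

  expand(0);
  while (frontier.length) {
    var best = 0; // linear scan; a heap would find the minimum in O(log n)
    for (var i = 1; i < frontier.length; ++i) {
      if (frontier[i].weight < frontier[best].weight) best = i;
    }
    var edge = frontier[best];
    frontier[best] = frontier[frontier.length - 1], frontier.pop();
    if (!visited[edge.to]) { // discard edges that would form a loop
      passages.push([edge.from, edge.to]);
      expand(edge.to);
    }
  }
  return passages;
}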

Lastly, a most unusual specimen:

Wilson’s

Wilson’s algorithm uses loop-erased random walks to generate a uniform spanning tree — an unbiased sample of all possible spanning trees. The other maze generation algorithms we have seen lack this beautiful mathematical property.

The algorithm initializes the maze with an arbitrary starting cell. Then, a new cell is added to the maze, initiating a random walk (shown in red). The random walk continues until it reconnects with the existing maze (shown in white). However, if the random walk intersects itself, the resulting loop is erased before the random walk continues.

Initially, the algorithm can be frustratingly slow to watch, as early random walks are unlikely to reconnect with the small existing maze. As the maze grows, random walks become more likely to collide with the maze and the algorithm accelerates dramatically.
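Finally, a sketch of Wilson’s algorithm on the same assumed grid; the index object tracks each walk cell’s position so that loops can be erased:

function wilsons(width, height) {
  var n = width * height,
      inMaze = new Array(n).fill(false),
      passages = [];
  inMaze[0] = true; // arbitrary starting cell

  function randomNeighbor(cell) {
    var x = cell % width, y = cell / width | 0, options = [];
    if (x > 0) options.push(cell - 1);
    if (x < width - 1) options.push(cell + 1);
    if (y > 0) options.push(cell - width);
    if (y < height - 1) options.push(cell + width);
    return options[Math.random() * options.length | 0];
  }

  for (var start = 0; start < n; ++start) {
    if (inMaze[start]) continue;
    var walk = [start], index = {}, cell = start;
    index[start] = 0;
    while (!inMaze[cell]) { // walk until it hits the existing maze
      var next = randomNeighbor(cell);
      if (index[next] !== undefined) {
        // the walk intersected itself: erase the loop
        while (walk.length > index[next] + 1) delete index[walk.pop()];
      } else {
        index[next] = walk.length;
        walk.push(next);
      }
      cell = walk[walk.length - 1];
    }
    for (var i = 0; i < walk.length; ++i) { // commit the loop-erased walk
      inMaze[walk[i]] = true;
      if (i) passages.push([walk[i - 1], walk[i]]);
    }
  }
  return passages;
}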

These four maze generation algorithms work very differently. And yet, when the animations end, the resulting mazes are difficult to distinguish from each other. The animations are useful for showing how the algorithm works, but fail to reveal the resulting tree structure.

A way to show structure, rather than process, is to flood the maze with color:

Random traversal

Color encodes tree depth — the length of the path back to the root in the bottom-left corner. The color scale cycles as you get deeper into the tree; this is occasionally misleading when a deep path circles back adjacent to a shallow one, but the higher contrast allows better differentiation of local structure. (This is not a conventional rainbow color scale, which is nominally considered harmful, but a cubehelix rainbow with improved perceptual properties.)

We can further emphasize the structure of the maze by subtracting the walls, reducing visual noise. Below, each pixel represents a path through the maze. As above, paths are colored by depth and color floods deeper into the maze over time.

Random traversal

Concentric circles of color, like a tie-dye shirt, reveal that random traversal produces many branching paths. Yet the shape of each path is not particularly interesting, as it tends to go in a straight line back to the root. Because random traversal extends the maze by picking randomly from the frontier, paths are never given much freedom to meander — they end up colliding with the growing frontier and terminate due to the restriction on loops.

Randomized depth-first traversal, on the other hand, is all about the meander:

Randomized depth-first traversal

This animation proceeds at fifty times the speed of the previous one. This speed-up is necessary because randomized depth-first traversal mazes are much, much deeper than random traversal mazes due to limited branching. You can see that typically there is only one, and rarely more than a few, active branches at any particular depth.

Now Prim’s algorithm on a random graph:

Randomized Prim’s

This is more interesting! The simultaneously-expanding florets of color reveal substantial branching, and there is more complex global structure than random traversal.

Wilson’s algorithm, despite operating very differently, seems to produce very similar results:

Wilson’s

Just because they look the same does not mean they are. Despite appearances, Prim’s algorithm on a randomly-weighted graph does not produce a uniform spanning tree (as far as I know — proving this is outside my area of expertise). Visualization can sometimes mislead due to human error. An earlier version of the Prim’s color flood had a bug where the color scale rotated twice as fast as intended; this suggested that Prim’s and Wilson’s algorithms produced very different trees, when in fact they appear much more similar than different.

Since these mazes are spanning trees, we can also use specialized tree visualizations to show structure. To illustrate the duality between maze and tree, here the passages (shown in white) of a maze generated by Wilson’s algorithm are gradually transformed into a tidy tree layout. As with the other animations, it proceeds by depth, starting with the root and descending to the leaves:

Wilson’s

For comparison, again we see how randomized depth-first traversal produces trees with long passages and little branching.

Randomized depth-first traversal

Both trees have the same number of nodes (3,239) and are scaled to fit in the same area (960×500 pixels). This hides an important difference: at this size, randomized depth-first traversal typically produces a tree two-to-five times deeper than Wilson’s algorithm. The tree depths above are _ and _, respectively. In the larger 480,000-node mazes used for color flooding, randomized depth-first traversal produces a tree that is 10-20 times deeper!

#Using Vision to Think

This essay has focused on algorithms. Yet the techniques discussed here apply to a broader space of problems: mathematical formulas, dynamical systems, processes, etc. Basically, anywhere there is code that needs understanding.

Shan Carter, Archie Tse and I recently built a new rent vs. buy calculator; powering the calculator is a couple hundred lines of code to compute the total cost of renting or buying a home. It’s a simplistic model, but more complicated than fits in your head. The calculator takes about twenty input parameters (such as purchase price and mortgage rate) and considers opportunity costs on investments, inflation, marginal tax rates, and a variety of other factors.

The goal of the calculator is to help you decide whether you should buy or rent a home. If the total cost of buying is cheaper, you should buy. Otherwise, you should rent.

Except, it’s not that simple.

To output an accurate answer, the calculator needs accurate inputs. While some inputs are well-known (such as the length of your mortgage), others are difficult or impossible to predict. No one can say exactly how the stock market will perform, how much a specific home will appreciate or depreciate, or how the renting market will change over time.
Is It Better to Buy or Rent?

We can make educated guesses at each variable — for example, looking at Case–Shiller data. But if the calculator is a black box, then readers can’t see how sensitive their answer is to small changes.

To fix this, we need to do more than output a single number. We need to show how the underlying system works. The new calculator therefore charts every variable and lets you quickly explore any variable’s effect by adjusting the associated slider.

The slope of the chart shows the associated variable’s importance: the greater the slope, the more the decision depends on that variable. Since variables interact with each other, changing a variable may change the slope of other charts.

This design lets you inspect many aspects of the system. For example, should you make a large down payment? Yes, if the down payment rate slopes down; or no, if the down payment rate slopes up, as with a higher investment return rate. This suggests that the optimal loan size depends on the difference between the opportunity cost on the down payment (money not invested) and the interest cost on the mortgage.

So, why visualize algorithms? Why visualize anything? To leverage the human visual system to improve understanding. Or more simply, to use vision to think.

#Related Work

I mentioned Aldo Cortesi’s sorting visualizations earlier. (I also like Cortesi’s visualizations of malware entropy.) Others abound, including: sorting.at, sorting-algorithms.com, and Aaron Dufour’s Sorting Visualizer, which lets you plug in your own algorithm. YouTube user andrut’s audibilizations are interesting. Robert Sedgwick has published several new editions of Algorithms since I took his class, and his latest uses traditional bars rather than angled lines.

Amit Patel explores “visual and interactive ways of explaining math and computer algorithms.” The articles on 2D visibility, polygonal map generation and pathfinding are particularly great. Nicky Case published another lovely explanation of 2D visibility and shadow effects. I am heavily-indebted to Jamis Buck for his curation of maze generation algorithms. Christopher Wellons’ GPU-based path finding implementation uses cellular automata — another fascinating subject. David Mimno gave a talk on visualization for models and algorithms at OpenVis 2014 that was an inspiration for this work. And like many, I have long been inspired by Bret Victor, especially Inventing on Principle and Up and Down the Ladder of Abstraction.

Jason Davies has made numerous illustrations of mathematical concepts and algorithms. Some of my favorites are: Lloyd’s Relaxation, Coalescing Soap Bubbles, Biham-Middleton-Levine Traffic Model, Collatz Graph, Random Points on a Sphere, Bloom Filters, Animated Bézier Curves, Animated Trigonometry, Proof of Pythagoras’ Theorem, and Morley’s Trisector Theorem. Pierre Guilleminot’s Fourier series explanation is great, as are Lucas V. Barbosa’s Fourier transform time and frequency domains and an explanation of Simpson’s paradox by Lewis Lehe & Victor Powell; also see Powell’s animations of the central limit theorem and conditional probabilities. Steven Wittens makes mind-expanding visualizations of mathematical concepts in three dimensions, such as Julia fractals.

In my own work, I’ve used visualization to explain topology inference (including a visual debugger), D3’s selections and the Fisher–Yates shuffle. There are more standalone visualizations on my bl.ocks. If you have suggestions for interesting visualizations, or any other feedback, please contact me on Twitter.

Thank you for reading! June 26, 2014 Mike Bostock

03 Jul 12:09

The Great Secession

Pedro

From TheOldReader...

A few months ago, an odd news story out of St. Louis caught my eye. A Christian-owned dog-walking business had fired, so to speak, a customer who supported legalizing marijuana. “We simply said it was against the idea of being clean and sober-minded and treating your body as a temple to the Holy Spirit,” one of the service’s owners told The Huffington Post.

The service, Pack Leader, Plus (motto: “Faith. Family. Dogs.”), is not alone in its determination to shut its doors to un-Christian custom. Religious business owners have declined to provide services for gay weddings and commitment ceremonies and refused to offer insurance that covers certain kinds of contraception (as in the Hobby Lobby case that came through the Supreme Court this term). Mississippi passed legislation in April allowing businesses to claim a religious defense if sued for discrimination; Arizona almost passed such a law (after intense debate, the governor vetoed it); similar measures are in the offing elsewhere. The apparent aim of these bills is to let people like caterers, bakers, photographers, and florists decline to provide services for gay weddings or gay-pride events. But the laws are written broadly and could be used to defend discrimination of many sorts. “We’re trying to protect Missourians from attacks on their religious freedom,” the sponsor of one such bill told The Kansas City Star.

I am someone who believes that religious liberty is the country’s founding freedom, the idea that made America possible. I am also a homosexual atheist, so religious conservatives may not want my advice. I’ll give it to them anyway. Culturally conservative Christians are taking a pronounced turn toward social secession: asserting both the right and the intent to sequester themselves from secular culture and norms, including the norm of nondiscrimination. This is not a good idea. When religion isolates itself from secular society, both sides lose, but religion loses more.

Over the decades, religious traditionalists’ engagement with American secular life has waxed and waned. After the public-relations disaster of the Scopes evolution trial in the 1920s, many conservative Christians recoiled from politics, only to come out swinging in the 1970s, when the Moral Majority and other elements of what came to be called the religious right burst onto the scene. If you believe in cultural cycles, perhaps we’re due for another withdrawal. Certainly, the breakthrough of gay marriage has fed disillusionment and bewilderment. “I suspect the initial reaction among evangelicals is going to be retreat and hope to be left alone,” Maggie Gallagher, a prominent gay-marriage opponent, recently told The Huffington Post.

As far as I know, it never occurred to Catholic bakers to tell remarrying customers to take their business elsewhere.

Still, the desire to be left alone takes on a pretty aggressive cast when it involves slamming the door of a commercial enterprise on people you don’t approve of. The idea that serving as a vendor for, say, a gay commitment ceremony is tantamount to “endorsing” homosexuality, as the new religious-liberty advocates now assert, is a far-reaching proposition, one with few apparent outer boundaries in a densely interwoven mercantile society. It suggests a hair-trigger defensiveness about religious identity that would have seemed odd just a few years ago. As far as I know, during the divorce revolution it never occurred to, say, Catholic bakers to tell remarrying customers, “Your so-called second marriage is a lie, so take your business elsewhere.” That would have seemed not so much principled as bizarre.

Why the hunkering down? When I asked around recently, a few answers came back. One is the fear that traditional religious views, especially about marriage, will soon be condemned as no better than racism, and that religious dissenters will be driven from respectable society, denied government contracts, and passed over for jobs—a fear heightened by well-publicized stories like the recent one about the resignation of Mozilla’s CEO, who had donated to the campaign against gay marriage in California. After a talk I gave recently in Philadelphia on free speech, a woman approached me claiming that the school system where she works harasses and fires anyone who questions gay marriage. I wanted to point out that in most states it’s perfectly legal to fire people just for being gay, whereas Christians enjoy robust federal and state antidiscrimination protections, but the look in her eyes was too fearful for convincing. Perhaps it is natural for worried people to daydream about some kind of escape. One Christian acquaintance told me, “I say half jokingly to my wife, ‘Where do we move?’ ”

Gay-rights supporters kiss after learning that Arizona Governor Jan Brewer has vetoed a bill designed to protect businesses that refuse to serve gay customers. (Ross D. Franklin/AP)

A second factor is the failed promise of what seemed, around the turn of the millennium, to be a grand new partnership between our elected and religious leaders. John DiIulio, a University of Pennsylvania political scientist, remembers that time vividly: He was the founding director of the White House Office of Faith-Based and Community Initiatives, under President George W. Bush. In 1999, he recalls, Vice President Al Gore and then–Texas Governor Bush had thrown their support behind a dramatic expansion of government’s collaboration with faith-based groups, in an effort to ameliorate social problems like poverty, hunger, and family breakdown; a new secular-religious entente seemed aborning. But trust eroded, DiIulio says, and then collapsed as factions on both sides, especially the right, drew red lines, set conditions, and lawyered up. Now it’s the “war on religion” versus the “war on women,” and court dockets are full of religious-liberty cases. (Hobby Lobby is just one in a series.) “The lines have hardened so much,” DiIulio says.

Finally, a new generation brought changed attitudes. Ed Whelan, the president of the culturally conservative Ethics and Public Policy Center in Washington, D.C., and a Catholic, told me, “Those of us growing up in the 1960s and 1970s grew up with an assimilationist ethic: there was assumed to be little or no tension in being a Catholic in the broader American culture. Today, those of us who are parents see conflict all over the place. And we strive to be Catholics throughout our lives. As the culture has become less hospitable to religious beliefs, there is a greater need to be more vigilant. We’ve got to figure out where to draw the lines.”

So a lot of line-drawing is going on. Even dog-walkers are drawing lines.

I must sadly acknowledge that there is an absolutist streak among some secular civil-rights advocates. They think, justifiably, that discrimination is wrong and should not be tolerated, but they are too quick to overlook the unique role religion plays in American life and the unique protections it enjoys under the First Amendment. As a matter of both political wisdom and constitutional doctrine, the faithful have every right to seek reasonable accommodations for religious conscience.

The problem is that what the social secessionists are asking for does not seem all that reasonable, especially to young Americans. When Christian businesses boycott gay weddings and pride celebrations, and when they lobby and sue for the right to do so, they may think they are sending the message “Just leave us alone.” But the message that mainstream Americans, especially young Americans, receive is very different. They hear: “What we, the faithful, really want is to discriminate. Against gays. Maybe against you or people you hold dear. Heck, against your dog.”

I wonder whether religious advocates of these opt-outs have thought through the implications. Associating Christianity with a desire—no, a determination—to discriminate puts the faithful in open conflict with the value that young Americans hold most sacred. They might as well write off the next two or three or 10 generations, among whom nondiscrimination is the 11th commandment.

There is, of course, a very different Christian tradition: a missionary tradition of engagement and education, of resolutely and even cheerfully going out into an often uncomprehending world, rather than staying home with the shutters closed. In this alternative tradition, a Christian photographer might see a same-sex wedding as an opportunity to engage and interact: a chance, perhaps, to explain why the service will be provided, but with a moral caveat or a prayer. Not every gay customer would welcome such a conversation, but it sure beats having the door slammed in your face.

This much I can guarantee: the First Church of Discrimination will find few adherents in 21st-century America. Polls find that, year by year, Americans are growing more secular. The trend is particularly pronounced among the young, many of whom have come to equate religion with intolerance. Social secession will only exacerbate that trend. It is a step toward precisely the future that brought such fear to the eyes of that woman in Philadelphia. For religious traditionalists, it is a step toward isolation and opprobrium—a step bad for society, but even worse for religion. So please, you people in St. Louis: walk those dogs, for God’s sake.

01 Jul 02:13

What’s Your Local Food Culture?

by Gracy Olmstead

Simon Preston noticed that most areas of Britain don’t have a vibrant food culture. Besides obvious place-tied dishes—things like Cornish pasties—few dishes had a distinctive regional trademark. In an article at the Guardian, Preston writes that many Brits have developed a rather globally encompassing attitude toward food:

We’re a population that grazes dishes from across the world and, for the most part, we feel no more connected to a local dish than we do to a curry. When travelling abroad, we’re quite taken with the regional dishes that appear again and again, but closer to home, local food culture is still a fairly new idea, mostly driven by the trend-led efforts of creative chefs and encouraged by food hobbyists.

Eating international cuisine isn’t a problem—but, as Preston points out, there are benefits to having a local food culture, as well. So he asks this interesting question: is it possible to invent a food culture in the 21st century? He decided to try and create one in the rural Aberdeenshire town of Huntly:

I set up a dining table and chairs in the supermarket and used tea and cake to entice shoppers to join me. Chattering families, reminiscing pensioners and bemused workers who had raced in for a ready meal shared their stories: how they came to Huntly, why they stayed, places they had loved and lost, ghost stories and tall tales.
… A huddle of local chefs gathered and soon, my dossier of local anecdotes became dishes. The ancient standing stones in the town square were represented in the positioning of prize-winning local haggis bonbons on a plate. Barley appeared in a risotto, which in turn referenced the Italian connection found in so many Scottish towns. A schoolgirl’s tale of a JK Rowling manuscript locked in the local police station safe inspired a Huntly Mess, made with local raspberries and whisky, and the Deveron river – a place where the town goes to play, to think, to celebrate and to court – brought local trout to one dish and a river bend slick of sauce to another.

The dishes began to catch on as local restaurants and pubs served them. Customers were delighted to see their stories and memories take gastronomic form. The food culture can, it seems, be invented from scratch.

It’s an interesting idea, especially for the many Americans who have lost the culinary cultures of their past, due to the burgeoning influences of other cultures and food chains in their homelands. Excepting certain cities with distinctive gastronomic traditions, like New York City or Philadelphia, many American towns don’t have dishes to call their own. But as Preston points out, it’s never too late to begin examining local ingredients again: our states, counties, and cities offer us a wealth of history, terrain, crops, and animals with which to build a local food culture.

In the Idaho town where I grew up, corn and onion fields had a distinctive presence. Farmers grew a lot of alfalfa and mint, and there were orchards scattered here and there. We got fresh goat’s milk from one farmer, and fresh beef from another. My brothers raised chickens. There’s a local coffee roaster in the town beside us. There are a couple nearby lakes for fishing, and the Salmon River isn’t far off. It’s only a couple hour drive into the mountains, if you want fresh huckleberries.

There are also some incredible recipes, handed down over time, jotted on note cards in spidery script, that I would add to my local food culture: my grandmother’s baked beans, my aunt’s “mile-high biscuits,” grandpa’s barbecued chicken, my great-grandmother’s brown bread, and her much-coveted recipe for peach pie.

Many chefs here throughout the Northern Virginia and Washington, D.C. area love to use local produce—and they have ample resources to work with. Farms in NOVA and Maryland feature high-quality and fantastic tasting produce. With these resources, it’s entirely possible to create and curate a local flavor, to showcase the parts of your culture that are distinctively local.

Of course, this isn’t meant to demean the rich international traditions that influence our various cities—in Idaho’s capital, Boise, there’s an entire Basque district, with its own distinctive (and incredible) food culture. Outside D.C., in Annandale, Virginia, there’s a significant Korean immigrant population, and the restaurants there are fantastic. New York City’s immigrants are part of what gives the city such incredible food. The point isn’t that imported ingredients and recipes are bad—to the contrary, they help form a vibrant local food culture. Without them, our regions wouldn’t have as much culinary color and vibrancy.

But foods that are chosen from local ingredients also have a distinctive story. Whether invented or preserved, local foods help define our places and give them flavor. That’s why Preston invented a food culture, and why I have the beginnings of mine.

What’s your local food culture?

30 Jun 12:54

Congo: A Group of Chimpanzees Seem to Have Mastered Fire

Pedro

I hope this is not a publicity stunt for the "Planet of the Apes" movie sequels.

bonobo

Ubundu — A group of bonobo apes living in the Salonga National Park may have mastered the basic practice of creating and using fire. This particular group of almost three hundred specimens of this rare and extremely intelligent species of great ape has been under close surveillance by a team of primatologists for the last three years, and seems to have recently developed a primitive fire-building technique using rocks and twigs.

The bonobo, formerly called the pygmy chimpanzee, is an omnivorous great ape found in a 500,000 km² area of the Congo Basin in the Democratic Republic of the Congo. It is most popularly known for its high levels of sexual behavior and its use of almost a dozen different primitive tools. Its level of intelligence is already considered almost unique among apes, topped only by humans. Two bonobos at the Great Ape Trust in Iowa, Kanzi and Panbanisha, have even been taught to communicate using a keyboard labeled with lexigrams (geometric symbols), and they can respond to spoken sentences. Kanzi’s vocabulary consists of more than 500 English words, and he has comprehension of around 3,000 spoken English words.

It is, however, the first time that a group of these primates has developed technical concepts this elaborate on its own. A few individual apes seem to have originally developed a rudimentary technique of rather poor efficiency, but the group gradually improved it through experimentation and observation over the last few months. They are now able to create and maintain a fire, which they have been using mostly to scare off predators and cook some of their food. Some individuals in the group seem to have rapidly developed a taste for cooked foodstuffs, especially flying squirrels. This has also enabled the group to grow to a population much larger than any previously encountered in the species, by bringing increased security and by diversifying food sources.

This astonishing development has primatologists, as well as many other scientists around the world, genuinely excited. It could be a unique opportunity to study the evolution of a species at a crucial moment in its history, and could yield a great deal of information about the early development of humankind. The Congolese rural population, on the other hand, has a very different perception of the situation: "torch-bearing apes" have been accused of setting fire to more than 1,500 km² of forest since the beginning of the year, causing the deaths of three people.

30 Jun 12:20

Why Brazil Is Actually Winning The Internet

BuzzFeed

In 2004, the same year Facebook launched at Harvard, Google launched a social network called Orkut that changed internet history — at least in Brazil.

John Perry Barlow, founder of the Electronic Frontier Foundation, was one of the first of the web’s elite digerati to receive an invitation. Barlow was working at the time with Brazil’s minister of culture, musician Gilberto Gil, to expand the range of Brazilian music available to remix and share online, and he decided to give all 100 of his invites to Brazilian friends. Two years later, 11 million Brazilians were on Orkut — out of only 14 million internet users in the whole country. (By comparison, the U.S. had more than 10 times as many people online by then, but only 14% of them were using social networks.)

“There were blogs and portals back then,” says Bia Granja, co-founder of YouPix, a website and festival dedicated to celebrating Brazilian web culture. “But when Orkut came, it pulled everyone in. There were people from rural parts of Brazil who didn’t have an official government ID card but had an Orkut profile. We needed this form of expression; it was the door of entry to the internet for 82% of Brazil’s population.”

Orkut in 2004. Orkut.com

Ten years later, with their country more visible internationally than ever thanks to successful but polarizing World Cup and Olympics bids, Brazilians are arguably the most hyper-social people on the internet. They spend twice as much time using social media as the global average, and more time online than watching TV. Last year, they doubled the time they spent on Facebook, while global usage declined by 2%.

Brazil is the fourth-largest mobile phone market in the world, with 1.4 cell phones for every citizen, and Brazilians spend more time on social media than on email, web browsers, or video sharing. Half of Brazil’s internet population is under 30, and almost all of them use social media. Brazil is now the second-largest market for Facebook, Twitter, and Tumblr, after the U.S. It took Facebook seven years to take the No. 1 spot from Orkut, which is still the social network of choice for 6 million Brazilians, or 1 in 20 Brazilians online. Last year, the Wall Street Journal declared Brazil the “Social Media Capital of the Universe.”

Brazil was culturally primed for such an online impact because, very generally speaking, Brazilians are an extraordinarily warm and friendly people. They are family-focused and social and like to do things en masse. Millions of Brazilians come to the beaches on New Year’s to offer white flowers to the sea goddess Iemanja; millions gather outside to celebrate Carnaval each year; and recently millions have taken to the streets protesting billions of dollars of taxpayer money that have funded World Cup and Olympic infrastructure projects while millions still live in extreme poverty.

“Brazil has always been the social model of the future,” says Barlow. “Everything refers to something else that you wouldn’t know anything about if your aunt hadn’t told your mother something a couple of years ago about something her lover heard. Brazil is an enormous inside joke, and the internet is a mass conversation. Brazil was the internet before the internet existed.”

So then, what’s the cumulative effect of these billions of online social interactions in real life?


Mauricio Cid was one of the most popular Brazilians on Orkut until he got kicked off the site in 2008. Cid was publishing mostly humor content to more than a thousand Orkut fan communities reaching 5 million Brazilians — 20% of Brazil’s internet population at the time — making him one of Brazil’s first bloggers, albeit one working entirely inside Orkut’s walls.

By 2008, Orkut had become one of the 10 largest websites on the planet, but Google wasn’t paying much attention to what was happening among Orkut’s mostly Brazilian users. Then Google tried to run ads on Orkut (alongside user-generated content), and reports quickly surfaced of those ads being displayed next to pictures of naked children and abused animals. The government filed contempt charges against Google Brazil’s executives for refusing to turn over user data to the police. Globo, the largest media company in Latin America, with a near-monopoly on TV, radio, and print — but not online — decided to air a TV news report on alleged criminal activity inside Orkut, and included Cid’s Michael Jackson fan page on its list of suspicious communities.

“Most of the groups on the list were neo-Nazis and things like that,” Cid told me on a Skype call from São Paulo. “It made no sense. There was nothing criminal about my Michael Jackson community, but I was banned. So I decided to create a humor blog called Não Salvo [“Not Saved”] in 2008 so I could publish whatever I wanted, including a little ass and titties, without anyone censoring me.”

At the time, Cid was living in Santos, a beachside city two hours outside São Paulo, working odd jobs at the morgue and fixing printers. “The morgue was horrible,” Cid says. “I would have taken pictures, but that was before cell phone cameras.” Then he got a job in São Paulo and started spending his four-hour bus commute publishing content to Não Salvo from his phone. Today Cid is one of Brazil’s biggest web celebrities. “It’s a testament to how much we love to share, even with backward technology.”

Não Salvo’s site still looks like it was designed in the AOL era, with a Jesus marquee and a flaming computer mouse floating over clouds of digital detritus, despite drawing 27 million visitors a month. Visitors don’t just read the site; they interact with it and participate in creating its content. Better, then, to call them participants — or fieis (“the faithful”), in Não Salvo lingo.

On a forum called Desafio Aceito (“Challenge Accepted”), the fieis mobilize by the thousands and sometimes millions for challenges like crowdsourcing a porn screenplay and plotting practical jokes on gringos. In 2010, during the South Africa World Cup, they decided one of Globo’s broadcasters, Galvão, was annoying, so they launched a campaign called #CalaBocaGalvao — Portuguese for “Shut up, Galvão.”

A poster for the ‘Cala Boca Galvão’ hoax.

“We created a video with a narrator in English and everything, telling gringos that ‘Cala Boca Galvão’ meant to preserve an endangered species of birds in the Amazon,” Cid explains. The video showed how demand for feathers for Carnaval costumes was fueling a black market of bird trafficking and wiping out the endangered species, and urged viewers to save a bird’s life with a tweet. #CalaBocaGalvao became a global top trending topic for 14 days and made it into the New York Times.

Não Salvo is also not afraid to get into darker subject matter, though mostly with a humorous, prankster edge: posting the crappiest banners the fieis have spotted at the protests that have engulfed Brazil since last June, alongside images of fans with tacky face paint and protest paraphernalia. Another Não Salvo thread, called Peço Perdão Pelo Vacilo (“forgive me my trespasses”), gathers videos Brazilian police put on YouTube in which they brutally humiliate everyday citizens by forcing them to read statements asking forgiveness for minor offenses, like taking a selfie on top of a police vehicle.

If the internet refracts our own culture back to us, I asked Cid, what does he see in the reflection of 27 million humor fans on Não Salvo? “The web is opening a space not just to show we have a voice online, but to show that we can unite, take down the government, express our opinions, and come together.”

Photograph by Julie Ruvolo

Freedom of expression is a relatively new phenomenon for the current generation of Brazilians. Brazil was run by a military dictatorship from 1964 to 1985; speech was repressed and regime opponents were prosecuted and tortured. Brazil’s current president, Dilma Rousseff, was herself tortured by the military in 1970 for working with guerrilla groups opposing the dictatorship. Since Brazil returned to democratic rule in 1988, poverty has been halved, and a combination of economic growth and socialist policies has lifted 28 million Brazilians out of extreme poverty and moved another 36 million into the middle class. But Brazil still ranks among the most unequal countries on the planet, and the richest 1% of the population takes in more household income than the poorest 50%.

This inequality affects Brazil’s internet culture. Internet access is now ubiquitous among the richest Brazilians, but only 1 in 3 households in the new middle class has access, and the figure drops to 6% among Brazil’s poorest citizens. In the face of such tenacious inequality, young Brazilians are watching their government pour $25 billion in taxpayer money into stadiums and infrastructure projects for this summer’s World Cup and the 2016 Olympics. They are the most connected generation, and also the only generation currently alive that hasn’t experienced the repression of living under the dictatorship.

“When the dictatorship ended in 1985, we won the right to speak, but we didn’t win the right to be heard,” says Leonardo Eloi, a project director at Meu Rio (“My Rio”), a social mobilization platform that helps Brazilian youth in Rio de Janeiro organize around local issues they care about. “There’s a big difference between the two. So we’re creating a culture now for the government to hear its citizens.”

Meu Rio co-founders Miguel Lago and Alessandra Orofino. Renato Stocklet / NA LATA

Alessandra Orofino and Miguel Lago launched Meu Rio in 2011 with the mission of making their city more inclusive. “Citizens need to organize themselves as intelligently, use technology as ubiquitously, and share knowledge as efficiently as the public and private institutions, in order to really participate in the definition of public policy at the city level,” Orofino wrote in an op-ed for Huffington Post last year.

In Orofino’s opinion, petitions aren’t the most useful tool for social change on a city level. “Especially when you’re talking about local politics, you can’t rely on sheer aggregation of numbers as a tool for pressure,” she explains. “Many campaigns initiated by our members are talking about a specific street or public square or hospital or school that only concerns a relatively small number of citizens. Who are you going to deliver a couple of thousand signatures to?”

Meu Rio has focused on designing tools such as Multitude, which lets campaign organizers connect with volunteers for tasks like creating a sign, taking pictures, or showing up for a meeting; and Nós do Meu Rio, an initiative to connect neighbors so they can get to know one another and plan new campaigns. Their most popular organizing tool is called the Pressure Cooker. When someone creates a campaign around a particular issue, the tool helps them find the specific official responsible for that issue and contact them directly by email or phone. Whenever that politician’s phone line is free, Meu Rio robo-calls campaigners who provided their phone numbers and connects the call, creating a line of people calling all day long about the same issue.
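
As a rough illustration of the mechanism just described, here is a minimal sketch of that call-queue logic in Python. Everything here is hypothetical: Meu Rio's actual implementation is not public, and line_is_free and connect_call are stand-ins for whatever telephony API such a service would use.

    from collections import deque
    import time

    def pressure_cooker(official_number, campaigner_numbers,
                        line_is_free, connect_call, poll_seconds=30):
        """Connect campaigners to the official one at a time, whenever
        the official's phone line is detected to be free."""
        queue = deque(campaigner_numbers)
        while queue:
            if line_is_free(official_number):
                campaigner = queue.popleft()
                # Robo-call the campaigner, then bridge the call to the
                # official, keeping the line busy with constituents all day.
                connect_call(campaigner, official_number)
            else:
                time.sleep(poll_seconds)  # wait before re-checking the line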

In the last two and a half years, 150,000 Rio residents — overwhelmingly Brazilian youth under 34 — have used Meu Rio to tighten Rio’s woefully weak environmental protection laws, demand transparency from bus companies hiking fares, and even amend the state constitution to keep officials convicted of corruption from occupying high office in Rio, a valid concern in a country where former President Fernando Collor was impeached on corruption charges in 1992 and went on to become a senator in 2007.

Meu Rio has received requests from across Brazil to implement its tools in other cities, but until now it has only had the resources to support its work in Rio. The organization received a $475,000 grant last year from the Omidyar Network, an investment firm funded by eBay founder Pierre Omidyar to foster advances in government transparency and social media, and a $500,000 grant from Google.org, the charitable investment arm of Google, to expand access to Meu Rio’s organizing tools across the country. (Incidentally, the most famous recipient of Omidyar’s wealth, journalist Glenn Greenwald, also lives in Rio.)

One of the more creative uses of Meu Rio’s local mobilization tools is a crowdsourced neighborhood watch program to stop people from pilfering trolley cars in the bohemian neighborhood of Santa Teresa. Since 1877, residents of Santa Teresa and the adjacent Morro do Prazeres favela had relied on the bondinho, a yellow trolley, to climb a steep hill of cobblestone switchbacks to their neighborhoods. In 2011, the trolley went off its rails and killed six people, and Rio discontinued the service indefinitely. Despite a series of promises and timelines, the city has yet to bring the historic and beloved bondinho back into operation.

Last year, bondinho aficionados started noticing that pieces of the historic trolleys were showing up for sale all over the city, and the only place they could have come from was the trolley warehouse in Santa Teresa. Residents started a campaign on Meu Rio demanding respect for their historic patrimony and put a plan in place to monitor what was left. Nine hundred and eighteen people put pressure on Rio’s transportation secretary to publish an inventory of the trolley warehouse, and 2,748 residents signed up to monitor a live webcam feed Meu Rio set up in a neighbor’s window overlooking the warehouse entrance. Leonardo Eloi, who handled the technical production for the campaign, says that not a single piece has gone missing since.

But trolley parts are far from the most pressing need in a city where almost half the residents don’t even have access to basic sanitation. “If I had to choose the most important issue in Rio,” Orofino says, “it’s that we are such a crazily unequal city, in virtually every regard.”

Photograph by Julie Ruvolo

Rio has long had a reputation as Brazil’s Cidade Partida, or “Divided City,” and despite economic growth that has lifted millions of Brazilians out of poverty, socioeconomic inequality in Rio de Janeiro has only increased in the last decade. About one-third of the city’s 6 million inhabitants live in some 1,000 favelas scattered across the city, a network of communities so diverse in topography and culture that their only commonality is a history of neglect by the state to provide even the most basic services.

While Brazilians are reluctant to point out the obvious, the warped socioeconomic spectrum maps closely to race. Brazil imported almost 5 million slaves — more than 10 times the number forced to come to North America — and in 1888 became the last country in the Western Hemisphere to abolish slavery. Many early favela residents in Rio were freed slaves from the northeast who had nowhere else to live and settled wherever they could. Today, more than half of Brazilians identify as black or mixed race; that percentage is higher in the favelas.

In the decade since Brazilians came online with Orkut, they have started to close disparity gaps in internet access that remain stubbornly persistent in the offline world, and the internet is emerging as a powerful means of expression for favela residents. From 2003 to 2008, the Rio favela advocacy nonprofit Catalytic Communities (CatComm) operated a community center in Rio’s port area, where more than 1,200 leaders from 200 communities accessed computers and horizontal training, teaching one another how to use the web.

The stakes are only rising. Soon after Brazil won the Olympic bid in 2009, city officials arrived at Favela do Metrô, a small community of 700 families behind the Maracanã soccer stadium, founded in 1981 by workers who built the adjacent metro stop. Officials began spray-painting numbers on houses, making notes, and taking photos. “We realized we were going to have to leave,” Francecleide Costa, president of the Favela do Metrô Residents’ Association, said at the time. “But we didn’t know what to do.” On Nov. 4, police returned, forced 107 families out of their homes at gunpoint, threw their belongings into the street, and started smashing windows and roofs. The justification was to take over the land to expand the Maracanã parking lot for the World Cup.

CatComm sent a volunteer to photograph the eviction, and within a few months had gathered video documentation of three other favelas in Rio’s West Zone that had been forcibly evicted to make way for the Olympic City.

“There were almost no evictions in Rio favelas for the 20 years prior, because we’ve had very strong constitutional rights since 1988,” says Theresa Williamson, who founded CatComm in 2000. According to federal law, anyone who’s been squatting on private land for more than five years without any cases brought against them is legally entitled to the land, and even has legal recourse if it’s public land. “These evictions in Rio are only made possible because the mega-events [like the World Cup and Olympics] have created a state of exception,” Williamson says, “where the city can get away with it.”

According to the Popular Committee of the World Cup and Olympics, a volunteer coalition of civil society organizations and community leaders documenting the effects of the mega-events, the eviction tally has reached 4,772 families across 37 favelas in Rio alone, and as many as 200,000 families across Brazil. Officials I spoke with at Rio’s City Hall in late 2013 denied that any illegal evictions had taken place, and Globo, Brazil’s largest media company, headquartered in Rio, has largely ignored them. A search for “evictions Rio de Janeiro” on Globo.com returned only 14 news stories, four of which related to a group of residents being evicted from a botanical garden over environmental concerns in Rio’s wealthier South Zone, near Globo’s headquarters.

People coming together online are capturing and telling a different story: Media activist collective MidiaNINJA, an acronym for “Independent Narratives, Journalism, and Action,” broadcast 18 different live streams of the most recent Favela do Metrô eviction alone. In 2012, Rio’s Popular Committee of World Cup and the Olympics used its Facebook page to crowdsource videos of forced evictions, asking favela leaders, digital activism networks, and local and international journalists for contributions. After gathering and verifying 114 videos of forced evictions in 21 favelas, the committee concluded that the most common problem was a violation of favela residents’ rights to information. “If each video identified by the curation represents an isolated dot in the sky,” the report concludes, “the sum of those points forms a constellation that portrays the systematic pattern of human rights violations in areas affected by forced evictions in Rio de Janeiro.”

In the wake of the 2010 evictions, CatComm began offering educational programming to favela community leaders that focused on social media — Facebook, Twitter, YouTube, WordPress, and Blogger. Once the network of favela leaders was connected on social media, Williamson started seeing reports of evictions popping up in her Twitter feed and Facebook posts, and launched RioOnWatch.org, a favela community watchdog site, to bring their reporting together in one place, address the general absence and frequent misrepresentation of favelas in the Brazilian media, and encourage and amplify reporting from favela residents themselves.

“All of a sudden you had kids in the North Zone posting on their private walls about evictions two hours away in the West Zone, and threads with hundreds of comments in an hour, all analyzing what was going on,” says Williamson. She estimates CatComm has worked with 300–400 of Rio’s 1,000 or so favelas. “The connections were incredible.” According to Rio’s Observatory of Favelas, an NGO monitoring social inequality, 90% of favela youth in Rio de Janeiro have access to the internet, and Facebook is their primary destination. “They’re connected online, they’re very critical, and incredibly astute. They see what’s happening and they’re living the consequences,” Williamson says.

RioOnWatch has since become the biggest source of news on Rio’s evictions available online. Its documentation of forced evictions in a favela called Largo do Tanque in 2012 prompted international coverage from CNN to Newsweek Japan. Williamson says that because international media were present once the evictions began, the remaining eight families of Largo do Tanque received five times the compensation of the 42 who were pushed out when nobody was watching.

“Before we started doing this work, there was no favela perspective in articles about them internationally or nationally,” Williamson says. “We started seeing that we could change this narrative as international media came to Rio for the mega-events — we could create this spotlight on Rio through the 2016 Olympics. In our opinion that’s one of the only positive legacies of the games.”

Brazilians hold a demonstration with signs that read, “Speak up now or remain silent forever,” and “Don’t suffer in silence,” in São Paulo, June 22, 2013. Reuters

This trend — a largely invisible population starting to be seen and heard — is occurring nationwide. Last June, residents of São Paulo went to the streets to protest a hike in bus fare and sparked commiseration among disenfranchised citizens across the country. Brazilians took to Twitter, Facebook, and YouTube with calls to Vem pra rua (“come to the streets”), Acorda, Brasil (“wake up, Brazil”), and O gigante acordou (“the giant has awakened”). The initially violent police response, which has become a fixture of 12 months of protests, inspired a video game in which protesters run away from police; it gained 30,000 players in three days during last June’s protests. Within a couple of weeks, protest messages on social media had reached 79 million Brazilians online — or 4 out of 5 internet users — and brought millions of people to the protests, from public school teachers to trash workers rallying for better benefits to activists protesting billions in government spending on mega-event infrastructure projects in the face of sweeping social inequality.

“We don’t make a difference by sitting behind our computers,” said Marcelo Tas, a Brazilian journalist with more than 5 million Twitter followers, during a newscast with BandnewsTV the first week of the protests last June. “We’re meeting up in the streets. And it’s not just happening in Rio and São Paulo. Small towns in the interior are protesting. We have a whole country protesting.”

“It’s a really important moment we’re living right now,” says Bia Granja, of the YouPix festival, which gathers hundreds of thousands together at festivals across the country, and millions together online, to organize around digital issues they care about. Last year’s festival in Rio hosted debates about Brazil’s internet legislation, the persecution of the Rio funk movement, and Globo’s factually inaccurate reporting of last June’s protests, interspersed with food contests, MC battles, a well-attended workshop for YouTube content producers (and Havaianas giveaways).

“We’re seeing big changes,” Granja says. “Social networks are tools of empowerment we didn’t have before.”

Ronaldo Lemos. Photograph by Julie Ruvolo

As internet access expands across geographic and class divides in Brazil, the government is taking steps to regulate it, and citizens are taking steps to protect their freedom to use it. In 2007, the Brazilian Congress was debating a cybercrime bill that would have sent people to jail for four years for jailbreaking their iPhones or downloading illegal music. Once the bill passed the lower house and looked like it had an actual chance of becoming law, a law professor at Fundação Getúlio Vargas named Ronaldo Lemos wrote an op-ed in the Folha de São Paulo newspaper arguing that what Brazilians needed instead was a bill of rights protecting their civil liberties.

In 2009, Brazil’s minister of justice approached Lemos and a group of lawyers and told them if they wanted to create a law protecting internet rights, they should do it online. Marco Civil, a landmark bill ensuring the privacy and freedom of speech of internet users, and one of the first examples of crowdsourced legislation in the world, was born.

“A lot of people do this participatory work and expect people to show up, but we decided to ask them,” Lemos explains. They used the Ministry of Justice’s official letterhead to send letters to the public soliciting participation and, in a remarkably progressive move for 2009, started running Twitter searches for “Marco Civil” and incorporating the perspectives they found into the drafting process.

“Side by side you had the users, the telco companies, broadcasters, and trade groups, all of them debating side by side with the general public — something you generally do not see,” says Lemos. “But it took us a lot of time. It was very satisfying to hear people tell us we really heard what they had to say.”

By the time the bill was ready to go to Congress, more than 2,000 Brazilians, including librarians, LAN house owners, high school teachers, and bloggers, had collaboratively drafted Brazil’s Internet Bill of Rights, legally guaranteeing internet users the right to personal privacy and freedom of expression, and ensuring net neutrality. Then it got stuck.

“The bill had important language protecting net neutrality, and the telco companies didn’t like that,” Lemos says. “When Congress tried to do bad things, people would organize Twitter protests, online petitions and so on.” Two years went by without a vote.

Then, in September 2013, GloboTV, Brazil’s biggest TV channel and the second-largest TV network in the world behind ABC, reported that the National Security Agency was spying on President Dilma Rousseff, according to documents leaked by Edward Snowden and Glenn Greenwald. Rousseff was furious, canceled her planned state dinner with President Obama, and started exploring new digital security measures, including a plan to require foreign companies like Facebook and YouTube to keep their Brazilian user data on local servers inside Brazil, and a national encrypted mail service to be provided by Correios, the Brazilian equivalent of the U.S. Postal Service. The government decided to tuck the new provisions into Marco Civil (although neither made it into the final legislation) and suspended voting on all other bills until Congress voted on the internet legislation.

The 2013 YouPix Festival in Rio de Janeiro. Photograph by Julie Ruvolo

YouPix was one of the first digital organizations to rally its community in favor of net neutrality, explaining why the provision was important to guarantee freedom of speech online, and publishing real-time updates on the bill’s status until it was passed this April.

“A study at the Fundação Getulio Vargas law school showed that for every 10% increase in a person’s digital inclusion, they are 2.2% happier,” says Granja. “In other words, just having access to a computer makes people more happy. Imagine what the social and cultural inclusion of the internet can do for a person’s happiness. That’s why the web is so explosive here. It’s our voice, our identity, our channel of expression. The internet represents us.”

Other organizations joined in: Meu Rio started a “Save the Internet” campaign that mobilized 11,000 Brazilians to email their congressmen in favor of net neutrality. MidiaNINJA, an activist media collective that rose to prominence during the protests that swept across Brazil last year, started tweetcasting the congressional sessions. Dozens of civil society organizations joined together to install a big screen in Congress and display messages from Brazilians in support of the bill. And an Avaaz.org petition by former Minister of Culture Gilberto Gil gathered 300,000 signatures in 48 hours.

After a six-month stalemate, Congress passed the bill in April, and more than 70% of the approved text had been drafted through the collaborative process. “And the additional 30% is pretty good because of all the people who followed the bill’s progress,” Lemos says.

Article 4 of Marco Civil promotes “the right of access to the internet to all,” and Article 27 asserts that “public initiatives to promote digital culture and promote the internet as a social tool should: I) promote digital inclusion; and II) seek to reduce gaps, especially between different regions of the country, in access to and use of information and communication technology.”

“It’s a pretty strong bill, especially when you compare it to the crazy laws a lot of other countries are passing,” says Lemos. The same month Marco Civil passed in Brazil, Russia passed a law giving the government broad control over how information is disseminated on the internet. And this February, Turkey passed a law giving the government power to censor websites without a court order, despite protests from more than 100,000 Turkish citizens on Twitter. Brazil has also successfully defeated censorship legislation similar to SOPA/PIPA in the U.S.

When I asked Lemos what he thought made Brazil’s web culture distinctly Brazilian, he pointed to a comment John Perry Barlow made in 2003 when they were working together to bring Creative Commons to Brazil. “He said, ‘It’s interesting Brazilians like the internet so much, and I know why — it’s because you were a networked society even before the internet,’” says Lemos.

“And I believe that right now Brazil has become a laboratory for experimenting with technology and participation. Marco Civil was a leading case for that. It showed that technology and democracy actually go well together, and one can learn from the other.”


27 Jun 02:15

Victims of probability

by Athayde Tonhasca Jr.

In December 1996, in Cheshire, England, Sally Clark called an ambulance for Christopher, her 11-week-old son, who had collapsed after being put to bed. The child was taken to the hospital but died shortly afterward. According to the pathologist, the boy had been the victim of a respiratory infection associated with Sudden Infant Death Syndrome (SIDS), or "cot death."

SIDS is rare and its causes are unknown: apparently healthy babies under one year of age die suddenly. Christopher's death caused consternation, but it was considered a tragic accident.

In 1998, however, Sally's second son, Harry, died at eight weeks old in similar circumstances. The pathologist, the same one who had examined Christopher, noticed signs that the baby might have been violently shaken. Suspicious, he reviewed his notes on Christopher's autopsy and concluded that that death could have been caused by suffocation.

These uncertainties were not unreasonable: the symptoms involved are particularly difficult to diagnose, especially in newborns. In both cases, the children's bodies showed signs of trauma, but these were consistent with the after-effects of the first-aid measures used in attempts to resuscitate them.

The coincidence of two siblings succumbing to the same rare condition convinced the pathologist that the deaths were not natural

The coincidence of two siblings succumbing to the same rare condition, however, convinced the pathologist that the deaths were not natural. The police were alerted, and one month after the loss of the second child, Sally and her husband were arrested and charged with the deaths of their sons.

After determining that the lawyer had been alone with the children during both incidents, the police dropped the charges against her husband. Sally Clark was indicted for double infanticide: Christopher had allegedly been smothered; Harry, violently shaken. She was tried before a jury, and the case received wide press coverage.

At trial, the prosecution brought up the defendant's episodes of depression and alcohol use, but produced no material evidence of abuse, since the pathologist's conclusions were far from definitive. Days of complex, and often contradictory, expert opinions followed, from pathologists, psychiatrists, neurologists, and pediatricians, but nothing sufficient to overturn the defense's thesis: the boys had been victims of SIDS.

The prosecution then called Roy Meadow, a renowned pediatrician, to testify as an expert on SIDS. Meadow's opinion sealed Sally Clark's fate.

Meadow's law

According to government statistics, the incidence of SIDS was 1 case per 1,300 births across the population of England, but for affluent families in which the mother is over 26 and does not smoke, as was the case for Sally Clark, the incidence drops to one case per 8,540 births. Meadow used this figure to calculate the probability of two children dying of SIDS by simply multiplying 1/8,540 × 1/8,540, which comes to about 1 in 73 million.

Comparing this value with the average number of births in England (650,000 children per year), the expert estimated the expected number of cases of two SIDS deaths in the same family.
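
To make the arithmetic concrete, here is Meadow's calculation reproduced in a few lines of Python. The incidence figure and the annual birth count are the ones cited above; the independence assumption baked into the multiplication is Meadow's own and, as we will see, it is the flaw:

    p_sids = 1 / 8540              # incidence for a family like the Clarks
    p_double = p_sids * p_sids     # treats the two deaths as independent
    print(round(1 / p_double))     # 72,931,600: "about 1 in 73 million"

    births_per_year = 650_000      # average annual births in England
    print(births_per_year * p_double)   # ~0.009 expected double-SIDS
                                        # families per year, i.e. roughly
                                        # one per century, on his assumption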

The case of lawyer Sally Clark is a striking example of the consequences of ignorance of, and misunderstandings about, so-called conditional probabilities. (photo: Gabriel Doyle/Freeimages)

In what ironically became known as "Meadow's law," the pediatrician concluded: "one SIDS death is a tragedy, two deaths are suspicious, and three deaths are, unless proved otherwise, murder." In view of this opinion, the prosecution argued that the probability of two siblings dying of SIDS is so minute that it can be dismissed, and that the only explanation would be premeditation: the defendant had killed her own children.

Despite the pathologists' uncertainties, Meadow's testimony was enough: in November 1999, Sally Clark was sentenced to life in prison.

The verdict shocked doctors and social workers, since Sally's profile and history did not fit the patterns seen in child-abuse cases, which almost always leave a trail (hospital records, police complaints, neighbors' testimony, etc.). In this case, there was nothing.

A false assumption

Politicians, journalists, and civil-rights activists launched campaigns to clear the lawyer's name, but the controversy did not die down: how to reconcile the deaths of two visibly healthy children with the improbability of the cause of those deaths?

The first crack in Meadow's reasoning came to light when statisticians pointed out the pediatrician's likely error in estimating the risk of double SIDS cases: improperly assuming the independence of the two events.

The judges were not convinced: although they acknowledged the possible error, they considered the principle of the argument valid

Multiplying the probabilities (1/8,540 × 1/8,540) would only be valid if each occurrence of SIDS were independent, that is, if the fact that one child had been a victim had no effect whatsoever on the chance of a sibling also being a victim. In other words, only if there were no familial predisposition to SIDS.
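
A short sketch of how much that assumption matters. The 10x familial risk factor below is purely hypothetical, chosen for illustration; it is not a figure from the article:

    p_first = 1 / 8540                    # first SIDS death (as above)
    p_second_given_first = 10 * p_first   # assumed familial predisposition
    p_both = p_first * p_second_given_first
    print(round(1 / p_both))              # ~1 in 7.3 million: ten times more
                                          # likely than Meadow's figure

Any dependence between the two deaths shrinks Meadow's 1-in-73-million figure accordingly.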

But the specialists intuitively suspected that this assumption was false: after all, whatever genetic or environmental factors lead to SIDS, they should act most strongly within the same family. In October 2000, Sally Clark's lawyers filed an appeal to overturn the sentence, citing the problem with Meadow's calculations. But the judges were not convinced: although they acknowledged the possible error, they considered the principle of the argument valid.

Sally Clark remained in prison.

It fell to Ray Hill, a professor at the University of Salford, England, to pore over hundreds of SIDS cases and analyze the data that seemed to conspire against the lawyer.

 

You have read only the beginning of the article published in CH 315; the full text is available in a condensed edition of the magazine.

 

Athayde Tonhasca Jr.
Scottish Natural Heritage, Perth (United Kingdom)

25 Jun 01:20

Deepak Chopra embarrasses himself by offering a million-dollar prize

I realize now that Chopra’s affliction with Maru’s Syndrome—the condition described by Dr. Maru as “When I see a box, I cannot help but enter”—is a chronic condition. Although Chopra feints at friendliness, enticing me to his fancy conference with a big honorarium, and trying to pal around with Michael Shermer (who, I suspect, doesn’t like Chopra), in reality he’s as thin-skinned as ever. What has really wounded Deepakity is the intimation by people like Shermer, Dawkins, Sam Harris, and me, that he is not a scientist.  After all, the old Woomeister makes his millions by putting on a veneer of science, bandying about nonsense phrases like “quantum consciousness” that bamboozle those who are impressed by science, but don’t understand it.  Were they to understand that there is no hard science behind Chopra’s claims, perhaps they’d be less likely to open their wallets.

At any rate, the video below is one more sign of Chopra’s butthurt.  In response to James Randi’s famous “million-dollar challenge,” in which the magician has a standing offer of a million dollars to anyone who can produce a convincing demonstration of the paranormal, Chopra has issued his own challenge.

You probably know Randi’s challenge: if anyone can produce a convincing demonstration of the paranormal under conditions specified and controlled by Randi and his colleagues, that person wins a million bucks. But nobody’s ever won it.  Every year, some sap tries for the prize with a demonstration at Randi’s “The Amazing Meeting” (TAM), and every year the sap fails. Last year, when I was there, someone claimed to have the power of remote viewing. But he couldn’t reproduce it under strictly controlled conditions.  The provisional explanation for all the failures is that there are no paranormal phenomena.

Deepak offers his own challenge in the first 55 seconds of the 5.5-minute video:

“Please explain the so-called ‘normal’: how does electricity going to the brain become the experience of a three-dimensional world of space and time? If you can explain that, then you get a million dollars from me. Explain and solve the hard problem of consciousness in a peer-reviewed journal; offer a theory that is falsifiable—and you get the prize.”

(Thanks to Sharon Hill at Doubtful News for calling this to my attention.)

Chopra spends the remaining 4.5 minutes insulting skeptics—calling us “naive realists,” “superstitious,” and “bamboozled by matter”—and saying that he won’t accept neural correlates of consciousness as its explanation. (In the end, though, that’s how the problem will be cracked, if it is cracked.) He’s so eager to parade his “clever” challenge that he simply repeats it over and over again, interspersed with nasty cracks about how self-congratulatory we skeptics are in our rejection of the paranormal.

Watch this embarrassing demonstration of Maru’s Syndrome:

Now the “Hard Problem” of consciousness is the problem of qualia—subjective sensation. The Hard Problem, then, is to show how the electrical impulses of our brain, generated by the environment or our inner workings, give rise to sensations of pain, of beauty, of pleasure, and so on. While, contra Chopra, we do know some things about consciousness—that it can be removed with anesthesia, that it can be altered in predictable directions with chemicals, and something about where it sits in the brain—the “Hard Problem” is hard because while one can experience qualia, it’s hard to demonstrate them in others, and so to know when they’ve arisen. After all, I know I’m conscious, but you might be a zombie.

In a 2007 piece in Time Magazine, one of the best things I’ve read about consciousness (read it!), Steven Pinker first distinguishes the Hard from the “Easy” Problem of consciousness, and then talks about why the Hard Problem is hard:

What exactly is the Easy Problem? It’s the one that Freud made famous, the difference between conscious and unconscious thoughts. Some kinds of information in the brain–such as the surfaces in front of you, your daydreams, your plans for the day, your pleasures and peeves–are conscious. You can ponder them, discuss them and let them guide your behavior. Other kinds, like the control of your heart rate, the rules that order the words as you speak and the sequence of muscle contractions that allow you to hold a pencil, are unconscious. They must be in the brain somewhere because you couldn’t walk and talk and see without them, but they are sealed off from your planning and reasoning circuits, and you can’t say a thing about them.

The Easy Problem, then, is to distinguish conscious from unconscious mental computation, identify its correlates in the brain and explain why it evolved.

The Hard Problem, on the other hand, is why it feels like something to have a conscious process going on in one’s head–why there is first-person, subjective experience. Not only does a green thing look different from a red thing, remind us of other green things and inspire us to say, “That’s green” (the Easy Problem), but it also actually looks green: it produces an experience of sheer greenness that isn’t reducible to anything else. As Louis Armstrong said in response to a request to define jazz, “When you got to ask what it is, you never get to know.”

The Hard Problem is explaining how subjective experience arises from neural computation. The problem is hard because no one knows what a solution might look like or even whether it is a genuine scientific problem in the first place. And not surprisingly, everyone agrees that the hard problem (if it is a problem) remains a mystery.

. . . Many philosophers, like Daniel Dennett, deny that the Hard Problem exists at all. Speculating about zombies and inverted colors is a waste of time, they say, because nothing could ever settle the issue one way or another. Anything you could do to understand consciousness–like finding out what wavelengths make people see green or how similar they say it is to blue, or what emotions they associate with it–boils down to information processing in the brain and thus gets sucked back into the Easy Problem, leaving nothing else to explain. Most people react to this argument with incredulity because it seems to deny the ultimate undeniable fact: our own experience.

The most popular attitude to the Hard Problem among neuroscientists is that it remains unsolved for now but will eventually succumb to research that chips away at the Easy Problem. Others are skeptical about this cheery optimism because none of the inroads into the Easy Problem brings a solution to the Hard Problem even a bit closer. Identifying awareness with brain physiology, they say, is a kind of “meat chauvinism” that would dogmatically deny consciousness to Lieut. Commander Data just because he doesn’t have the soft tissue of a human brain. Identifying it with information processing would go too far in the other direction and grant a simple consciousness to thermostats and calculators–a leap that most people find hard to stomach. Some mavericks, like the mathematician Roger Penrose, suggest the answer might someday be found in quantum mechanics. But to my ear, this amounts to the feeling that quantum mechanics sure is weird, and consciousness sure is weird, so maybe quantum mechanics can explain consciousness.

And then there is the theory put forward by philosopher Colin McGinn that our vertigo when pondering the Hard Problem is itself a quirk of our brains. The brain is a product of evolution, and just as animal brains have their limitations, we have ours. Our brains can’t hold a hundred numbers in memory, can’t visualize seven-dimensional space and perhaps can’t intuitively grasp why neural information processing observed from the outside should give rise to subjective experience on the inside. This is where I place my bet, though I admit that the theory could be demolished when an unborn genius–a Darwin or Einstein of consciousness–comes up with a flabbergasting new idea that suddenly makes it all clear to us.

The Hard Problem, then, is to demonstrate when you’ve produced subjective sensations, and to distinguish that from simple input-output dynamics that, for instance, can occur in computers or zombies. But I don’t think a solution is beyond our ken. Perhaps there are brain interventions in an individual that can eliminate parts of subjective sensation, and which can then feed into a general and perhaps testable theory of how we get sensations.

But perhaps the problem will remain unsolved, or, as Dennett thinks, isn’t a problem at all (I disagree).

That, however, is completely irrelevant to Chopra’s “challenge,” for two reasons. First, his challenge implicitly assumes that our failure to understand consciousness means that it has a paranormal explanation. This is really a “woo of the gaps” approach, whereby any scientific problem that has defied explanation must have a paranormal or supernatural solution. It is of course analogous to “God of the gaps” arguments, in which anything we don’t understand is imputed to God. Theologians often suggest supernatural solutions to difficult problems like consciousness, morality, and the laws of physics. They once suggested them for things like lightning and evolution, too—until science filled the gaps with naturalistic explanations. Chopra is simply a Theologian of Woo, and his mistake is the one identified by Robert G. Ingersoll in one of my favorite quotes:

“No one infers a god from the simple, from the known, from what is understood, but from the complex, from the unknown, and incomprehensible. Our ignorance is God; what we know is science.”

In this case, Chopra infers not a god but woo: immaterial and non-naturalistic forces beyond our ken—the stuff he makes his living touting. And why not offer a million dollars for other hard questions, like why the constants of physics are what they are instead of something else?

Second, Chopra’s challenge fails to parallel Randi’s in an important way. Randi is simply asking for someone to demonstrate paranormal phenomena like ESP, telekinesis, or remote viewing. He’s not asking their advocates to explain them. It’s a lot easier to demonstrate ESP than to explain it, although neither ESP nor other paranormal phenomena have been demonstrated. Chopra, on the other hand, asks for an explanation of consciousness, though a demonstration of it (all of our individual experiences) is dead easy. The latter is already at hand; the former may take decades to work out.

But this is all persiflage on Chopra’s part. If he wants to explain the “normal,” there are lots of questions he can ask. Why does mathematics work? Why are the speed of light in a vacuum and the force of gravity constants rather than variables?  Does that constancy prove something about the paranormal?

Chopra’s little demonstration is not only misguided but embarrassing. He gives away the game when he bashes skeptics over and over again, chastising them for their “arrogance.” But who is more arrogant than Chopra, a man who constantly makes statements that either have no scientific basis or (as in his claim that we can permanently change our genes by changing our experience) are dead wrong? Real scientists like Rudolph Tanzi should be embarrassed to be associated with Chopra.

My message to Chopra, who will be reading this for sure, is this: Deepak, you’re 66 years old, but in this video you act like a butthurt teenager. Your challenge is ridiculous, and not worthy of consideration for even a second. After all, neuroscientists are already working on consciousness, and they don’t need your jibes to prompt them. If the paranormal does exist, as you implicitly and explicitly claim over and over again, why hasn’t Randi demonstrated it? Why haven’t you won Randi’s prize?


23 Jun 14:05

Alan Turing's Breakthrough Biological Model Confirmed: Who Knew He Was a Biologist?

Pedro

Happy birthday, Alan Turing.

Through his mathematical model, Turing predicted six different patterns of morphological development. The science is certainly interesting, but it is quite complex, so feel free to look into it on your own if you feel so inclined; a sketch of the model's general form follows below. What's important here is that Turing was the first to accurately describe the process and to predict the mechanism of development. Of course, Alan Turing wasn't a biologist. He was a mathematician: a wonderfully gifted one, whose theorems founded and advanced computer science to a degree that would otherwise have been impossible.
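
For the curious, the model in question is a reaction-diffusion system. Here is a sketch of its general two-morphogen form, in our notation rather than the article's (the reaction terms f and g vary by application):

    \begin{aligned}
    \frac{\partial u}{\partial t} &= f(u, v) + D_u \nabla^2 u,\\
    \frac{\partial v}{\partial t} &= g(u, v) + D_v \nabla^2 v.
    \end{aligned}

Here u behaves as an activator and v as an inhibitor, with diffusion rates D_u and D_v. The spots, stripes, and other patterns Turing catalogued can emerge when the inhibitor diffuses much faster than the activator (D_v >> D_u): local activation paired with long-range inhibition destabilizes the uniform state.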
23 Jun 03:32

Google Is Putting $50 Million Toward Getting Girls to Code

Mindy Kaling and Chelsea Clinton want high school girls to embrace computer science.

The two women were on hand at a Google event in New York City on Thursday called Made With Code.


Made With Code is a new Google initiative to motivate future female programmers. Only 18% of computer science degrees are earned by women, and Google is spending $50 million over the next three years to change those numbers.

More than 150 high school girls turned out for the event, including local chapters of the Girl Scouts of the USA, Black Girls Code and Girls Who Code. Kaling, a writer and actress, emceed the premiere, which brought in Google X Vice President Megan Smith, Clinton Foundation Vice Chair Chelsea Clinton, iLuminate creator Miral Kotb, Pixar Director of Photography Danielle Feinberg and UNICEF Innovation cofounder Erica Kochi.

Feinberg, who has worked on films like Brave, Finding Nemo and Monsters, Inc., spoke with the group about her early experiences with coding and how it has shaped her career. She also emphasized the importance of exposing girls to how fun coding can be.

Chelsea Clinton addresses future female coders at the Made With Code event in New York City. Image: Niki Walker/Mashable

"This is something that's so important to me that I'm happy to do anything that they want me to do and be as involved as possible," she told Mashable. "I think it's much easier to connect with when you can see it and you can hear it and get all the senses involved."

Smith spoke about why she spearheaded the campaign to get girls into coding. She took a coding class in high school, but described it as boring. Her goal for Made With Code is to show girls that figuring out coding can be challenging but rewarding: "We invited you guys because we wanted to share the incredible world that we live in every day."

After each speaker shared her personal experiences with coding, Swedish house music duo Icona Pop gave a private performance. iLuminate's robotic dancers, wearing light-up suits, also performed, giving viewers a live example of how coding and dance can be combined.

Girls then had the opportunity to peruse multiple demonstrations of coding in action, ranging from the practical to the simply fun. Demos included programming — and trying on — virtual dresses, designing 3D-printable bracelets and creating a dancing avatar.

One attendee was Brittany Wenger, 19, who won the 2012 Google Science Fair for an app that accurately diagnoses breast cancer from a minimally invasive biopsy.

"I was the only girl in my high school computer science class," Wenger told Mashable. "My teacher was a female, so it was great to be able to look up to her ... I just wish everybody had that same experience."

Made With Code isn't a one-time event. The website links girls seeking encouragement to coding meet-ups in their area, and Google Helpouts offers tutorials explaining coding concepts.

Miral Kotb, creator and CEO of iLuminate, shows off one of the dancers. All of the dancers' costumes have programmed lighting. Two girls in the front row, bottom right, are remote-controlling the costume on a tablet.

2012 Google Science Fair winner Brittany Wenger, second from right, mingles with fellow attendees.

An attendee interacts with Firewall, an interactive, programmed media installation by Michael P. Allison and Aaron Sherwood. The further one presses into the spandex surface, the more intense the light designs and music become.

A girl designs a bracelet -- which will be 3D printed by Shapeways, a NYC-based 3D printing marketplace and community -- as part of the accessory demo booth.

A 3D-printed bracelet reads, "Designed W/ Code."

As part of the event, girls were able to design custom 3D-printed bracelets, courtesy of Shapeways.

There were various code-output platforms on display, from fashion and design to humanitarian relief efforts.

Actress and writer Mindy Kaling hosted the event.



23 Jun 03:27

Brian the Mentally Ill Bonobo, and How He Healed

A young bonobo at a sanctuary in the Congo (Reuters)

Things were not looking good for Brian. He'd been kept from the affection of his mother—and all other women—and raised alone by his father, who sexually traumatized him. Normal social interactions were impossible for him. He couldn't eat in front of others and required a series of repeated, OCD-like rituals before he'd take food. He was scared of any new thing, and when he got stressed, he'd just curl up into the fetal position and scream.

He also hurt himself over and over, tearing off his own fingernails and intentionally cutting his genitals. He was socially outcast, left to clap his hands, spin in circles, and stare blankly at walls by himself.

Still, some other bonobos were kind to him. Kitty, a 49-year-old blind female, and Lody, a 27-year-old male, spent time with Brian. When he panicked, Lody sometimes led him by the hand to their playpen at the Milwaukee County Zoo. 

After six weeks, the zookeepers knew they had to do something. They called Harry Prosen, then chair of the psychiatry department at the Medical College of Wisconsin, who took Brian on as his first non-human patient.

Brian's story is one of many that Laurel Braitman tells in her new book, Animal Madness: How anxious dogs, compulsive parrots, and elephants in recovery help us understand ourselves, a survey of mental illness in animals and its relationship to our own problems. 

The individual stories in the book are compelling, and they lead towards an interesting conclusion about the way we project our own attributes onto other species. How much should we anthropomorphize animals like our pets or apes like Brian? As much as it helps us help them. If treating Brian like a human psychiatric patient helped Prosen treat the suffering animal, then it makes sense to project that level of humanness onto the creature. 

Prosen began with a full psychiatric history of Brian. He'd been born at the Yerkes National Primate Research Center at Emory University in Atlanta. Bonobos are famously, polymorphously, perversely sexual—but they don't generally engage in sexual violence. And yet Brian's father, who had suffered his own traumas as a research animal, sodomized Brian for years. During his seven years at Yerkes, Brian started to stick his own hand into his rectum, causing bleeding and—over time—thickening of the tissue there. It was a horrifying situation. 

In 1997, when Brian arrived, the bonobo crew at the Milwaukee County Zoo, the largest captive troop in the United States, was unusually stable and nice, seemingly due to the calming presence of two apes: Maringa and Brian's friend Lody. The troop had already helped other animals recover from mental disturbances, which is one reason Brian had been sent there. But he seemed beyond natural recovery.

Lody in 2005 (Richard Brodzeller/Zoological Society of Milwaukee)

Prosen first prescribed Paxil, to help with Brian's anxiety, occasionally supplemented by Valium, on the bad days. "The beauty of the drug therapy," Prosen told Braitman, "was that the other bonobos could start to see him for who he really was, which was really a cool little dude."

Meanwhile, Prosen and the zookeeping staff began Brian's therapy, focusing on making changes to their own behavior and his environment. They spoke quietly and moved slowly and consistently. No sudden movements or loud noises. They made each of his days exactly the same, and only introduced new things slowly and deliberately. They had Brian hang out with apes who were younger than him, so that he could learn what he'd never been taught as a kid: play. 

"Interacting with adult females, to whom he’d had no exposure as a youngster, caused him all sorts of anxiety," Braitman writes. "This was confusing to the rest of the troop because Brian looked like an eight- or nine-year-old young male, but developmentally he acted like a five- or six-year-old."

By 2001, after four hard years of therapy and improvement, Brian had begun to integrate into the Milwaukee troop. The zookeepers saw it as significant that a new mother let him touch her 10-day-old baby, and over the next few years, his behavior became more and more socially aware. They peg his 16th birthday, in 2006, as the time when he "started acting his age." He loves carrying around the babies in the troop, and even managed to have his own children. And, as his keeper Barbara Bell recalled, he went off Paxil at some point, after he took to sharing it (!) with the other apes.

As the years went by, Lody grew old and frail. Brian began to take on the older male's leadership role within the troop. And when Lody died in 2012, Brian became one of the group leaders. It was a remarkable transformation for a sick, disturbed young ape to have made. 

Brian's the one giving the backrub here.

Prosen, for his part, attributes Brian's growth to Lody and Kitty, the blind female who helped him out in his earliest, darkest period. While his therapy and the pharmaceuticals did some good, it was the community of zookeepers and animals working together that seems to have gotten him on the path to social integration. "Empathy knows no country, no species, is universal and has always been available,” Prosen said. “I discovered after arriving at the zoo that it belonged to the bonobos long before us.”

Braitman, though, does see something special in the way humans look out for other animals. So many of the traits that we thought distinguished our species have been found in other creatures, but we stand out among the animals for how we care for other species. Certainly not at all times or in all industries, but "humans are ridiculously special when it comes to our desire to intervene and heal the distress in many other species, especially our pets," Braitman told me. "I met people who'd turned their houses into rabbit sanctuaries and their ponds into otter rehab habitats."

We might not be the only tool-using mammals or the only species with a sense of self, but "the great lengths we go to help our animals is one thing that still sets us apart," she said. 

22 Jun 01:55

Hands-on with Canonical’s Orange Box and a peek into cloud nirvana

Enlarge / Looking down into the Orange Box. The ten naked NUCs are vertically mounted to the walls, while the central cavity includes a power supply, gigabit Ethernet switch, and shared storage.

Take ten high-end Intel NUCs, a gigabit Ethernet switch, a couple of terabytes of storage, and cram it all into a fancy custom enclosure. What does that spell? Orange Box.

Not the famous gaming bundle from Valve, though—this Orange Box is a sales demo tool built by Canonical. There are more than a dozen Orange Boxes in the wild right now being used as the hook to get potential Canonical users interested in trying out Metal-as-a-Service (MAAS), Juju, and other Canonical technologies. We got the chance to sit down with Canonical’s Dustin Kirkland and Ameet Paranjape for an afternoon and talk about the Orange Box: what it is, what it does, and more importantly, what it is not.

Enlarge / The rear of the Orange Box, showing Ethernet connections (they're attached to the internal switch and are used to expand the Orange Box—like if you wanted to cluster it with a twin), power, USB, and HDMI. The USB and HDMI connect to the control node.

First off, Canonical emphasized to Ars multiple times that it is not getting into the hardware business. If you really want to buy one of these things, you can have Tranquil PC build one for you (for £7,575, or about $12,700), but Canonical won’t sell you an Orange Box for your lab—there are too many partner relationships it could jeopardize by wading into the hardware game. But what Canonical does want to do is let you fiddle with an Orange Box. It makes for an amazing demo platform—a cloud-in-a-box that Canonical can use to show off the fancy services and tools it offers.

Inside the custom orange chassis are ten stripped Intel Ivy Bridge D53427RKE NUCs. Each comes with 16GB of RAM and a 120GB SSD, and they’re all connected to a gigabit Ethernet switch. One of the NUCs is the control node; its USB and HDMI ports are wired to the Orange Box’s rear panel, and that particular node also runs Canonical’s MAAS software. The box’s single internal 320W power supply runs off one 110V outlet—even when all ten nodes are going flat-out, it doesn't require a second power plug.

  1. The initial view of the Metal-as-a-Service (MAAS) console running on the first node. MAAS is an off-the-shelf Canonical tool, but here it's preconfigured to work with the Orange Box's NUCs.

  2. The MAAS console showing the status of all the NUC nodes. None have been assigned any roles, so they show state "ready."

  3. Details on one of the physical nodes. Information displayed here (and in detail below in the "raw discovery data" section) is from a "lshw" run in an ephemeral PXE-booted Linux environment.

  4. Some of the nodes' properties can be edited, including the management protocol used (the NUCs use Intel AMT, though MAAS supports a number of other options).

  5. Nodes can be started, stopped, and deployed singly or in groups.

  6. These are the different boot images that can be deployed to this particular Orange Box's nodes.

For companies that are interested, Canonical is using the Orange Box with what it’s calling Jumpstart Training. For $10,000, Canonical will show up at your business with an Orange Box, provide two days of deep-dive training, and will then leave the box with you for two weeks. There are few enough actual Orange Boxes in existence that they weren’t able to give one to us to beat on, but Kirkland and Paranjape drove out from Canonical’s Austin office to Houston to give me an abbreviated demo and let me test drive the thing.

And here’s the first thing you have to realize about the Orange Box: the hardware is cool, but it isn’t the story. It's a neat concept and it's very useful, but the capabilities it demonstrates aren't unique to the form factor—Canonical is quick to point out that it's merely a convenient demo and training tool. The default image loaded onto node 0 gives you a MAAS console preconfigured to control the nine other NUC nodes in the Orange Box using Intel AMT, but this isn’t a special build of Canonical’s MAAS—it’s an off-the-shelf application being used here to demonstrate an integrated use case.

MAAS can be used to deploy a number of different operating system images to the Orange Box nodes, which happens via PXE. Node 0 also comes with Juju, Canonical's service deployment tool, which we’ll get into in a moment. By bringing together Juju and MAAS, Canonical can quickly show off some deeply complex deployments with actual hardware rather than relying on virtual machines or quickly spun-up EC2 demo instances.

Piercing the buzzword bingo

I know that no small number of Ars readers want to hear about the cool hardware and don’t care a whole lot about the software. That reaction—"Oh, cool, check this box out!"—is precisely the point: the hardware is the hook Canonical hopes will get people interested in seeing more (and it definitely worked on us). The hardware is startling and attractive (and orange!), and it’s a hell of a lab box, but it lacks essential features it would need to be data center-ready: it has only a single internal power supply, its networking is non-redundant, and there's no inbuilt concept of hardware failover. That's OK, though: it's not supposed to be a production box.

We walked through a bunch of different installations and deployments, but before we dig into that, we need to define a few terms and describe why those terms are a big deal. Those of you with IT experience (and I’m sure that’s most of the audience!) can probably skip ahead a teeny bit, but taking time to make sure we’re all on the same page will be helpful once we really get going.

The Orange Box more than anything else shows potential Canonical customers how the Canonical way of managing servers and services works. There are two big "wow" moments you’re supposed to have while using the thing: the first comes when you see how all of Canonical’s tools work—and to the company's credit, the demos we ran through were slick and everything worked well. The second "wow," though, is when you realize that everything you do on the Orange Box demo unit using its built-in nodes can also be done at a much larger scale on real hardware or big virtual machines or on a public or private cloud provider’s gear—and, if everything works right, just as easily.

The Orange Box uses two key Canonical technologies: MAAS and Juju. MAAS, as we’ve described above, stands for Metal-as-a-Service. That name is a play on all the various thing-as-a-service names that cloud providers use: whenever you see "thing-as-a-service," the "thing" is typically being marketed as a demand-based service or product that runs "in the cloud." Amazon’s EC2 service, for example, is an "infrastructure-as-a-service" cloud offering. You can activate as many EC2 virtual computers (infrastructure) as you need for a task, be it one or a hundred, and you pay for what you use.

There’s also storage-as-a-service (like Amazon S3, Rackspace, or OpenStack), software-as-a-service (like Salesforce.com), and many other things-as-a-service. The commonality between all of them is that you pay for what you use without worrying about the hardware underneath—it’s all in "the cloud."

Of course, "the cloud" is another misunderstood, horribly abused computing term. "The cloud" means different things to different people; it most often simply means "someone else’s servers," though a cloud can be "public" or "private"—it all depends on whose servers and where the line of abstraction is drawn. A company might store its data in a "private cloud," which could mean a big OpenStack deployment in its data centers on hardware that it owns; another company might use a public cloud or hybrid approach, keeping some data and apps internal and others running on Amazon EC2 or another provider.

It can be complicated. There’s no real magic in "the cloud," nor is it a particularly revolutionary concept, but it’s an easy word to say, and it crystallizes a bunch of different concepts in ways that "grid computing" and "time-sharing" failed to do.

Juju charms

There’s more, though—beyond MAAS, Canonical has Juju. Juju is a complex tool that can do a whole lot of stuff, but the simplest way to think of it is as "apt-get but for services." Put another way, if you wanted to install a Web server on Ubuntu, you could use apt-get; if you want to deploy an entire Web application stack, you could do it with Juju.

Juju uses "charms," which are scripted recipes that can install one or more packages and also link those packages together. A "MediaWiki" charm, for example, might install the Apache Web server package, then install the MySQL database package, then install the MediaWiki package from a third-party PPA, then configure Apache to properly serve PHP, and finally configure Apache and MySQL for MediaWiki, leaving you with a functional MediaWiki instance. Juju charms can also be linked together in "bundles," enabling you to deploy complex services consisting of many meshed and interacting applications.

  1. The Juju console, also running on node 0. This drag-and-drop tool (which sits atop a much richer set of command line tools) lets you run Juju charms and bundles. Here, we have a MediaWiki charm and a MySQL charm with a relationship automatically established between them via their bundle.

  2. Details on the MediaWiki instance we've just deployed to one of the Orange Box nodes.

  3. And here's MediaWiki, up and running with basically zero effort.

Juju is a cloud deployment tool, too. You could use Juju to deploy applications locally, but the tool is most properly used in conjunction with some kind of cloud layer—for example, you could tell Juju to deploy that MediaWiki charm to Amazon EC2, and after providing your EC2 credentials, you’d have a fully functional MediaWiki server on EC2 a few minutes later. Juju can deploy services to anything it has an API for—and that, of course, includes MAAS.
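
Since Juju treats its backend as swappable, pointing the same charm at EC2 is mostly a matter of switching environments. A rough sketch, assuming an "amazon" environment has already been configured in ~/.juju/environments.yaml (the environment name here is hypothetical):

juju switch amazon      # point Juju at the EC2 environment
juju bootstrap          # spin up a control instance in EC2
juju deploy mediawiki   # the same charm, now on Amazon's machines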

Servers are cattle, not pets

Which brings us back to the Orange Box and the demo. Kirkland noted that IT departments often treat servers as special pampered pets—you might buy four servers to function as a Hadoop cluster and then spend time polishing and tuning Hadoop for those four servers. And that’s all those servers are good for, too—they were bought to a certain spec, and they’re Hadoop servers until you’re done with your Hadoop project.

Servers, Kirkland explained, shouldn’t be special pets—servers should be cattle. In the Canonical universe, if you have MAAS in your data center, you should be able to deploy a Hadoop Juju charm out to four MAAS servers that fit your desired performance criteria and start work rather than requiring bespoke hardware. Further, you should be able to scale up and down as needed; if your Hadoop workload is low, you can destroy two of the four boxes and retask them to something else by deploying a different charm to them. If your Hadoop workload goes up dramatically, you can reclaim them or even spin up additional ones.

MAAS is aware of a node’s capabilities and specs—it uses an ephemeral PXE image to quickly boot and assay nodes before assigning them to the "ready" pool. Once inventoried, you can tag machines in the pool and manage them as groups if you need to. For our Orange Box demo, all nine of the non-management nodes were visible and usable (as well as a KVM virtual machine running on node 0).

As with any vendor’s picture of how the data center should work, it makes for a compelling story—as long as everything is in the same world running the same management layer. To this end, Canonical has made sure that its MAAS tool can deploy not just Ubuntu images but other Linux and Windows images as well.

With very little effort, Kirkland and Paranjape quickly set up a small Hadoop cluster using Juju charms. Juju on the Orange Box is preconfigured to work with MAAS as its backend, so it passes instructions via MAAS’s RESTful API. MAAS actually does initial operating system installations and then executes the specific Juju scripts. We ended up with a four-node Hadoop environment within two minutes—one master node, two worker nodes, and one MySQL database node.

Kirkland kicked off a quick MapReduce job on our new Hadoop cluster, which took about seven minutes to run; after it completed, Kirkland quickly requisitioned the remaining five unused nodes in the Orange Box and transformed them into Hadoop compute nodes simply by dialing up the number of compute nodes in the Juju console. The process required no reconfiguration of any of the existing nodes—or rather, that reconfiguration was done transparently by the Juju charm. When it was done, we re-ran the same MapReduce job and it completed far faster (not at all surprising, since we’d almost tripled the amount of horsepower being thrown at the job).
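
That dial-up step has a one-line command-line equivalent. A sketch, assuming the bundle named its worker service "hadoop-slavecluster" (the actual service name in the demo bundle may differ):

juju add-unit -n 5 hadoop-slavecluster   # grow the compute pool by five machines
juju remove-unit hadoop-slavecluster/4   # shrink it again when the job is done

Behind the scenes, Juju asks MAAS for machines, installs the charm on each, and fires the relation hooks so the new nodes register with the Hadoop master on their own.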

  1. After nuking our MediaWiki bundle, we built a small Hadoop cluster with another Juju bundle. The bundle also included MySQL and Hive to help tie Hadoop to MySQL.

  2. Here are the details of the Hadoop namenode, including the Juju relations between it and the other active charms.

  3. Our MapReduce run took almost nine minutes with two nodes doing the computing. That's too slow. Let's crank it UP.

  4. Adjusting the number of datanodes from two to six with a keystroke. Juju handles the relationships, reconfiguring Hadoop and MySQL on the fly, without you having to actually do anything.

  5. MapReduce goes faster now!

  6. Here we're dropping a Ganglia charm into the mix. Ganglia is a monitoring tool, like Nagios or Munin. Once installed, you create a relationship between the new charm and the Hadoop cluster, and...

  7. ...Ganglia goes to work monitoring the nodes. Juju handles all of the underlying configuration.

I also got to watch Kirkland demonstrate a complex OpenStack deployment via MAAS and Juju to the Orange Box components. This is one of the same demos that was shown off last month at the OpenStack Summit; there were more than a dozen separate Juju charms executed as a bundle to install and configure OpenStack with a whole array of production capabilities. The removal of the admin from the setup process was a bit shocking—I’ve been through complex enterprise VMware deployments before, and watching OpenStack gamely set itself up before my eyes was amazing. We blasted through what would probably have been a two-day traditional deployment in minutes.

Even crazier, Kirkland informed me that if we wanted to, we could switch Juju’s backend away from the preconfigured MAAS setting and use our newly deployed OpenStack cluster as the basis for further Juju deployments. After all, at least in this instance, it’s all hardware rather than virtual machines. Cue the "Inception" music.

  1. Here we have an OpenStack bundle. It's not an overly complex OpenStack setup, but it's still made up of a whole lot of charms and complex relationships. This kind of deployment might take days to roll out manually using a runbook; we did it in minutes.

  2. Logged into the OpenStack dashboard, running on the metal.

  3. This is a much larger OpenStack deployment bundle from Jujucharms.com, a Canonical site running the same Juju admin console as the Orange Box. Here you can look through bundles and their relationships without actually deploying them to anything.

  4. Another complex Web application stack from Jujucharms.com, this one duplicating the production Web stack of a real customer.

A gateway drug

I only spent a few hours playing with the Orange Box, but it still told a pretty compelling story—life in Canonical cloud land looks pretty sweet. I found myself asking halfway through the demo why I didn’t just ditch the four servers in my closet and replace them with four NUCs running MAAS—surely down that path would lie computing nirvana.

Of course, that’s exactly the point: the Orange Box is that taste of heroin that the dealer gives away for free to get you on board. And man, is it attractive. However, as Canonical told me about a dozen times, the company is not making them to sell—it's making them to use as revenue-driving opportunities and to quickly and effectively demo Canonical’s vision of the cloud. And it does make for a hell of an impressive demo environment—the slickly preconfigured MAAS + Juju setup lets Canonical throw down and show off dozens of different services and application configurations in a very short amount of time.

Enlarge / My very own cloud demo station! The Orange Box as it was shown to me in my living room, with the Juju console in the background showing an operational OpenStack deployment.

You certainly don’t need an Orange Box to start fiddling with MAAS or Juju—in fact, with Juju, you don’t really even need hardware at all. You can start deploying charms and bundles to Amazon EC2 or any other big cloud provider that Juju supports—or even write your own charms. If the demo showed me anything, it’s that Canonical is sitting on some attractive technology—and in keeping with the company’s roots, it’s all open source. Canonical would certainly love to sell you a support agreement—that’s how it gets revenue—but you don’t need to pay to play.

Still, all that being said, I wish my closet had an Orange Box in it. That thing is hella cool.

20 Jun 21:09

The Mathematical Dialect Quiz

by Ben Orlin

  1. What do you call a rigorous demonstration that a statement is true?
    1. If “proof,” then you’re a mathematician
    2. If “experiment,” then you’re a physicist
    3. If you have no word for this concept, then you’re an economist

  2. What do you call a slow, painful, computationally intense method of solving a problem?
    1. If “engineering,” then you’re a mathematician
    2. If “mathematics,” then you’re an engineer

  3. What do you call a person who is in their first job after a PhD?
    1. If “postdoc,” then you’re a mathematician or physicist
    2. If “assistant professor,” then you’re an economist
    3. If “wealthy,” then you’re a computer scientist
    4. If you have no word for a job after a PhD, then you’re in the humanities, and you have our condolences

  4. What do you call a calculator with graphing capabilities?
    1. If “an antique,” then you’re a computer scientist
    2. If “my precious,” then you’re an engineer
    3. If “the poor man’s Wolfram Alpha,” then you’re a mathematician
    4. If “kinda hard to use,” then you’re an honest mathematician

  5. How do you pronounce “Pythagorean”?
    1. If you pronounce it “pithAGorEan,” then you’re a mathematician
    2. If you pronounce it “PITHaGORean,” then you’re a physicist
    3. If you just mumble the word and hope no one notices, then you’re a TA

  6. What name do you use for the person who invented calculus?
    1. If “Leibniz,” then you’re a mathematician
    2. If “Newton,” then you’re a physicist
    3. If “magical wizard,” then you’re probably not ready for grad school

  7. What do you say after successfully proving your point beyond all doubt?
    1. If “QED,” then you’re a mathematician
    2. If “the prosecution rests,” then you’re a mathematician with a flair for drama
    3. If you do not believe proof beyond all doubt is possible, then you’re a scientist

  8. What do you call a simplified representation of reality, such as imagining a physical system with no friction or air resistance?
    1. If “a model,” then you’re a computer scientist
    2. If “an approximation,” then you’re an engineer
    3. If you call this “reality,” then you’re an economist

  9. How do you refer to a piece of work that suffers from one small but visible mistake?
    1. If “rough,” then you’re an engineer
    2. If “as good as it’s going to get,” then you’re a computer scientist
    3. If “worthless,” then you’re a mathematician

  10. What do you call a formal gathering of professionals from your field?
    1. If “a conference,” then you’re a physicist
    2. If “a start-up,” then you’re a computer scientist
    3. If “an advisory panel to the president,” then you’re an economist
    4. If “a game of D&D,” then you’re a mathematician

Thanks for reading! If you prefer bad gifs to bad drawings, you might also check out The Math Aficionado’s Guide to High Fives.


20 Jun 21:02

martinlkennedy: Pages from the Star Wars Question and Answer...

martinlkennedy:

Pages from the Star Wars Question and Answer Book about Computers (1983). I’ve learned that C-3PO is good at designing Joy Division album covers and that in the future there will be giant mechanical mice!

Images courtesy of Paxton Holley. You can see the full set here

28 Nov 12:49

Doctor Who TARDIS Trash Can

by Adam
Quit using those standard trash cans with their pathetic storage capacity that corresponds to their actual physical size – the Doctor Who TARDIS trash can is the only trash receptacle on the market that will easily store tons of your garbage and then dematerialize. Buy it for $89.99 via ThinkGeek.com.

19 Mar 01:12

Why there is no Hitchhiker’s Guide to Mathematics for Programmers

by j2kun

Do you really want to get better at mathematics?

Remember when you first learned how to program? I do. I spent two years experimenting with Java programs on my own in high school. Those two years collectively contain the worst and most embarrassing code I have ever written. My programs absolutely reeked of programming no-nos. Hundred-line functions and even thousand-line classes, magic numbers, unreachable blocks of code, ridiculous code comments, a complete disregard for sensible object orientation, negligence of nearly all logic, and type-coercion that would make your skin crawl. I committed every naive mistake in the book, and for all my obvious shortcomings I considered myself a hot-shot programmer! At least I was learning a lot, and I was a hot-shot programmer in a crowd of high-school students interested in game programming.

Even after my first exposure and my commitment to get a programming degree in college, it was another year before I knew what a stack frame or a register was, two more before I was anywhere near competent with a terminal, three more before I fully appreciated functional programming, and to this day I still have an irrational fear of networking and systems programming (the first time I manually edited the call stack I couldn’t stop shivering with apprehension and disgust at what I was doing).

I just made this function call return to a *different* place than where it was called from.

In a C++ programming class I was writing a Checkers game, and my task at the moment was to generate a list of all possible jump-moves that could be made on a given board. This naturally involved a depth-first search and a couple of recursive function calls, and once I had something I was pleased with, I compiled it and ran it on my first non-trivial example. Lo and behold (even having followed test-driven development!), I was hit hard in the face by a segmentation fault. It took hundreds of test cases and more than twenty hours of confusion before I found the error: I was passing a reference when I should have been passing a pointer. In particular (and this is the aggravating part, as most programmers know), the fix required the change of about 4 characters. Twenty hours of work for four characters! Once I begrudgingly verified it worked (of course it worked, it was so obvious in hindsight), I promptly took the rest of the day off to play Starcraft.

Of course, as every code-savvy reader will agree, all of this drama is part of the process of becoming a strong programmer. One must study the topics incrementally, make plentiful mistakes and learn from them, and spend uncountably many hours in a state of stuporous befuddlement before one can be considered an experienced coder. This gives rise to all sorts of programmer culture, unix jokes, and reverence for the masters of C that make the programming community so lovely to be a part of. It’s like a secret club where you know all the handshakes. And should you forget one, a crafty use of awk and sed will suffice.

"Semicolons of Fury" was the name of my programming team in the ACM collegiate programming contest. We placed Cal Poly third in the Southern California Regionals.

“Semicolons of Fury” was the name of my programming team in the ACM collegiate programming contest. We placed Cal Poly third in the Southern California Regionals, and in my opinion our success was due in large part to the dynamics of our team. I (center, in blue) have since gotten a more stylish haircut.

Now imagine someone comes along and says,

“I’m really interested in learning to code, but I don’t plan to write any programs and I absolutely abhor tracing program execution. I just want to use applications that others have written, like Chrome and iTunes.”

You would laugh at them! And the first thing that would pass through your mind is either, “This person would give up programming after the first twenty minutes,” or “I would be doing the world a favor by preventing this person from ever writing a program. This person belongs in some other profession.” This lies in stark opposition to the common chorus that everyone should learn programming. After all, it’s a constructive way to think about problem solving and a highly employable skill. In today’s increasingly technological world, it literally pays to know your computer better than a web browser. (Ironically, I’m writing this on my Chromebook, but in my defense it has a terminal with ssh. Perhaps more ironically, all of my real work is done with paper and pencil.)

Unfortunately this sentiment is mirrored among most programmers who claim to be interested in mathematics. Mathematics is fascinating and useful and doing it makes you smarter and better at problem solving. But a lot of programmers think they want to do mathematics, and they either don’t know what “doing mathematics” means, or they don’t really mean they want to do mathematics. The appropriate translation of the above quote for mathematics is:

“Mathematics is useful and I want to be better at it, but I won’t write any original proofs and I absolutely abhor reading other people’s proofs. I just want to use the theorems others have proved, like Fermat’s Last Theorem and the undecidability of the Halting Problem.”

Of course no non-mathematician is really going to understand the current proof of Fermat’s Last Theorem, just as no fledgling programmer is going to attempt to write a (quality) web browser. The point is that the sentiment is in the wrong place. Mathematics is cousin to programming in terms of the learning curve, obscure culture, and the amount of time one spends confused. And mathematics is as much about writing proofs as software development is about writing programs (it’s not everything, but without it you can’t do anything). Honestly, it sounds ridiculously obvious to say it directly like this, but the fact remains that people feel like they can understand the content of mathematics without being able to write or read proofs.

I want to devote the rest of this post to exploring some of the reasons why this misconception exists. My main argument is that the reasons have to do more with the culture of mathematics than the actual difficulty of the subject. Unfortunately, as of this writing I don’t have a proposed “solution.” All I can claim is that the problem exists: programmers can have mistaken views of what mathematics involves. I don’t propose a way to make mathematics easier for programmers, although I do try to make the content on my blog as clear as possible (within reason). I honestly do believe that the struggle and confusion builds mathematical character, just as the arduous bug-hunt builds programming character. If you want to be good at mathematics, there is no other way.

All I want to do with this article is to detail why mathematics can be so hard for beginners, to explain a few of the secret handshakes, and hopefully to bring an outsider a step closer to becoming an insider. So read on, and welcome to the community.

Travelling far and wide

Perhaps one of the most prominent objections to devoting a lot of time to mathematics is that it can be years before you ever apply mathematics to writing programs. On one hand, this is an extremely valid concern. If you love writing programs and designing software, then mathematics is nothing more than a tool to help you write better programs.

But on the other hand, the very nature of mathematics is what makes it so applicable, and the only way to experience nature is to ditch the city entirely. Indeed, I provide an extended example of this in my journalesque post on introducing graph theory to high school students: the point of the whole exercise is to filter out the worldly details and distill the problem into a pristine mathematical form. Only then can we see its beauty and wide applicability.

Here is a more concrete example. Suppose you were trying to encrypt the contents of a message so that nobody could read it even if they intercepted the message in transit. Your first ideas would doubtlessly be the same as those of our civilization’s past: substitution ciphers, Vigenere ciphers, the Enigma machine, etc. Regardless of what method you come up with, your first thought would most certainly not be, “prime numbers so big they’ll make your pants fall down.” Of course, the majority of encryption methods today rely on very deep facts (or rather, conjectures) about prime numbers and other mathematical objects (“group presentations so complicated they’ll orient your Möbius band,” anyone?). But it took hundreds of years of number theory to get there, and countless deviations into other fields and dead-ends.

Of course there are other examples much closer to contemporary fashionable programming techniques. One such example is boosting. While we have yet to investigate boosting on this blog, the basic idea is that one can combine a bunch of classifiers, each performing only slightly better than chance, into an ensemble whose accuracy is arbitrarily close to perfect. In a field dominated by practical applications, this result is purely the product of mathematical analysis.

And of course boosting in turn relies on the mathematics of probability theory, which in turn relies on set theory and measure theory, which in turn relies on real analysis, and so on. One could get lost for a lifetime in this mathematical landscape! And indeed, the best way to get a good view of it all is to start at the bottom. To learn mathematics from scratch. The working programmer simply doesn’t have time for that.

What is it, really, that people have such a hard time learning?

Most of the complaints about mathematics come understandably from notation and abstraction. And while I’ll have more to say on that below, I’m fairly certain that the main obstacle is a lack of familiarity with the basic methods of proof.

While methods of proof are semantical by nature, in practice they form a scaffolding for all of mathematics, and as such one could better characterize them as syntactical. I’m talking, of course, about the four basics: direct implication, proof by contradiction, contrapositive, and induction. These are the loops, if statements, pointers, and structs of rigorous argument, and there is simply no way to understand the mathematics without a native fluency in this language.

The “Math Major Sloth” is fluent. Why aren’t you?
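
To make “the basic four” concrete, here is a proof by contrapositive of a classic warm-up claim, written the way a textbook would write it:

Claim: if n^2 is even, then n is even. Proof: we show the contrapositive, that if n is odd then n^2 is odd. Suppose n is odd, so n = 2k + 1 for some integer k. Then n^2 = 4k^2 + 4k + 1 = 2(2k^2 + 2k) + 1, which is odd. QED.

Notice there is nothing clever here: unwrap the definition of “odd,” compute, and rewrap the definition. That mechanical quality is exactly what mathematicians mean by “trivial.”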

So much of mathematics is built up by chaining together a multitude of absolutely trivial statements which are amenable to proof by the basic four. I’m not kidding when I say they are absolutely trivial. A professor of mine once said,

If it’s not completely trivial, then it’s probably not true.

I can’t agree more with this statement. Of course, there are many sophisticated proofs in mathematics, but an overwhelming majority of (very important) facts fall in the trivial category. That being said, trivial can sometimes be relative to one’s familiarity with a subject, but that doesn’t make the sentiment any less right. Drawing up a shopping list is trivial once you’re comfortable with a pencil and paper and you know how to write (and you know what the words mean). There are certainly works of writing that require a lot more than what it takes to write a shopping list. Likewise, when we say something is trivial in mathematics, it’s because there’s no content to the proof outside of using definitions and a typical application of the basic four methods of proof. This is the “holding a pencil” part of writing a shopping list.

And as you probably know, there are many, many more methods of proof than just the basic four. Proof by construction, by exhaustion, case analysis, and even picture proofs have a place in all fields of mathematics. More relevantly for programmers, there are algorithm termination proofs, probabilistic proofs, loop invariants to design and monitor, and the ubiquitous NP-hardness proofs (I’m talking about you, Travelling Salesman Problem!). There are many books dedicated to showcasing such techniques, and rightly so. Clever proofs are what mathematicians strive for above all else, and once a clever proof is discovered, the immediate first step is to try to turn it into a general method for proving other facts. Fully fleshing out such a process (over many years, showcasing many applications and extensions) is what makes one a world-class mathematician.

An entire book dedicated to the probabilistic method of proof, invented by Paul Erdős and sown into the soil of mathematics over the course of his lifetime.

Another difficulty faced by programmers new to mathematics is the inability to check your proof absolutely. With a program, you can always write test cases and run them to ensure they all pass. If your tests are solid and plentiful, the computer will catch your mistakes and you can go fix them.

There is no corresponding “proof checker” for mathematics. There is no compiler to tell you that it’s nonsensical to construct the set of all sets, or that it’s a type error to quotient a set by something that’s not an equivalence relation. The only way to get feedback is to seek out other people who do mathematics and ask their opinion. Working solo, mathematics involves a lot of backtracking, revising mistaken assumptions, and stretching an idea to its breaking point to see that it didn’t even make sense to begin with. This is “bug hunting” in mathematics, and it can often completely destroy a proof and make one start over from scratch. It feels like writing a few hundred lines of code only to have the final program run “rm -rf *” on the directory containing it. It can be really. really. depressing.

It is an interesting pedagogical question in my mind whether there is a way to introduce proofs and the language of mature mathematics in a way that stays within a stone’s throw of computer programs. It seems like a worthwhile effort, but I can’t think of anyone who has sought to replace a classical mathematics education entirely with one based on computation.

Mathematical syntax

Another major reason programmers are unwilling to give mathematics an honest effort is the culture of mathematical syntax: it’s ambiguous, and there’s usually nobody around to explain it to you. Let me start with an example of why this is not a problem in programming. Let’s say we’re reading a Python program and we see an expression like this:

foo[2]

The nature of (most) programming languages dictates that there are a small number of ways to interpret what’s going on here:

  1. foo could be a list/tuple, and we’re accessing the third element in it.
  2. foo could be a dictionary, and we’re looking up the value associated with the key 2.
  3. foo could be a string, and we’re extracting the third character.
  4. foo could be a custom-defined object, whose __getitem__ method is defined somewhere else and we can look there to see exactly what it does.

There are probably other times this notation can occur (although I’d be surprised if number 4 didn’t by default capture all possible uses), but the point is that any programmer reading this program knows enough to intuit that square brackets mean “accessing an item inside foo with identifier 2.” Part of the reason programs can be very easy to read is precisely that someone had to write a parser for the programming language, and so they had to literally enumerate all possible uses of any expression form.
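
Case 4 deserves a quick illustration, since it is the “anything goes” case. Here is a minimal sketch with a hypothetical class:

class Shelf:
    """A made-up container whose brackets wrap around the end."""
    def __init__(self, items):
        self.items = list(items)

    def __getitem__(self, index):
        # Square brackets on a Shelf land here, so the class author
        # decides what foo[2] means: in this case, wraparound access.
        return self.items[index % len(self.items)]

foo = Shelf(["a", "b", "c"])
print(foo[2])   # "c"
print(foo[5])   # also "c", thanks to the wraparound

Even in this worst case there is exactly one place to look, the __getitem__ definition, to pin down what the brackets mean.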

The other extreme is the syntax of mathematics. The daunting fact is that there is no bound to what mathematical notation can represent, and much of mathematical notation is inherently ad hoc. For instance, if you’re reading a math paper and you come across an expression that looks like this

\delta_i^j

The possibilities of what this could represent are literally endless. Just to give the unmathematical reader a taste: \delta_i could be an entry of a sequence of numbers of which we’re taking arithmetic j^\textup{th} powers. The use of the letter delta could signify a slightly nonstandard way to write the Kronecker delta function, for which \delta_i^j is one precisely when i=j and zero otherwise. The superscript j could represent dimension. Indeed, I’m currently writing an article in which I use \delta^k_n to represent k-dimensional simplex numbers, specifically because I’m relating the numbers to geometric objects called simplices, and the letter for those is a capital \Delta. The fact is that using notation in a slightly non-standard way does not invalidate a proof in the way that it can easily invalidate a program’s correctness.

What’s worse is that once mathematicians get comfortable with a particular notation, they will often “naturally extend” or even silently drop things like subscripts and assume their reader understands and agrees with the convenience! For example, here is a common difficulty that beginners face in reading math that involves use of the summation operator. Say that I have a finite set of numbers whose sum I’m interested in. The most rigorous way to express this is not far off from programming:

Let S = \left \{ x_1, \dots, x_n \right \} be a finite set of things. Then their sum is finite:

\displaystyle \sum_{i=1}^n x_i

The programmer would say “great!” Assuming I know what “+” means for these things, I can start by adding x_1 + x_2, add the result to x_3, and keep going until I have the whole sum. This is really just a left fold of the plus operator over the list S.
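
In Python that reading is nearly a direct transcription. A small sketch, with concrete numbers standing in for x_1, \dots, x_n:

from functools import reduce
import operator

S = [3, 1, 4, 5]   # stand-ins for x_1, ..., x_n

# The summation, read literally: a left fold of + over the list.
total = reduce(operator.add, S)

# For numbers, Python's built-in does the same thing.
assert total == sum(S)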

But for mathematicians, the notation is far more flexible. For instance, I could say

Let S be finite. Then \sum_{x \in S} x is finite.

Things are now more vague. We need to remember that the \in symbol means “in.” We have to realize that the strict syntax of having an iteration variable i is no longer in effect. Moreover, the order in which the things are summed (which for a left fold is strictly prescribed) is arbitrary. If you asked any mathematician, they’d say “well of course it’s arbitrary, in an abelian group addition is commutative so the order doesn’t matter.” But realize, this is yet another fact that the reader must be aware of to be comfortable with the expression.

But it still gets worse.

In the case of the capital Sigma, there is nothing syntactically stopping a mathematician from writing

\displaystyle \sum_{\sigma \in \Sigma} f_{\Sigma}(\sigma)

Though experienced readers may chuckle, they will have no trouble understanding what is meant here. That is, syntactically this expression is unambiguous enough to avoid an outcry: \Sigma just happens to also be a set, and saying f_{\Sigma} means that the function f is constructed in a way that depends on the choice of the set \Sigma. This often shows up in computer science literature, as \Sigma is a standard letter to denote an alphabet (such as the binary alphabet \left \{ 0,1 \right \}).

One can even take it a step further and leave out the set we’re iterating over, as in

\displaystyle \sum_{\sigma} f_{\Sigma}(\sigma)

since it’s understood that the lowercase letter (\sigma) is usually an element of the set denoted by the corresponding uppercase letter (\Sigma). If you don’t know Greek and haven’t seen that convention enough times to recognize it, you would quickly get lost. But programmers must realize: this is just the mathematician’s secret handshake. A mathematician would be just as bewildered and confused upon seeing some of the pointer arithmetic hacks C programmers invent, or the always awkward infinite for loop.

for (;;) {
   ;
}

And once the paper you’re reading is over, and you start reading a new paper, chances are their conventions and notation will be ever-so-slightly different, and you have to keep straight what means what. It’s as if the syntax of a programming language changed depending on who was writing the program!

Perhaps understandably, the frustrations that mathematicians feel when dealing with varying syntax across different papers and books are collectively called “technicalities.” And as the mathematics becomes more advanced, the ability to fluidly transition between high-level intuition and technical details is all but assumed.

The upshot of this whole conversation is that the reader of a mathematical proof must hold in mind a vastly larger body of absorbed (and often frivolous) knowledge than the reader of a computer program.

At this point you might see all of this as my complaining, but in truth I’m saying this notational flexibility and ambiguity is a benefit. Once you get used to doing mathematics, you realize that technical syntax can make something which is essentially simple seem much more difficult than it is. In other words, we absolutely must have a way to make things completely rigorous, but in developing and presenting proofs the most important part is to make the audience understand the big picture, see the intuition behind the symbols, and believe the proofs. For better or worse, mathematical syntax is just a means to that end, and the more abstract the mathematics becomes, the more flexibility mathematicians need to keep themselves afloat in a tumultuous sea of notation.

You’re on your own, unless you’re around mathematicians

That brings me to my last point: reading mathematics is much more difficult than conversing about mathematics in person. The reason for this is once again cultural.

Imagine you’re reading someone else’s program, and they’ve defined a number of functions like this (pardon the single-letter variable names; I just don’t like “foo” and “bar”).

def splice(L):
   ...

def join(*args):
   ...

def flip(x, y):
   ...

There are two parts to understanding how these functions work. The first part is that someone (or a code comment) explains to you at a high level what they do to an input. The second part is to work out the finer details. These “finer details” are usually completely spelled out by the documentation, but it’s still a good practice to experiment with it yourself (there is always the possibility for bugs, of course).

In mathematics there is no unified documentation, just a collective understanding, scattered references, and spoken folklore. You’re lucky if a textbook has a table of notation in the appendix. You are expected to derive the finer details and catch the errors yourself. Even if you are told the end result of a proposition, it is often followed by, “The proof is trivial.” This is the mathematician’s version of piping output to /dev/null, and literally translates to, “You’re expected to be able to write the proof yourself, and if you can’t then you’re not ready to continue.”

Indeed, the opposite problems are familiar to a beginning programmer when they aren’t in a group of active programmers. Why is it that people give up or don’t enjoy programming? Is it because they have a hard time getting honest help from rudely abrupt moderators on help websites like stackoverflow? Is it because often when one wants to learn the basics, they are overloaded with the entirety of the documentation and the overwhelming resources of the internet and all its inhabitants? Is it because compiler errors are nonsensically exact, but very rarely helpful? Is it because when you learn it alone, you are bombarded with contradicting messages about what you should be doing and why (and often for the wrong reasons)?

All of these issues definitely occur, and I see them contribute to my students’ confusion in my introductory Python class all the time. They try to look on the web for information about how to solve a very basic problem, and they come back to me saying they were told it’s more secure to do it this way, or more efficient to do it this way, or that they need to import something called the “heapq module.” When really the goal is not to solve the problem in the best way possible or in the shortest amount of code, but to show them how to use the tools they already know about to construct a program that works. Without a guiding mentor it’s extremely easy to get lost in the jungle of people who think they know what’s best.

As far as I know there is no solution to this problem faced by the solo programming student (or the solo anything student). And so it stands for mathematics: without others doing mathematics with you, it’s very hard to identify your issues and see how to fix them.

Proofs, Syntax, and Community

For the programmer who is truly interested in improving their mathematical skills, the first line of attack should now be obvious. Become an expert at applying the basic methods of proof. Second, spend as much time as it takes to clear up what mathematical syntax means before you attempt to interpret the semantics. And finally, find others who are interested in seriously learning some mathematics, and work on exercises (perhaps a weekly set) with them. Start with something basic like set theory, and write your own proofs and discuss each others’ proofs. Treat the sessions like code review sessions, and be the compiler to your partner’s program. Test their arguments to the extreme, and question anything that isn’t obvious or trivial. It’s not uncommon for easy questions with simple answers and trivial proofs to create long and drawn out discussions before everyone agrees it’s obvious. Embrace this and use it to improve.

Short of returning to your childhood and spending more time doing recreational mathematics, that is the best advice I can give.

Until next time!


19 Mar 01:06

Long-Exposure Photographs – Star Trails

by Redação Garotas Nerds

When the view from your window is outer space, it isn't hard to take incredibly beautiful photographs, but NASA astronaut Don Pettit managed to go a step further and capture even more impressive photos of star trails using the long-exposure technique.

Long exposure means leaving the camera's shutter open longer than usual, so the sensor gathers more light than normal. This produces striking effects and captures things that generally can't be seen at all, including the trails the stars trace across the sky as time passes, which we obviously cannot see with the naked eye.

That is exactly how Pettit took these incredible photos while aboard the International Space Station in 2012. He explains how he managed it despite the camera's exposure-time limits:

My star trail images are made with exposure times of about 10 to 15 minutes. With modern digital cameras, however, 30 seconds is about the longest exposure possible, because electronic detector noise effectively snows out the image. To achieve the longer exposures, I do what many amateur astronomers do: take multiple 30-second exposures and then "stack" them using imaging software, producing the longer exposure.
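
For readers who want to reproduce the "stacking" step on their own frames, it is only a few lines in software. Here is a minimal sketch using the Pillow imaging library, assuming a folder of sequential 30-second exposures (the folder and file names are hypothetical):

from pathlib import Path
from PIL import Image, ImageChops

frames = sorted(Path("star_frames").glob("*.jpg"))

stacked = Image.open(frames[0])
for path in frames[1:]:
    # Keep the brighter pixel at each position, so each star's motion
    # across the frames accumulates into a continuous trail.
    stacked = ImageChops.lighter(stacked, Image.open(path))

stacked.save("star_trails.jpg")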

Below are a few more of these incredible images; to see them all, visit the Flickr album.

Via

19 Mar 00:29

The UN’s Silence on Discrimination and Violence Against Non-Believers

by Michael De Dora

Note: the United Nations Human Rights Council this Friday will close its monthlong 22nd regular session in Geneva, Switzerland. Three secularist groups—the Center for Inquiry, International Humanist and Ethical Union, and British Humanist Association—have been active at the session, delivering statements and lobbying on issues such as freedom of conscience. CFI’s main representative in Geneva is Dr. Elizabeth O’Casey. The following is a report from Ms. O’Casey regarding efforts by CFI and the IHEU to include in a resolution on freedom of religion or belief language referencing non-religious persons. 

By Dr. Elizabeth O’Casey 

At the end of last week, the European Union (EU), supported by the South American group, tabled a resolution at the United Nations Human Rights Council (UNHRC) on Freedom of Religion or Belief. Now, whilst any resolution that highlights the importance of protecting every individual’s right to freedom of religion or belief is always extremely welcome, what is shamefully inadequate about this resolution is that it expressly excludes any concern regarding discrimination and violence against non-believers.

Elizabeth O'Casey

I attended one of the informal consultation meetings on this resolution and argued for ‘non-believers’ as a category meriting explicit mention within the context of discrimination, particularly given the severity and breadth of such discrimination against non-believers and people of no religion around the world. In a follow-up email, sent on behalf of the Center for Inquiry (CFI) and the International Humanist and Ethical Union (IHEU), we suggested that a mention of non-believers might be included specifically in a paragraph on violence against individuals. Where the original paragraph expressed deep concern at “the increasing number of acts of violence, directed against individuals, including persons belonging to religious minorities,” we thought it apposite to mention non-believers as amongst such persons.

The EU’s rather unconsidered reply to our suggestion was that non-believers are already covered in the resolution, by the ‘right to belief.’ In response to this almost flippant, if true, statement by the EU, I want to note two things.

Firstly, it might be pointed out that whilst the rights of religious minorities are, as with non-believers, covered by the right to freedom of religion or belief, in their case the authors of this resolution saw fit (rightly) to mention explicitly this group of people in order to highlight the types of discrimination they suffer from. I am left baffled as to why the authors did not treat non-believers with the same consideration.

Secondly, what the EU representative and her colleagues have failed to understand is the importance, within the context of this type of resolution, of expressly underlining the institutionalised persecution and discrimination that non-believers are subjected to globally, as well as making explicit ‘non-believers’ as a category of persons who come under the protection of any right to freedom of religion or belief. The necessity to make this fact plain is demonstrated through the apparent ignorance of it by so many governments across the world; an ignorance manifested through, for example, the use of the death penalty as a potential punishment for atheism in seven countries, and the effective criminalisation of atheism in many more.

In specifically excluding non-believers within the context of violent discrimination, the EU’s resolution fails to acknowledge the bravery and resilience of millions of non-believers. One example of such a non-believer is Kacem El Ghazzali, a colleague of mine at the UNHRC. Kacem is a Moroccan refugee now living in Switzerland who, after posting several articles online about his atheism, was a victim of death threats, physical violence, and discrimination by agents of the Moroccan State. Kacem and the many others like him deserve recognition in forums such as the UNHRC. The EU’s refusal to include any direct reference to non-believers as a group meriting special protection from religious intolerance fails people like Kacem. It also fails the one in six people across the globe who do not self-identify as religious. That’s a lot of people to fail.

 

15 Mar 19:26

Minute Physics Explains The Whole Universe In Under Three Minutes [Video]

by Ian Chant

Never let it be said that the folks at Minute Physics shy away from delving into the really big questions. In the latest episode, the gang takes on the daunting query “What is the Universe?” The answer, predictably, is, like, whoa, man, raising more questions about things like whether we can really know that parts of the Universe we can’t observe actually exist, and if concepts like the future or mathematics are part of the universe we live in, or something else entirely. It’s a bit of a head trip, but since you’ve probably got today off anyway, go ahead and ponder the nature of all things great and small from your bed this morning. You’ve earned it.

(via Minute Physics)

Relevant to your interests

03 Nov 17:25

It's not okay to threaten to rape people you don't like: Why I stand with Rebecca Watson

by Maggie Koerth-Baker

Every now and then, I am reminded of how lucky I am. I'm lucky that none of my readers has ever responded to a comment I made, which they didn't like, by calling me ugly. I'm lucky that they've never called me a cunt or a whore. I'm lucky that they've never threatened to rape me and then called me a humorless bitch when I pointed out how messed up that was. In general, the worst comments I've ever had directed to me, here, were from people accusing me of being a paid shill for Big Conspiracy, which is just funny.

But that shouldn't be luck, guys. My experience should not represent a minority experience among the female science bloggers I know. (And it is.) I shouldn't have to feel like thanking you, the BoingBoing readers, for being kind enough to not treat me like shit just because I'm a lady person.

Treating people with respect should not be a controversial position. It should not be a mindblowingly crazy idea to point out the fact that women are quite often treated as objects and, thus, have to deal with a lot more potentially threatening situations than men do. It shouldn't be offensive to say, hey, because of that fact, it's generally not a good idea to follow a woman you've never spoken to into an elevator late at night and ask her to come to your hotel room. Chances are good that you will make her feel threatened, rather than complimented.

And, even if you disagree, it's still totally not okay to threaten to rape people you disagree with. Seriously. Other than the specific bit about rape, we should have all learned this in preschool. And the fact that so many of the people engaging in this behavior claim to be rational thinkers and members of a community I strongly identify with ... well, that just makes me want to vomit. I honestly don't know what else to say.

Read Rebecca Watson's full article, Sexism in the Skeptic Community



01 Nov 19:23

Ghosts and Stuff

by Katie McKissick






31 Oct 02:39

Jordan and Roberts: ‘Carpool Buddies of Doom’ creators … of doom

by Michael May

Rafer Roberts is running a Kickstarter campaign for his Plastic Farm comic, but that doesn’t mean he can’t do other awesome things, too, like illustrate a three-page comic Justin Jordan (The Strange Talent of Luther Strode) wrote about Thanos, Darkseid and some very special coffee. I was going to lament that I can’t actually buy Thanos’ “Titan Love Letter” blend, but after hearing him and Darkseid talk about it, maybe it’s best that way. Grab a cup of your own favorite joe, read the rest of the comic, then go help Roberts out with a Kickstarter donation. That sounds like a great way to start the day.

25 Oct 15:53

How Do I Get Rid of the DRM on My Ebooks and Video?

by Thorin Klosowski

I just heard about the woman whose Kindle ebooks were wiped when her account was suspended, and it got me thinking: Do I really own anything that I've bought with DRM? It seems like I could lose it at any time, or lose the ability to view something just because I switched devices. How can I get rid of the DRM so I can keep my own backups?

Sincerely,
Sick of DRM

Dear SoD,
It's always a bit disheartening to hear about content getting changed or removed because of DRM. Combined with the news a few years ago that Amazon could wipe content it didn't have the license for, DRM is increasingly an issue with further-reaching implications than simply keeping you from pirating content. Wiping content is one issue, but DRM also usually locks media to a particular device or service, which means you often can't transfer your library between devices. With that in mind, let's first take a quick look at what you're actually buying when you buy DRM content, before digging into how to remove DRM from videos and books.

What You're Buying When You Buy Digital Content

Just as a quick primer here, we should note what exactly it is you're purchasing when you buy digital content, and why this problem exists in the first place. When you purchase digital content, you're typically just buying a license to use it. You do not "own" the books or media you purchase in traditional terms. For example, here's Amazon's Terms of Use (bolding ours):

Upon your download of Kindle Content and payment of any applicable fees (including applicable taxes), the Content Provider grants you a non-exclusive right to view, use, and display such Kindle Content an unlimited number of times, solely on the Kindle or a Reading Application or as otherwise permitted as part of the Service, solely on the number of Kindles or Supported Devices specified in the Kindle Store, and solely for your personal, non-commercial use. Kindle Content is licensed, not sold, to you by the Content Provider.

Most Terms of Use at other digital stores follow Amazon here, and they all also have something similar to this little caveat:

In addition, you may not bypass, modify, defeat, or circumvent security features that protect the Kindle Content.

So, just so you know: removing DRM from ebooks and videos is typically against the Terms of Use. Most services like Amazon or Barnes and Noble allow you to store your books or video purchases in the cloud so you can download them again later. But they're always restricted to their apps, and that's a bummer.

How to Remove DRM from Ebooks (and Back Up Your Library Permanently)

The easiest way to strip DRM from Kindle books (and Barnes and Noble, Adobe Digital Editions, etc.) is with the free ebook software Calibre, DRM-removal plugins, and a copy of the Kindle desktop software (PC/Mac). These directions are for Kindle, but will work with Barnes and Noble, Adobe Digital Editions, and older formats. Here's what you need to do:

  1. Download Calibre, the plugins, and the Kindle desktop software.
  2. Unzip the contents of the plugin directory.
  3. Open up Calibre and click on "Preferences."
  4. Navigate to "Plugins" under the "Advanced" section.
  5. Click "Load Plugin from file," and select K3MobiDeDRM_v04.5_plugin.zip from the directory you just unzipped.
  6. Load up the Kindle app on your Mac or Windows computer and download all your books from Amazon.
  7. Navigate to either C:\Users\[your username]\Documents\My Kindle Content on Windows or [your username]\My Documents\My Kindle Content on Mac.
  8. Your books aren't named in any meaningful way, so just drag all the *.azw files into Calibre (or automate this step with the script sketched below this list).
  9. After a short wait (depending on the size of your library), Calibre will finish importing the books. Now you have a DRM-free backup of all your books on your computer.
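
If you're comfortable with a little scripting, step 8 can be automated. Below is a minimal sketch in Python (our own illustration, not part of the original directions) that gathers every *.azw file from the My Kindle Content folder and imports the lot with Calibre's calibredb command-line tool. It assumes calibredb is on your PATH and that the DeDRM plugin from step 5 is installed, since the plugin does its work as books are imported; paths and file extensions may differ on your machine.

    # Batch-import Kindle books into Calibre (hypothetical helper script).
    # Assumes: Calibre installed, calibredb on PATH, DeDRM plugin loaded.
    import glob
    import os
    import platform
    import subprocess

    def kindle_content_dir():
        """Return the default My Kindle Content folder (per step 7)."""
        home = os.path.expanduser("~")
        if platform.system() == "Windows":
            return os.path.join(home, "Documents", "My Kindle Content")
        # The article's Mac path; adjust if your Kindle app stores books elsewhere.
        return os.path.join(home, "My Documents", "My Kindle Content")

    def import_kindle_books():
        books = glob.glob(os.path.join(kindle_content_dir(), "*.azw"))
        if not books:
            print("No .azw files found -- download your books in the Kindle app first.")
            return
        # calibredb adds each file to the default Calibre library; with the
        # DeDRM plugin installed, the imported copies end up DRM-free.
        subprocess.run(["calibredb", "add"] + books, check=True)
        print("Imported %d book(s) into Calibre." % len(books))

    if __name__ == "__main__":
        import_kindle_books()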

It's a little convoluted, but once you get the hang of it, Calibre is a solid way to back up all your purchased ebooks.

How to Remove DRM from Movies and TV Shows

Movies are slightly easier to remove DRM from than ebooks, but the process isn't free. For this, we like Tunebite ($25) on Windows, or Noteburner M4V Converter ($50) on Mac. Both will cost you a little money, but removing DRM from video files downloaded from the likes of Amazon or iTunes is an incredibly simple process.

Alternately, you can record directly from your computer using a screen recording tool (any of these five will do). You will, of course, have to wait for the entire movie since it operates essentially like dubbing, but if you already use screen recording tools it's a free option for backing up your movies.
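
To make the dubbing approach concrete, here's a small sketch (again our own illustration, using the free command-line tool ffmpeg rather than the recorders linked above) of what a screen capture looks like when driven from Python. Capture devices, audio routing, and sensible flags vary a lot from machine to machine, so treat every value here as a starting point rather than a recipe.

    # Record the screen with ffmpeg while a movie plays (hypothetical sketch).
    # Video only; wiring up audio capture is device-specific and left out.
    import platform
    import subprocess

    def record_screen(outfile="movie-backup.mp4", seconds=7200):
        system = platform.system()
        if system == "Windows":
            grab = ["-f", "gdigrab", "-framerate", "30", "-i", "desktop"]
        elif system == "Darwin":
            # List capture devices first: ffmpeg -f avfoundation -list_devices true -i ""
            # The screen is often (though not always) device index 1.
            grab = ["-f", "avfoundation", "-framerate", "30", "-i", "1"]
        else:
            grab = ["-f", "x11grab", "-framerate", "30",
                    "-video_size", "1920x1080", "-i", ":0.0"]
        # -t caps the recording length; set it to the movie's runtime.
        subprocess.run(["ffmpeg", *grab, "-t", str(seconds), outfile], check=True)

    if __name__ == "__main__":
        record_screen()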

As a few commenters have noted in the discussion, Requiem is also an excellent way to remove DRM from iTunes downloads. The process is pretty self-explanatory: download a version of Requiem that corresponds to your version of iTunes, and open the video files you want to remove the DRM from.

The Case for Abandoning DRM Content Completely

While it is possible to strip away all the DRM from the content you already own, it's even better to buy from sellers that don't use DRM in the first place. That's easier said than done, of course, as most major stores (iBooks, iTunes, Amazon, Barnes and Noble, etc.) all use DRM for their content.

For books, crowdfunded efforts like Story Bundle or Humble Ebook Bundle are great ways to get DRM-free books, but they're not the same as a store. Occasionally, you can also grab books directly from a publisher like Tor that come DRM-free, or grab older books from Project Gutenberg.

The same goes for videos. Much like books, you have to go directly to a performer to get a DRM-free video. For instance, comedians Louis CK, Jim Gaffigan, and Aziz Ansari have all released their comedy shows free of DRM, but those types of instances are few and far between (occasionally smaller films, like Indie Game: The Movie, will do it). However, if you really want to avoid DRM, it's still easier to buy a physical disc and rip it yourself, whether it's a DVD or a Blu-ray disc. Discs still technically have DRM, but it's the easiest kind to bypass.

The fact is, while piracy is certainly an issue, so is user experience. You want to pay money for something knowing you'll be able to use it in the future regardless of what device you have in your hand, and DRM often makes that hard. Author Cory Doctorow describes this problem pretty bluntly: "If you can't open it, you don't own it." Worse, when you're locked into a certain store or hardware, you end up stuck on the upgrade treadmill because your content is tied to one type of device. Sure, Amazon's Kindle app exists across platforms, but if you buy a Nook, you all of a sudden have no books. Same if you buy movies from iTunes and switch away from the Apple TV. And there's always a (slight) chance any given service will stop providing support. Then you're really left in the lurch.

Photo by Gavin Baker.

Sincerely,
Lifehacker

Have a question or suggestion for Ask Lifehacker? Send it to tips+asklh@lifehacker.com.

Title photo by Austin Parrish Thomas.

24 Oct 02:52

Editing women into Wikipedia | Not Exactly Rocket Science

Last Friday, a group of volunteers gathered in the Royal Society in London to edit female scientists into the history books—or at least, into Wikipedia. Their goal was to start fixing the online encyclopaedia’s comparatively thin information about women in science and technology.

I attended the “edit-a-thon”, reporting for Nature. Before I turned up, I wondered about the rationale behind holding a specific event to edit Wikipedia, which can be done at any time and place. I also wondered how much the editors could accomplish in just 3.5 hours. Both concerns were addressed on the day, and in the piece. Take a look. Also, there was an Ada Lovelace/Wikipedia cake.

24 Oct 02:28

Yum! Curiosity Eats Mars Dirt

NASA's Mars rover Curiosity has tasted Mars dirt for the first time, dropping a small sample of regolith into its onboard chemistry lab.

The sample -- measuring no bigger than a crushed baby aspirin tablet and sieved to remove any large pieces of debris -- was part of the third scoop of material collected from a sandy ridge near a location known as "Rock Nest," a collection of dark rocks that the rover is currently parked next to.

Previous scoops of Mars material were used by Curiosity to "clean" its robotic arm-mounted scoop (an instrument called 'Collection and Handling for In-Situ Martian Rock Analysis,' or, simply, CHIMRA). By scooping the fine Mars soil and then shaking it inside CHIMRA, mission scientists could ensure that the metallic surfaces were scrubbed clean of any contaminants of Earth origin. The rover then dumped the first two scoops -- as you would spit out mouthwash after cleaning your teeth.

Now that the Chemistry and Mineralogy (CheMin) instrument has the sample, it will begin analysis to identify the minerals contained within the regolith, ultimately helping scientists understand whether or not the Red Planet could ever have supported habitats for basic forms of life.

"We are crossing a significant threshold for this mission by using CheMin on its first sample," said Curiosity's project scientist, John Grotzinger of the California Institute of Technology in Pasadena. "This instrument gives us a more definitive mineral-identifying method than ever before used on Mars: X-ray diffraction. Confidently identifying minerals is important because minerals record the environmental conditions under which they form."

Interestingly, the analysis of the regolith has been delayed by the sighting of curious pieces of bright material on the ground during CHIMRA's work.

The first object has been identified as likely a piece of plastic that fell off the rover. But during inspection of the troughs dug by the scoop, small flecks of a bright material have been spotted.

As there was concern that these smaller unidentified pieces of material could also be contamination from the rover, mission scientists decided to delay dropping any samples into CheMin until it could be determined whether they are native to Mars or perhaps more debris from the rover. The consensus is that these bright flecks are indeed Martian.

"We plan to learn more both about the spacecraft material and about the smaller, bright particles," said Curiosity Project Manager Richard Cook of NASA's Jet Propulsion Laboratory, Pasadena. "We will finish determining whether the spacecraft material warrants concern during future operations. The native Mars particles become fodder for the mission's scientific studies."

NASA's one-ton, nuclear-powered Mars Science Laboratory landed on a plain called Aeolis Palus inside Gale Crater on Aug. 5, and is currently roving its way toward a 5.5-kilometer (3.4-mile) high mountain in the middle of the crater called Aeolis Mons (known colloquially as Mt. Sharp).

There's nothing better than uncovering a mystery so early in Curiosity's planned two-year mission!

Source: JPL

Image: Raw image from Curiosity's MastCam of the CheMin hatch (closed) as seen on Curiosity's deck on Sol 71 -- a sample is now inside. Credit: NASA/JPL-Caltech