Dwayne Johnson smashed through the great wall of news this week, rushing over and lifting us up in a powerful but tender overhead press, carrying us toward the dreamland he lives in where everyone is hardworking, great-looking, and nice as hell.
Bless GQ for sending Caity Weaver on the enviable mission to profile Dwayne Johnson, and bless their art department for thinking what we’re all thinking: If a celebrity had to be president, wooing the electorate with charm and charisma, why not elect Johnson, who appears to excel in every area where our current president is lacking?
Evan Osnos recently reported that “other than golf, [Trump] considers exercise misguided, arguing that a person, like a battery, is born with a finite amount of energy.” A finite amount of energy? Dwayne Johnson is a solar-powered, clean running beast of infinite energy and charisma.
If you are a child, good luck getting past Dwayne Johnson without a high five or some simulated roughhousing; if you’re in a wheelchair, prepare for a Beowulf-style epic poem about your deeds and bravery, composed extemporaneously, delivered to Johnson’s Instagram audience of 85 million people; if you’re dead, having shuffled off your mortal coil before you even got the chance to meet Dwayne Johnson, that sucks—rest in peace knowing that Dwayne Johnson genuinely misses you. For Johnson, there are no strangers; there are simply best friends, and best friends he hasn’t met yet.
He’s always concerned about you. Are you drinking enough water?
“I just want to make sure you’re hydrated,” he says, picking up a cool, clear cylinder of Voss. He twists open the seal fast and hard, like he’s wringing the neck of a punk who disrespected the troops (he loves the troops—we love the troops—proud to be an American, troops, troops, troops), and hands me the bottle.
Pundits and pollsters, get ready. Dwayne Johnson is going to crush your likability scores.
How does an untrained actor jump from a cameo to a starring role in the span of a year, while never even quitting his day job? Then, as now, Johnson tested well in what the film industry refers to as “all four quadrants”: old men, young men, old women, and young women. “[He] is as close to guaranteeing you butts in the seat as anybody can be,” NBCUniversal vice chairman Ron Meyer told me.
Broadly, the quadrants thing means that everyone likes him. Specifically, it suggests that if Johnson’s personal magnetism were any stronger, birds in his vicinity might plummet from the sky, their internal navigation mechanisms thrown off by the force of his personality.
But hey, he’s still a unique guy! There’s even a box on the census form just for Dwayne Johnson.
His own racial blend (black and Samoan) means he is blessed with skin the color of graham crackers, a perfectly roasted marshmallow, and Abraham Lincoln on the penny. It’s a rare combination. In the last census, the number of Americans identifying as “Native Hawaiian and Other Pacific Islander” (a blanket term that includes people who trace their lineage to Samoa, Fiji, Tahiti, and lots of other small, warm islands) plus “Black or African American” was just 50,308. A figure so low it rounds down to 0.0 percent of the total U.S. population, though a more gracious person might say “less than 0.1 percent.” In other words, if you meet a 45-year-old half-black, half-Samoan man living in the United States, the odds are shockingly high he will be Dwayne Johnson.
He’s also one of the people. He’s got that song in his head! No, not that song, the other one!
Johnson’s song from the Moana soundtrack, “You’re Welcome,” was not nominated for any awards, but he sings it under his breath all day anyway, just because he really loves that song.
And like any good politician, he loves the troops. Like, a lot. Like, really really a lot.
Johnson frequently takes to social media to thank members of the armed forces, specifically and in aggregate, for their service. In his patriotic hands, anything can—and will—become a tribute to the armed forces. In March, he was “grateful” to share the “big news” on Instagram that he would be portraying “a disabled US War Vet and former FBI Hostage Rescue Team leader” in an upcoming movie about “the world’s largest skyscraper—that’s on fire.” Johnson wrote that his character in this demented summer blockbuster was “inspired by the thousands of disabled US veterans and war heroes I’ve had the honor of shaking hands with over the years.”
He’s got the inside track on world events like the death of Osama bin Laden, beating Obama’s announcement by an hour. (His cousin is a Navy SEAL.)
To this day, Johnson refuses to disclose how he got wind of this ultra-classified mission. “The tweet was actually supposed to come out at the same time the president was making his speech,” he says, “but the moment I sent that out, I got word that now we’ve delayed the speech a little bit. I was like, ‘Ooooh. Okay.’ ”
No really, he has what it takes to be president. If our current president is a goldfish who can’t remember the castle in his own bowl, Dwayne Johnson is the elephant who never forgets.
Barring the adoption of policy points that are completely unhinged (like spending $8 billion to build a colony in the earth’s core—though, if anyone could do that, it would be Dwayne Johnson), there’s much to suggest Johnson could chart a fast and furious ride to the White House. He’s a quick study with boundless attention to detail. Beyond his popularity—and the fact that his head often looks like a big, round smiley face—he’s got a politician’s warm, deep voice, which projects authority, capability, and strength. And he possesses a startling steel-trap memory. Johnson simply remembers everything about people: biographical details, offhand anecdotes, entire conversations. It’s the quality that allows him to treat everyone like a close friend, the silent secret to his supercharged charm.
And while our current president sleeps only four to five hours a night, Dwayne Johnson is very concerned about your sleep schedule. Are you getting enough? Is it quality sleep?
Weeks after I first met Johnson, I wake up to a direct message on Twitter. @TheRock is on the road and just wanted to alert me to the fact that his hotel carries GQ. The message is decorated with fist-bump and hang-loose emojis.
“Hope you’re great and sleeping soundly!!” he writes.
The wild thing is, I believe he means it literally. If anyone hopes his fellow man is great and sleeping soundly!! it’s Dwayne Johnson. And if he can convince a few million more people of that, everyone’s best friend is going to be president.
In conclusion, Johnson 2020. And Caity Weaver for Veep.
Did you guys read this article? I thought it was really, really insightful.
In The Stranger, Ijeoma Oluo traveled to Spokane, Washington, to sit at the kitchen table with Rachel Dolezal, who is jobless and living in a month-to-month rental, hoping her new book will start something, anything, to get money coming in.
Oluo thinks the meeting may have been a bad idea (“I was half sure that this interview was my worst career decision to date”), but the result is a master class in confrontation, in which the hard questions are asked, answers are pushed, and frustrations are laid bare.
This piece was written by a black woman working with a black editor (@mudede), if you are wondering why a piece like this didn’t happen sooner.
There was a moment, before meeting Dolezal and reading her book, when I thought that she genuinely loves black people but took it a little too far. But now I can see this is not the case. This is not a love gone mad. Something else, something even more sinister, is at work in her relationship with and understanding of blackness.
There is a chapter where she compares herself to black slaves. Dolezal describes selling crafts to buy new clothes, and she compares her quest to craft her way into new clothes with chattel slavery. When I ask what she has to say to people who might be offended by her comparing herself to slaves, Dolezal is indignant almost to exasperation…
“I’m not comparing the struggles, okay? Because I never said that my life was the same. I never said that it was the equivalent of slavery, of chattel slavery. I did work and bought all my own clothes and shoes since I was 9 years old. That’s not a typical American childhood life,” she says. “I worked very hard, but I didn’t resonate with white women who were born with a silver spoon. I didn’t find a sentence of connection in those stories, or connection with the story of the princess who was looking for a knight in shining armor.”
I am beginning to wonder if it isn’t blackness that Dolezal doesn’t understand, but whiteness.
I am surprised when I come suddenly upon the Wall.
Just after dawn on a late November day the North Pennines air is rigid with cold. A thick hoar of frost blankets pasture and hedge, reflecting white-blue light back at an empty sky. The last russet leaves clinging to a copse of beech trees set snug in the fold of a river valley filter lazy, hanging drifts of smoke from a wood fire. The sunlight is a dreamy veil of cream silk.
I am surprised when I come suddenly upon the Wall. I have not followed the neat, fenced, waymarked route from the little village of Gilsland which straddles the high border between Northumberland and Cumbria, but struck directly across country and, with the sun in my eyes, I do not see Hadrian’s big idea until I am almost in its shadow. Sure, it stops you in your tracks. It is too big to climb over (that being the point), so I walk beside it for a couple of hundred yards. The imperfect regularity of the sandstone blocks is mesmerizing, passing before one’s eyes like the holes on a reel of celluloid. This film is an epic: eighty Roman miles, a strip cartoon story that tells of military might, squaddy boredom, quirky native gods, barbarian onslaught, farmers, archaeologists, ardent modern walkers and oblivious livestock. I am somewhere between Mile 49 and Mile 50, counting west from Wallsend near the mouth of the River Tyne. The gap in the Wall, when I find it, is made by the entrance to Birdoswald fort. Birdoswald: where the Dark Ages begin.
There is no one here but me on this shining day. The farm that has stood here in various guises for around fifteen hundred years is now a heritage center. On a winter weekday I have Birdoswald to myself. Just me and the shimmering light and the odd chough cawing away in a skeletal tree. In places the stone walls of this once indomitable military outpost still stand five or six feet high. Visible, in its heyday, from all horizons, the fort was built on a well-tested Roman model: from above, it is the shape of a playing card, with the short sides facing north and south. Originally designed so that three of the six gates (two in each long side, one at either end in the center) protruded beyond the line of the Wall, the fort was not so much part of a defensive frontier, more a launching pad for expeditions, patrols and forays in the lands to the north. Rome did not hide behind its walls; the legions did not cower. Any soldier from any part of the Empire would have known which way to turn on entering the gate; where the barrack rooms would be; where to find the latrines and bread ovens; how to avoid the scrutiny of the garrison commander after a late-night binge or an overnight stay in the house of one of the locals. Uniformity was part of the Roman project.
Any native on any frontier would get to know the layout too. The British warrior might, in those first years of the Wall’s existence during the 120s, try to attack it; when that failed he would herd his livestock through its gates to his summer pastures and pay a tax on his sheep or cattle. British women would barter their homespun goods for ironwares or posh crockery; one day their sons might be recruited into its garrison. The Brittunculi or Little Britons, as a Vindolanda tablet suggests they were called by their imperial betters, might grow to like the idea of the Empire.
Outside Birdoswald fort, to the east, the frosted surface of a smooth, grassy field conceals, in the magnetic traces of geophysical mapping, a small village, or vicus, which grew up alongside. These vici were native British settlements, clinging like limpets to their military protectors, supplying them with goods and services and probably with children, wanted or unwanted. Much the same thing happens in frontier provinces today. You see it on documentaries filmed in the dodgier parts of Afghanistan—only there the Taliban regards such integration or fraternization as a capital crime. When the Western troops leave, and they are leaving as I write, one fears for the safety of the inhabitants. When Rome came to this frontier, she came to stay.
To the south, the line of the Wall, and this fort, are protected by the deep, sinuous gorge of the River Irthing, the western of two rivers which between them create the Tyne–Solway gap linking east and west coasts. This gap has been a lowland route through the Pennines for many thousands of years. Two generations before Hadrian the Romans built a road along this line, known in later times as the Stanegate, so that they could rapidly deploy troops along its length. Much of that road is still in use, or at least passable. And long after Hadrian, General Wade had his redcoats build a road following much the same route to keep Jacobites at bay. The gap between the headwaters of the Rivers Irthing and South Tyne is narrow: no more than four miles. Near Greenhead, just to the south-east of Gilsland, is the watershed boundary, the pass, a choke-point through which modern road and railway, ancient Wall and eighteenth-century military road must squeeze.
To the north, Birdoswald—Banna to the Romans—looks onto a landscape of boggy mires, dispersed sheep farms and conifer plantations, with another twenty-odd miles before the modern border is reached. It is an odd thought: this land, so often fought over, has been at peace for two hundred and fifty years. The old border garrisons of Carlisle and Newcastle have almost lost their walls; standing on either coast halfway up the island of Britain, they are just like other modern cities. Had Scotland voted for independence in September 2014, that defunct border could have been revived; we might have had customs posts, and police on either side might have spent their time chasing smugglers once again. It may still happen. It would amuse the legionary builders of this place to think of their imperial customs booths being reopened after nearly two millennia; it would not surprise them. Sometimes borders are self-defining.
* * *
These things don’t just happen.
During the middle of the fourth century, long before the traditional date of AD 410, when the Roman administration dissolved the province of Britannia, the roof of one of the granaries (horrea) at Birdoswald collapsed. These things don’t just happen. The Roman auxiliary cohorts who had been stationed here for two hundred years relied on periodic resupply from the coast ports and on storing the fruits of each year’s harvest. Leaky roofs and military efficiency don’t go together; so either slackness was creeping in or the fort had been abandoned. That’s how it seems at first sight. But the subtle text of stratified deposits read by archaeologists tells a more complex story. The fort was not abandoned; and when, in the 360s, a huge barbarian onslaught threatened to overwhelm the province, Rome and her generals responded. After the north granary at Birdoswald lost its roof, its stones and tiles were used elsewhere. The floor of a second granary, immediately to the south, was reflagged, its under-floor heating flues blocked—to keep out draughts, or rats? The centurions’ quarters were remodeled to allow for the construction of a building with a small apse—a by then fashionable Christian church, perhaps. The abandoned north granary was used as a rubbish dump, but part of the main street frontage was refaced with dressed stone and a new barrack block was built. Neither slackness nor abandonment explains the halving of the fort’s storage facilities. More likely, the realities of the frontier zone changed.
Rome was not a static force any more than the British Empire was in its day. Three hundred years is a long time. As the empire stretched, then overstretched, as emperors’ fortunes waxed and waned, as troops and political interests migrated from one distant land to another, local commanders became increasingly autonomous. Centrally organized lines of supply, overly bureaucratic and too bloated to adapt to local realities, were superseded or bypassed. Pay wagons turned up with hard cash less often. Commanders took an active role in supplying garrisons from their immediate hinterland; probably they got more involved in the administration of local politics. The relationship between occupying force and native elite became more intimate, the integration more complete. By the middle of the fourth century Wall garrisons consisted mostly of troops called Limitanei, that is, frontiersmen. Many of the men had probably been born within a few miles of Birdoswald; their fathers had been soldiers before them. They spoke the native language known as Brythonic—an early form of Welsh—and were embedded in the native communities of the Wall zone. They revered a suite of local divinities and the odd imperial god, especially Jupiter. After Constantine, who was declared emperor in York in 306, they may have felt inclined, or obliged, to rationalize their pantheon and worship the one true God Jehovah and his charismatic, earthbound son.
The garrison commanders were an elite cadre. They could afford to modify their official quarters with bespoke trappings like Christian chapels or bath houses. Their dress and social class set them apart culturally and politically. In many places they brought with them in their deployments personal retinues from far-flung provinces of the geriatric, obese empire, now disintegrating and under threat of being overrun. The rigid formal structure of the old imperial army, mirrored in the fixed, square-shaped identikit forts of the first and second centuries, became flexible, individualized. The emergence of a vernacular tradition, blending native and foreign with a distinct local cultural flavor, meant that each fort and town was recognizable by its own regional idiosyncrasies. Many of the late imperial commanders had been recruited from the northern boundaries of the Empire from whose ongoing conflicts and edginess fine warriors were raised. Many of them must have spoken Germanic tongues.
* * *
We are talking barn conversions.
No one noticed the beginning of the Dark Age in Britain. It started in different ways and at different times in different places. Rome never lost interest in these islands; they bore valuable minerals, their soils were fertile and their conquest had been a prestigious triumph of the imperial project of the first century AD. But distance stretches and thins one’s interest; as the Empire reformed in the East and as Western emperors focused their attentions on Gaul, Hispania and Germania, it became harder to keep up with what was happening in Britannia: the distant relative was slowly lost to the family. In the towns of Roman Britain decline may have begun as early as the third century, as local elites increasingly favored the country life and became bored with Rome’s urban experiment, its high-maintenance sewage and water supplies, tedious civic snobbery and the tendency of the urban proletariat to riot on almost any pretext. On the coasts, vulnerable to a Continental penchant for piratical raiding, life from the early fourth century onwards could be uncomfortable even with the presence of the imperial navy to watch Britain’s shores facing Ireland and Saxony, Frisia and Pictland. In the cultured, decadent luxury of the Cotswolds, where superb country villas sat in an ordered, fertile and bucolic landscape, reality might not have dawned until the middle of the fifth century when effete toga-wearing Romano-British aristos woke up to find revolting peasants stealing their prize heifers and touting the heresy of a suspiciously liberal British-born cleric called Pelagius.
At Birdoswald the moment can, in its way, be quite precisely identified, with fifteen hundred years of hindsight to draw on. At the very end of the fourth century the south granary, renovated some decades before, had a succession of hearths built into its west end. When excavated, their ashes were found to contain some rather nobby personal items: a green glass ring, a gold and glass earring. More importantly for the excavator, Tony Wilmott, there was a worn coin of the Emperor Theodosius (reigned 379–95), which gives some idea of the date after which these fires were in use. Archaeologists, when finds and structures tell them they are excavating deposits of the fifth century, get a shiver down their spine: these moments are desperately rare.
Hearths seem odd things to have in a granary: fire and grain are a dangerous combination. Was the garrison now so compact, were the other buildings in such a state of disrepair, that the garrison commander had moved himself and his family into the grain store and fitted it out as domestic quarters because it was the only building left that would keep out the winter weather? Were these people cowering among ruins?
There is another way of looking at it. We are talking barn conversions. Not so much retreating to the corner of a barn because it’s the only building with a roof; more likely, the commander liked to have the company of his men for good cheer and fireside stories in those long nights of winter when they talked of the old days, of battles and life on campaign. The barn still had a good solid roof, maintained because it was where all the local produce came in and had to be stored for the year ahead. This produce was no longer paid for in cash (these late coins of Theodosius were about the last to make it to Britain from the imperial mints); the natives were required to give a few days’ labor and to donate a proportion of their harvest and other agricultural produce—say, a tenth. The commander still had his own quarters, nice bath suite, private chapel—wife from a local well-to-do family or perhaps an exotic Dacian bride who played a quasi-diplomatic role in the local community and kept a small but tasteful salon, as British army wives sometimes did in colonial India. Often, and especially when there had been a good harvest or on the quarter days of the native festive calendar when communal gatherings were de rigueur, it seemed right to have a feast in the barn, to share the land’s bounty, dispense a little justice and a few trinkets from the bazaars of Alexandria and reinforce old and prospective loyalties. The man who sat at the center of the long feasting benches was more of a local worthy and judge than a garrison commander. One is tempted to use the word ‘squire’. Gifts were exchanged; promises made; eligible young men and women affianced. Poems were composed and sung, wine and local mead consumed: drinking horns for the men, Rhineland glass beakers for the commander and his wife. Understanding the rules of patronage was becoming just as important as running a tight ship or ruthlessly enforcing the imperial law.
This cozy scenario takes us well into the fifth century, when there is virtually no narrative history for the British Isles, just rumors of civil war and raiding Saxons, plague and famine. Traders from the Continent came to these parts less frequently. We know that Gaulish bishops visited, retaining their solidarity with the British church long after secular links had been severed. But no emperor came after the departure of Constantine III in 407. Rarely does archaeology have anything meaningful to say about the two centuries after 400: there are no new coins to date the layers; almost no inscriptions, and those few that do exist are difficult to date. The pottery found in native settlements might just as well be that of the Iron Age. Even radiocarbon dating is unreliable for these centuries and, unless you are in the peaty bogs of Ireland, wood rarely survives to be dated by its tree rings. The fifth century existed all right—we just can’t see it. It is like the Dark Matter which fills our universe but can’t be seen or measured. The record falls silent, even if echoes and rumors of echoes are heard across the Channel in the courts of Byzantium, Arles and Ravenna.
Almost the earliest indigenous written account of events in Britain after the end of Rome is a note in an Easter calendar called the Annales Cambriae, its only surviving copy belonging to hundreds of years after the event. Under Anno I, which historians believe equates to the year AD 447, is a simple, bleak Latin entry: Dies tenebrosa sicut nox: ‘a day as dark as night’. That just about says it all, even if it is an obscure reference to some distant volcano or a really terrible winter.
* * *
The works of giants.
At Birdoswald life went on, perhaps until the first years of the sixth century. On top of the defunct north granary a timber replacement was erected using the old stone foundations to give it solidity and a floor. Years passed. Finally, a similar structure was erected in more or less the same place, only it was designed to line up with a remodeled gate on the west side of the fort. The new building, imposing in its dimensions and constructed using great hewn crucks, looks for all the world like one of the timber halls of poetic legend: the Heorot of Beowulf. And if, at times, the walls were hung with spears and shields and the air rang to the sound of drunken song and poetry, with boasts of victories and laments for fallen comrades, it was, after all, still a barn. Were its carousing warriors and petty chiefs, its quartermasters and poets Romans, Britons or Anglo-Saxons? Who can say? Did they themselves know or care? And was the successor to the commander in whose name this grand design was built a rival, an imposed replacement, or a son?
Even the casual visitor to Birdoswald can’t fail to be impressed by the solidity of the foundations where the north granary has been excavated, its footings and buttresses consolidated. Where the post pads for the new timber hall of the fifth century were sited, English Heritage has installed great round logs, like oversized telegraph poles, standing a few feet high to give an idea of the size and layout. It is a crude reconstruction, and yet viscerally effective in demonstrating the moment and mindset that changed Roman into Early Medieval. What is particularly striking is that the new timber hall was much wider than the old granary. If the south-granary-cum-barn-cum-feasting-hall was mere adaptation, with a hearth at one end and perhaps a partition in the middle, then the new hall built over the ruins of the north granary was a more ambitious vision, designed for the commander of the fort (be he a dux of his cohort, a war-band leader or petty tribal chief) to sit in the center of one of the long sides with his companions ranged on benches on either side, a glowing fire before him in the center and, perhaps, with doors at either end. This is truly a building in which the mythical Beowulf would have felt at home. And we must suspect that this was not an isolated structure: the Birdoswald of AD 450–500 was a busy place.
The fort at Banna, high in the Pennines, may just have an even more potent role to play in our history. St. Patrick claimed, in his Confessio, that he had been born and brought up in a place called Bannavem Taberniae, son of a local landowner called Calpurnius whose father, Potitus, had been a priest. His vita is difficult to date, but some time in the middle and later decades of the fifth century is plausible. Several modern scholars believe that this place name should read Banna Venta Berniae: the ‘settlement at Banna in the land of the high passes’. Berniae shares its root with the name Bernicia, the Anglo-Saxon kingdom of north Northumbria. That Patrick, taken by slavers to Ireland, should have unwillingly launched his epic career as an Irish patron saint at this remote, beautiful spot, is quite a thought.
And then there is Arthur. Historical references to the legendary Romano-British warlord are very few: a list of twelve battles; a great victory recorded at a place called Badon (perhaps Bath in Somerset); a death notice; a possible mention in a battle poem. Arthur may be, as many historians have argued, an irrelevance, a distraction. There are ‘southern’ Arthurs and ‘northern’ Arthurs, never mind the medieval romantic hero of Camelot. Those who favor the northern version argue that the notice of his death in 537 during the ‘Strife of Camlann’ places him on the Roman Wall; for Camlann seems to be derived from Camboglanna. It used to be thought, erroneously, that this Roman fort, mentioned in the very late Roman list of imperial postings called the Notitia Dignitatum, must be Birdoswald. Now it is accepted that it should be Castlesteads, some seven miles to the west. Either way, there are those who would place both Patrick and Arthur on this stretch of the Wall between the fifth and sixth centuries.
Narrative histories do not get us very far towards an understanding of these islands in the centuries after Roman rule. An early sixth-century British monk called Gildas wrote of civil wars, of invasion, fire, sword and famine (and mentioned a victory at Badon without naming the victor), but nothing of the everyday comings and goings which sustained life. The Kentish Chronicle, fascinating in its melodrama but of doubtful veracity, tells of the foolish British tyrant Vortigern who made a fatal drunken deal with two Saxon pirates (a pretty girl was involved) and sold Britain’s soul and future. Even Bede, the greatest of our early historians, writing nearly three centuries later, covers the nearly one hundred and fifty years after 450 with a single paragraph. The odd memorial stone offers us the name of a Christian priest living in a far-flung community; but no suggestion of when, or why. Occasionally a Continental source records or speculates on the visit of a Gaulish saint or bishop to these islands or on their encounters with pagans and heresies, but not of how people moved around their landscape, how they grew old, tended their sick or brought up their children. Archaeology sometimes tells us where people lived and what they ate, how they constructed their houses; but it says frustratingly little about their relations or their identity. We must piece together these fragmentary sources and animate them. But if we cannot construct a narrative history, what can we say about the journey of the peoples of Britain between the last days of Rome and those of Bede or the Vikings?
This was an age when people believed that the material ruins of lost cultures—the walls and fountains, megalithic tombs and great earthworks, the aqueducts, henges and stone circles that populated their landscape and poetry and framed their psyches—had been built in a lost time by a race of giants. An Anglo-Saxon elegy called ‘The Ruin’, first written down, perhaps, in the eighth century, marvels at nature’s conquest of these great works. The poet writes:
Wondrous is this stone wall, wrecked by fate;
The city buildings crumble, the works of giants decay.
Roofs have caved in, towers collapsed,
Barren gates are broken, hoar frost clings to mortar,
Houses are gaping, tottering and fallen,
Undermined by age. The earth’s embrace,
Its fierce grip, holds the mighty craftsmen;
They are perished and gone. A hundred generations
have passed away since then. This wall, grey with lichen
and red of hue, outlives kingdom after kingdom,
Withstands tempests; its tall gate succumbed.
At The New Yorker, Rebecca Mead profiles Margaret Atwood, Canada’s prolific queen of literature. Mead and Atwood cover the resonance of The Handmaid’s Tale in Donald Trump’s America, Atwood’s approach to feminism, and the purpose of fiction today. Beloved as much for her incisive mind as for her works, Atwood treats unlimited curiosity as the secret to a life well-lived—whether that’s tenting while birding in Panama, engaging with her 1.5 million Twitter followers, or writing as a septuagenarian. “I don’t think she judges anything in advance as being beneath her, or beyond her, or outside her realm of interest,” says her friend and collaborator, Naomi Alderman.
Atwood has long been Canada’s most famous writer, and current events have polished the oracular sheen of her reputation. With the election of an American President whose campaign trafficked openly in the deprecation of women—and who, on his first working day in office, signed an executive order withdrawing federal funds from overseas women’s-health organizations that offer abortion services—the novel that Atwood dedicated to Mary Webster has reappeared on best-seller lists. “The Handmaid’s Tale” is also about to be serialized on television, in an adaptation, starring Elisabeth Moss, that will stream on Hulu. The timing could not be more fortuitous, though many people may wish that it were less so. In a photograph taken the day after the Inauguration, at the Women’s March on Washington, a protester held a sign bearing a slogan that spoke to the moment: “MAKE MARGARET ATWOOD FICTION AGAIN.”
Given that her works are a mainstay of women’s-studies curricula, and that she is clearly committed to women’s rights, Atwood’s resistance to a straightforward association with feminism can come as a surprise. But this wariness reflects her bent toward precision, and a scientific sensibility that was ingrained from childhood: Atwood wants the terms defined before she will state her position. Her feminism assumes women’s rights to be human rights, and is born of having been raised with a presumption of absolute equality between the sexes. “My problem was not that people wanted me to wear frilly pink dresses—it was that I wanted to wear frilly pink dresses, and my mother, being as she was, didn’t see any reason for that,” she said. Atwood’s early years in the forest endowed her with a sense of self-determination, and with a critical distance on codes of femininity—an ability to see those codes as cultural practices worthy of investigation, not as necessary conditions to be accepted unthinkingly. This capacity for quizzical scrutiny underlies much of her fiction: not accepting the world as it is permits Atwood to imagine the world as it might be.
A tiny dik-dik is making a big impression at Chester Zoo. The little antelope is being cared for by zoo staff after its mother passed away soon after giving birth.
Standing only about 8 inches tall at the shoulder, the tiny Kirk’s dik-dik is being bottle-fed by staff five times a day. He will continue to receive a helping hand until he is old enough to eat by himself.
Photo Credit: Chester Zoo
Assistant team manager Kim Wood and keeper Barbara Dreyer have both been caring for the new arrival, who is currently so light he doesn’t register a weight on the zoo’s set of antelope scales.
Kim said, “The youngster is beginning to find his feet now and is really starting to hold his own. We’re hopeful that, in a few months’ time, we’ll be able to introduce him to some of the other members of our group of dik-diks. He may be tiny but he is certainly making a big impression on everyone at the zoo.”
Kirk’s dik-diks grow to a maximum size of just 16 inches tall at the shoulder, making them one of the smallest species of antelope in the world.
The species takes its name from Sir John Kirk, a 19th-century Scottish naturalist, and from the alarm calls made by female dik-diks.
Kirk’s dik-diks are native to northeastern Africa, and conservationists say they mark their territory with fluid from glands between their toes and just under their eyes, not dissimilar to tears. Populations in the wild are stable.
The last woolly mammoth died on an island now called Wrangel, which broke from the mainland twelve thousand years ago. Mammoths inhabited it for at least eight millennia, slowly inbreeding themselves into extinction. Even as humans developed their civilizations, the mammoths remained, isolated but relatively safe. While the Akkadian king conquered Mesopotamia and the first settlements began at Troy, the final mammoth was still here on Earth, wandering an Arctic island alone.
The last female aurochs died of old age in the Jaktorów Forest in 1627. When the male perished the year before, its horn was hollowed, capped in gold, and used as a hunting bugle by the king of Poland.
The last pair of great auks had hidden themselves on a huge rock in the northern Atlantic. In 1844, a trio of Icelandic bounty hunters found them in a crag, incubating an egg. Two of the hunters strangled the adults to get to the egg, and the third accidentally crushed its shell under his boot.
Martha, the last known passenger pigeon, was pushing thirty when she died. She’d suffered a stroke a few years earlier, and visitors to her cage at the Cincinnati Zoo complained the bird never moved. It must have been strange for the older patrons to see her there on display like some exotic, since fifty years before, there were enough of her kind to eclipse the Ohio sun when they migrated past.
Incas, the final Carolina parakeet, died in the same Cincinnati cage that Martha did, four years after her. Because his long-term mate, Lady Jane, had died the year before, it was said the species fell extinct thanks to Incas’s broken heart.
When Booming Ben, the last heath hen, died on Martha’s Vineyard, they said he’d spent his last days crying out for a female that never came to him. The Vineyard Gazette dedicated an entire issue to his memory: “There is no survivor, there is no future, there is no life to be recreated in this form again. We are looking upon the uttermost finality which can be written, glimpsing the darkness which will not know another ray of light.”
Benjamin, the last thylacine—or Tasmanian tiger—perished in a cold snap in 1936. His handlers at the Beaumaris Zoo had forgotten to let him inside for the night and the striped marsupial froze to death.
The gastric brooding frog—which incubates eggs in its belly and then vomits its offspring into existence—was both discovered and declared extinct within the twelve years that actor Roger Moore played James Bond.
Turgi, the last Polynesian tree snail, died in a London zoo in 1996. According to the Los Angeles Times, “It moved at a rate of less than two feet a year, so it took a while for curators…to be sure it had stopped moving forever.”
The same year, two administrators of a Georgia convalescent center wrote the editor of the journal Nature, soliciting a name for an organism that marks the last of its kind. Among the suggestions were “terminarch,” “ender,” “relict,” “yatim,” and “lastline,” but the new word that stuck was “endling.” Of all the proposed names, it is the most diminutive (like “duckling” or “fingerling”) and perhaps the most storied (like “End Times”). The little sound of it jingles like a newborn rattle, which makes it doubly sad.
While Nature’s readers were debating vocabulary, a research team in Spain was counting bucardos. A huge mountain ibex, the bucardo was once abundant in the Pyrenees. The eleventh Count of Foix wrote that more of his peasants wore bucardo hides than they did woven cloth; one winter, the count saw five hundred bucardos running down the frozen outcrops near his castle. The bucardo grew shyer over the centuries—which made trophy hunters adore it—and soon disappeared into the treacherous slopes for which it was so well designed.
Though a naturalist declared it hunted from existence at the turn of the twentieth century, a few dozen were spotted deep in the Ordesa Valley in the 1980s. Scientists set cage traps, which caught hundreds of smaller, nonendangered chamois. It was frustrating work, and bucardo numbers dwindled further as the humans searched on. By 1989, they’d trapped only one male and three females. In 1991, the male died and eight years later, the taxon’s endling, Celia, walked right into the researchers’ trap.
She was twelve when they shot her with a blow dart and tied white rags over her eyes to keep her calm. They fit her with a tracking collar and a pulse monitor and biopsied two sections of skin: at the left ear and the flank. Then Celia was released back into the wild to live out the rest of her days. Of the next ten months we know nothing; science cannot report what life was like for Earth’s final bucardo. But the Capra pyrenaica before her had, probably since the late Pleistocene, moved through the seasons in sex-sorted packs. In the female groups, a bucarda of Celia’s age would serve as leader. When they grazed in vulnerable spaces, she’d herd her sisters up the tricky mountain shelves at the first sign of danger, up and up until the group stood on cliffs that were practically vertical. Celia, however, climbed to protect only herself that final winter—and for at least three winters before that, if not for most winters in her rocky life.
It is dangerous to assume that an endling is conscious of its singular status. Wondering if she felt guilty, or felt the universe owed her something—that isn’t just silly; it’s harmful. As is imagining a bucardo standing alone on a vertical cliff, suppressing thoughts of suicide. As is assuming her thoughts turned to whatever the mountain ungulate’s version of prayer might be. Or hoping that, in her life, she felt a fearlessness impossible for those of us that must care for others.
The safe thought is that Celia lived the life she’d been given without any sense of finality. She climbed high up Monte Perdido to graze alone each summer, and hobbled down into the valley by herself before the winters grew too frigid. She ate and groomed and slept, walked deep into the woods, and endured her useless estrus just as she was programmed to do—nothing further.
But then again, a worker ant forever isolated from its colony will walk ceaselessly, refusing to digest food, and a starling will suffer cell death when it has no fellow creature to keep it company. A dying cross spider builds a nest for her offspring even though she’ll never meet them, and a pea aphid will explode itself in the face of a predator, saving its kin. An English-speaking gray parrot once considered his life enough to ask what color he was, and a gorilla used his hands to tell humans the story of how he became an orphan. Not to mention the countless jellyfish that, while floating in the warm seas, have looked to the heavens for guidance.
Though problematic, it’s still easy to call these things representative of what unites our kingdom: we are all hardwired to live for the future. Breeding, dancing, nesting, the night watch—it’s all in service to what comes later. On a cellular level, we seem programmed to work for a future which doesn’t concern us exactly, but that rather involves something that resembles us. We all walk through the woods, our bodies rushing at the atomic level toward the idea that something is next. But is there space in a creature’s DNA to consider the prospect of no next? That one day, nothing that’s us—beyond ourselves—will exist, despite the world that still spins all around us?
Six days into the new millennium, Celia’s collar transmitted the “mortality” beep. A natural death—crushed by a falling tree limb, her neck broken and one horn snapped like a twig. In a photo taken by the humans that fetched her, she seems to have been nestled on her haunches, asleep. They sent Celia to a local taxidermist and then turned to the cells they’d biopsied. After a year spent swimming in liquid nitrogen at 321 degrees below zero, the cells were primed to divide. The Los Angeles Times ran a long article about what might happen next, quoting an environmentalist who warned, “We don’t have the necessary humility in science.”
At the lab, technicians matched a skin cell from Celia with a domestic goat’s egg cell. The goat-egg’s nucleus was removed, and Celia’s nucleus put in its place. Nearly all the DNA of any cell lives inside its nucleus, so this transfer was like putting a perfect Celia curio into the frame of a barnyard goat.
After a mammal’s egg cell is enucleated, it is common for nothing to happen. But sometimes, the reconstructed cell reprograms itself. Thanks to a magic humans don’t totally grasp, the nucleus decides it is now an egg nucleus and then replicates not as skin, but as pluripotent, able to split into skin cells, blood cells, bone cells, muscle cells, nerve cells, cells of the lung.
While this DNA technology evolved, the Celia team cultivated an odd harem of hybrid surrogates—domestic goats mated with the last female bucardos. They had hybrid wombs that the scientists prayed would accept the reconstructed and dividing eggs. In 2003, they placed 154 cloned embryos—Celia in a goat eggshell—into 44 hybrids. Seven of the hybrids were successfully impregnated, and of those seven, just one animal carried a zygote to term. The kid was born July 30, 2003, to a trio of mothers: hybrid womb, goat egg, and magical bucardo nucleus. Genetically speaking, however, the creature was entirely Capra pyrenaica. And so, thirteen hundred days after the tree fell on Celia, her taxon was no longer extinct—for about seven minutes.
The necropsy photos of the bucardo kid are strangely similar to those of Yuka, the juvenile mammoth found frozen in permafrost with wool still clinging to her body. Wet, strangely cute, and lying stretched out on her side, the newborn looks somehow timeless. Her legs seem strong and kinetic, as if she were ready to jump up and run. All of her systems were apparently functional, save her tiny lungs.
In her hybrid mother’s womb, the clone’s lung cells mistakenly built an awful extra lobe, which lodged in her brand-new throat. The kid was born struggling for air and soon died of self-strangulation. Lungs seem the trickiest parts to clone from a mammal; they’re what killed Dolly the sheep as well. How fitting that the most difficult nature to re-create in a lab is the breath of life.
The term we now use for the procedure of un-ending an endling has been around for decades, though it was rarely used. “De-extinction” first appeared in a 1979 fantasy novel, in which a future-world magician conjures domestic cats back from obscurity. But when the Celia team reported their findings to the journal Theriogenology, they didn’t use the word. A few scientific papers in fields ranging from cosmology to paleobiology name-check it, but the term was left almost entirely to science fiction until a dozen years postbucardo. A MacArthur Fellow chided the term’s clunkiness, calling it “painful to write down, much less to say out loud.” But eventually, the buzzword stuck.
“De-extinction” made its popular debut in 2013, in a National Geographic article. To celebrate the coming-out of the term—and the new ways it would allow humans to mark animal lives—the magazine held a conference at their headquarters with lectures organized into four categories: Who, How, Why/Why Not, and Wild Again. Among the How speakers was Ordesa National Wildlife Park’s wildlife director, who recounted the Celia saga. The four-syllable term tangled with the director’s Castilian accent, but people still applauded when he called Celia’s kid “the first ez-tinc-de-tion.” As the audience clapped, the director bowed his head, obviously nervous. Behind him was a projected image of the cloned baby, fresh from her hybrid mother and gagging in the director’s latexed hands. The clone’s tongue lolled out the side of her mouth.
Earlier that morning, an Australian paleontologist confessed his lifelong obsession with thylacines, despite being born nine years after the Tasmanian tiger’s demise. “We killed these things,” he said to the audience. “We shot every one that we saw. We slaughtered them. I think we have a moral obligation to see what we can do about it.” He then explained how he’d detected DNA fragments in the teeth of museum specimens. He vowed to first find the technology to extract the genetic code from the thylacine tooth-scraps, then to rebuild the fragments to make an intact nucleus, and finally to find a viable host womb where a Tasmanian tiger’s egg could incubate—in a Tasmanian devil, perhaps.
The man’s research group, called the Lazarus Project, had just announced their successful cloning of gastric brooding frog cells. The fact that the cells only divided for a few days and then died would not deter his enthusiasm. “Watch this space,” he said. “I think we’re gonna have this frog hopping glad to be alive in the world again.”
Later in the conference, a young researcher from Santa Cruz outlined a plan that allowed humans to “get to witness the passenger pigeon rediscover itself.” But after de-extinction, he said, the birds would still need flying lessons. So why not train homing pigeons to fly passenger routes? To convince the passenger babies they were following their own kind, the young scientist suggested coating the homers with blue and scarlet cosmetic dyes.
That afternoon, the chair of the Megafauna Foundation mentioned how medieval tales and even the thirty-thousand-year-old paintings in Chauvet Cave would help prepare Europe for the herds of aurochs he hoped to resurrect. The head of the conference’s steering committee sounded almost wistful when he concluded at the end of his speech, “Some species that we killed off totally, we could consider bringing back to a world that misses them.” And a Harvard geneticist hinted that mouse DNA could be jiggered to keep the incisors growing from the jawline until they protruded, tusk-like, from the mouth. This DNA patchworking could help fill a gap in our spotty rebuild of the mammoth genome, he said.
Shortly after that talk, a rare naysayer—a conservation biologist from Rutgers—addressed the group: “At this very moment, brave conservationists are risking their lives to protect dwindling groups of existing African elephants from heavily armed poachers, and here we are in this safe auditorium, talking about bringing back the woolly mammoth; think about it.”
But what exactly is there to think about? What can thinking do for us, really, at a moment like this one? We’re knee-deep in the Holocene die-off, slogging through neologisms that remind us what is left. These speeches—of extravagant plans, of Herculean pipe dreams, and of missing—are more than thought; they admit to a spot on our own genome. Perhaps we’ve always held, with submicroscopic scruples, the fact of this as our next. The first time a forged tool sliced a beast up the back was the core of this lonely cell, and then that cell set to split, and now each scientist—onstage and dreaming—is a solitary cry of this atomic, thoughtless fate.
To dispatch animals, then to miss them. To forget their power and use our own cockeyed brawn to rebuild something unreal from the scraps. Each speech, at this very moment, is a little aria of human understanding, but it’s the kind of knowledge that rests on its haunches in places far beyond thought.
And at that very moment, the last Rabbs’ fringe-limbed tree frog was dodging his keepers at a biosecure lab in Atlanta. Nicknamed Toughie, the endling hadn’t made a noise in over seven years.
And at that very moment, old Nola and Angalifu, two of the six remaining northern white rhinos, stood in the dirt of the Safari Park at the San Diego Zoo with less than twenty-four months to live. Their keepers had already taken Angalifu’s sperm and would do the same for Nola’s eggs, housing the samples in a lab that had already cataloged cells from ten thousand species. It was a growing trend—this new kind of ark, menagerie, or book of beasts—and it carried a new term for itself: the “frozen zoo.”
The planet’s other northern whites, horns shaved down for their own protection, roamed Kenya’s Ol Pejeta Conservancy under constant armed watch. And Celia’s famous cells were buzzing in their cryogenic state, far from Monte Perdido, still waiting for whatever might come next.
And at that very moment, way up in northwest Siberia, a forward-thinking Russian was clearing a space to save the world. As the permafrost melted, he said, it would eventually release catastrophic amounts of surface carbon into the atmosphere. To keep the harmful gases in the rock-hard earth, the Russian and his team wanted to turn the tundra back into the mammoth steppe: restoring grassland and reintroducing ancient megafauna that would stomp the dirt, tend the grass, and let the winter snows seep lower to cool the deep land. The reintroduced beasts, he swore, would send the tundra back in time.
He proposed that for every square kilometer of land there be “five bison, eight horses, and fifteen reindeer,” all of which had already been transported to his “Pleistocene Park.” Here was a space where earlier versions of all these beasts had lived in the tens of thousands of years prior. Eventually, once the science caught up, he would bring one elephant-mammoth hybrid per square kilometer, too.
And so here is a picture of next: some model of gargantuan truck following the Kolyma River—rolling over the open land where mammoths once ran for hundreds of miles. Like a growing many of us, the Russian sees the moment in which that truck’s cargo door opens and a creature—not quite Yuka but certainly not elephant—lumbers out into the grass. Her first steps would be less than five hundred miles as the crow flies, out and out over the Arctic, from the island where the last living mammoth fell into the earth 3,600 years ago.
The Russian’s process—making new beasts to tread on the bones of what are not quite their ancestors—has a fresh label for itself, as everything about this world is new. The sound of this just-coined word, when thrown by a human voice into a safe auditorium, carries with it the hope of a do-over, and the thrust of natural danger.
That new word is re-wilding.
Excerpted from Animals Strike Curious Poses by Elena Passarello. Copyright Elena Passarello, published with permission of Sarabande Books, Inc., 2017.
Walk into the offices of Memac Ogilvy Advize, an advertising firm on the third floor of a car rental building in a business district of West Amman, Jordan, and you’ll be greeted with an immense black-and-white photo of Donald Trump’s face. The red cursive text printed across it reads: “We Trumped the awards.”
The sign sits behind a reception counter boasting a large trophy won at the Dubai Lynx 2017, an annual advertising competition where Memac Ogilvy won the Grand Prix for PR (a first for any Jordanian agency) along with four other silver and gold prizes, for trolling Trump in their ads on behalf of Royal Jordanian Airlines.
Royal Jordanian, the once-obscure national carrier of the small desert kingdom of Jordan, has been making a name for itself as the Middle Eastern airline that dared to mock Trump. Its first ad—the one that won Memac Ogilvy all those awards—was a simple image published on Twitter, Facebook, and Instagram on November 8, 2016, just before the U.S. election results were announced. The ad showed economy and business class prices for RJ flights to Chicago, Detroit, and New York, with a few snarky lines above: “Just in case he wins… Travel to the U.S. while you’re still allowed to!”
That ad went viral, with an organic reach of 450 million, an 80.3 percent positive response, and press coverage on 26 official news sites and blogs, according to Memac Ogilvy.
The campaign summary that the agency later submitted for the advertising competition was simple. Objective? “Sell more plane tickets to the U.S.” Method? “Use Donald Trump’s ban of Muslims to our advantage.” Campaign budget? “Zero USD.”
According to a video of the award acceptance on the ad agency’s Facebook page, RJ bookings to the U.S. increased by 50 percent after the campaign. “We tweeted more than Trump himself,” the voiceover narration noted.
In early February, when American judges blocked Trump’s ban on immigration from seven Muslim-majority countries, RJ stepped into the conversation once again, releasing an ad with the word “Ban” scratched out and changed to “Bon Voyage!” Flights to New York, Chicago, and Detroit were on sale, the image announced, urging customers, “Fly to the U.S. with RJ now that you’re allowed to.” The sentiment was especially pertinent for Jordan, which hosts millions of refugees and displaced people from countries like Syria, Iraq, Sudan, Somalia and Yemen—all of which had been included in Trump’s executive order.
On March 22, when the U.S. Department of Homeland Security announced that electronics “larger than a smartphone” would be banned as carry-on items for flights departing from 10 Middle Eastern and North African airports, including Amman’s Queen Alia International Airport, RJ spoke up again. The airline responded to the #electronicsban, first with a poem beginning with “Every week a new ban,” then with a list of “12 things to do on a 12-hour flight with no laptop or tablet,” and finally with a recommendation to “Do what we Jordanians do best… Stare at each other!” The posts have already garnered tens of thousands of shares as well as international media coverage lauding RJ’s response to the laptop ban as “snarky,” “cheeky” and “perfect.”
The mastermind behind all these campaigns is Hadi Alaeddin, a 31-year-old Palestinian-Jordanian art director at Memac Ogilvy who also designs posters and T-shirts on the side. He co-runs a branding service called Warsheh and does some work with Jo Bedu, a popular Jordanian brand known for its tongue-in-cheek T-shirts and arabizi (Arabic/English) puns.
When I met Alaeddin, together with Khalil Atieh and Aya Nasif, two Jordanian millennials who handle client services for the Royal Jordanian campaign, they told me that the Trump ads were a spontaneous idea.
“It was real-time, on the spot. We didn’t have a strategy,” Nasif said. “Hadi just thought of it.”
Eight months ago, Royal Jordanian approached Memac Ogilvy seeking help with an image refresh. Founded by the late Jordanian King Hussein bin Talal in 1963, the national carrier had been faithfully flying from the Levant across the region for decades, but developed a somewhat stuffy, overly institutional feel in the process. “RJ wanted to be a ‘fun brand,’” Nasif explained, especially as it struggled to compete with lavishly funded Gulf airlines like Emirates and Qatar Airways. While oil-rich competitors can afford to produce Jennifer Aniston commercials and to paint planes with roses for Valentine’s Day, Jordan’s airline has a far smaller budget. So, it decided to play to its strength: humor—specifically, Jordanian humor, with just the right touch of sarcasm.
“Jordan can definitely compete with the Gulf,” Alaeddin said, noting that his country has a local way of being, a shared mentality rooted in Levantine culture, which an international metropolis like Dubai may lack. “We have our own humor, culture, and way of talking. We have our personality. That’s our edge.”
The point of RJ’s Trump ads is not to be political, Alaeddin told me, but to personalize the brand, so that people can relate to the airline as they would to a Jordanian uncle, not to some cold, inaccessible business.
“It’s never about politics. The whole world was following the election. People don’t follow the election for politics. It’s just what everyone is talking about,” Alaeddin said. So when everyone was discussing Trump’s comments about banning Muslims, the ad agency joined in, the way a Jordanian uncle would. “We just made a silly joke. And people responded to it.”
Within an hour of the first Trump ad, Alaeddin’s phone battery had died from all the notifications that kept popping up. People across the world were responding, even those with no ties to the region or plans to fly Royal Jordanian. “Even non-Muslims globally saw that the sweeping statement was ridiculous for a president-to-be,” Atieh said. “They said things like, ‘Thank you for making fun of this,’ especially since the joke was coming from the people who are most affected.”
I watched from inside the Memac Ogilvy office as the team released their most popular recent ad—the “12 things to do on a 12-hour flight” ad—including the pointed number 12: “Think of reasons why you don’t have a laptop or tablet with you.” I wondered: Was that supposed to imply that the ban is nonsensical? (DHS appeared to be concerned about potential explosives hidden in electronics following intelligence reports, and the similar British ban seems to be based on this intelligence as well.) Why didn’t the ad directly name Trump, or contain an explicit message about security threats?
“We’re kind of passive-aggressive,” Alaeddin laughed, adding that RJ did publish an official release about the details of the ban, but that talking about terror is a poor marketing strategy. “Our job is not to tell people there’s a ban. People already know. Our job is to communicate in a lighthearted way that, yeah, this is happening, but here’s our point of view.”
Royal Jordanian doesn’t want to lose business, but they also can’t force American policy to change. So they’ll do the next best thing, which is to affirm the overwhelming response from their customers and from residents of Jordan writ large: “It’s annoying!” Nasif said. “All of us travel. I can only imagine how it would have been when I was in university, having to worry about this whenever I flew home from the States.”
“We know it’s annoying and we’re trying our best to accommodate,” Alaeddin said, adding that they’d given RJ some ideas about ways to enhance on-board entertainment without laptops. The comments on RJ’s Facebook ads are filled with requests for free inflight Wi-Fi, for example. If Jordanians can’t impact the American president’s decisions, at least they can feel that their airline cares what they think, and then support RJ in return.
After all, that’s what made the first Trump ad so successful. “It was a lighthearted response to a somewhat hateful claim,” Alaeddin said. “More importantly, it sold.”
Oh man, even reading this made me tear up. I probably watch the whole of Buffy annually but I almost always skip this episode because it's too tough.
Twenty years ago, when Buffy the Vampire Slayer debuted on The WB, it would have been hard to imagine anyone conceiving of what a phenomenon the show would become—a hit TV series, yes, but also a platform through which pop-culture theorists and sociologists alike could consider the dynamics of American teenaged life around the turn of the millennium. “Buffy Studies” have become a thriving faction within academia, while streaming services have kept the show on the contemporary cultural radar. In a media landscape where people continue to be surprised that teenaged girls engage with political issues, the idea of a perky blonde cheerleader being tasked with saving the world is still strikingly subversive.
One of the most revolutionary things Buffy did, though, was take teenagers—and their pain—seriously. The show’s central conceit of having Buffy (Sarah Michelle Gellar) be a “chosen one” destined to protect the earth from vampires and monsters offered a fairly typical setup for a supernatural drama, but the twist was that many of Buffy’s boogeymen were based on real sources of teenage angst. Here were metaphorical demons made literal: a controlling mother who lives vicariously through her daughter, a friend whose behavior becomes unrecognizable when he joins a pack of popular kids, a classmate so lonely and isolated she becomes literally invisible. By having intangible issues manifest themselves as physical monsters, Buffy made them accessible, and manageable.
As the show progressed, its creator, Joss Whedon, experimented more with the boundaries of what Buffy could do. In season four, concerned that the show’s great strength—its dialogue— was overriding its other qualities, he wrote an episode, “Hush,” in which every single character lost their voice. In season six, he conceived a musical episode in which the characters found themselves spontaneously bursting into song. And in season five, he wrote what many critics and fans consider to be Buffy’s superlative moment: an episode in which a central character died from natural causes, and which showed everyone else experiencing the shock, anger, and grief of her death in real time.
“The Body,” which aired on February 27, 2001, opened by repeating a scene that had concluded the previous episode. Buffy returns home, walking through her front door, commenting on some flowers sitting on the hall table, and calling out to her mother, who’s lying on the couch. When Buffy sees that her mother is motionless, she pauses, and her expression shifts. “Mom?” she says. She repeats the word, more urgently. Then, quieter: “Mommy?”
For Whedon, the episode was born out of a desire to explore and expose the real emotional responses to losing a loved one. On television, death is usually cheap: a device that precipitates a narrative arc, or writes out an actor who wants to leave, or functions as a ploy for ratings. Although some of these factors were involved in “The Body”—the death of Buffy’s mother, Joyce (Kristine Sutherland), marked a turning point for Buffy’s maturity into adulthood—Whedon wanted to focus not on the pain and catharsis of grief, but on how surreal and physically strange it can be. “What I really wanted to capture,” he later explained in the DVD commentary, “was the extreme physicality, the extreme—the almost boredom of the very first few hours.”
The first 13 minutes of the episode are set entirely in Buffy’s home. There’s a flashback to Thanksgiving, and then a swift and jarring cut to the present, where Buffy notices her mother’s body, calls 911, attempts to administer CPR, and responds to the paramedics. Joyce’s body—clearly dead from the audience’s perspective—is visibly jarring, shown in close-up to have a grey pallor and an expression of shock. The title of the episode captures the impact of the physical presence in the living room, and the strangeness of Joyce’s sudden metamorphosis from a living entity into a dead one. Buffy, noticing that Joyce’s skirt has hiked up around her waist during her attempts to administer CPR, pulls it down before the paramedics arrive, conveying how wrong and obscene her mother’s death seems.
The sheer weirdness of this transmogrification from Joyce to “the body” is a theme throughout the episode. “She’s cold,” Buffy tells the dispatcher on the phone. “The body is cold?” the woman asks. “My mom is cold,” Buffy responds. Later, when Giles (Anthony Stewart Head) arrives and attempts to revive Joyce, not knowing the paramedics have already declared her dead, Buffy shouts, “We’re not supposed to move the body!” before clamping a hand to her mouth in shock, horrified at the consequences of what she’s just said. Stylistically, too, Whedon employs directorial devices to convey the disorientation Buffy feels. When a paramedic talks to her, the camera focuses on his chest below his head, and when Buffy picks up the phone, the buttons seem huge and distorted in perspective. There’s no music throughout the episode, with the few sounds that can be heard (children playing, wind chimes) marking the contrast between Buffy’s absurd reality and the world outside.
The peculiarity of the rest of Sunnydale carrying on as usual while the world of Buffy and her friends has been upended is echoed later, when Xander (Nicholas Brendon) double-parks his car while going to pick up Willow (Alyson Hannigan), and duly receives a parking ticket at the end of the scene. Buffy’s friends, meanwhile, represent different responses to grief. Xander is angry, raging at the doctors and punching a hole in the wall. Willow is panicky and indecisive, obsessing over small details like her outfit rather than confronting the real source of her pain. Tara (Amber Benson) is supportive and accepting, with it emerging later that she’s experienced this before. And Anya (Emma Caulfield), a former demon who only recently became human, communicates a childlike response to Joyce’s death, asking irritating questions repeatedly and mimicking the behavior she observes from others. (When she clasps Giles in an awkward bear hug, it’s one of the few elements of humor in the episode.) Her monologue, after being told it’s not OK to ask such blunt questions, is one of the most moving moments in “The Body”:
But I don’t understand! I don’t understand how all this happens. How we go through this. I mean, I knew her and then she’s—there’s just a body, and I don’t understand why she just can’t get back in it and not be dead anymore. It’s stupid. It’s mortal and stupid. And Xander’s crying and not talking, and I was having fruit punch, and I thought, Joyce will never have any more fruit punch, ever, and she’ll never have eggs, or yawn, or brush her hair, and no one will explain to me why!
But it’s the scenes with Dawn (Michelle Trachtenberg), Buffy’s younger sister, that put the episode in context with everything Buffy has done over the past four seasons. When Dawn first appears in the episode, she’s crying in a bathroom at school because a classmate has spread rumors that she self-harms, and everyone thinks she’s a freak. This moment of pain—real for Dawn, but also surmountable, since she soon wipes her eyes and walks out of the bathroom with her head held high—is juxtaposed with Dawn’s reaction when Buffy tells her that their mother is dead. The scene is observed externally by the audience, through a glass wall, and most of the words are imperceptible. But Dawn’s reaction, as she breaks into sobs and physically crumples to the ground, is unmistakable. Buffy, who’s spent the first half of the episode in vivid close-up, is seen from a distance, too, shifting roles from suffering impossibly to comforting her sister.
In this scene, Buffy draws an intriguing line. Throughout previous episodes, the show has told viewers that their experiences and their heartbreaks growing up have been real and powerful, from being bullied to being dumped. “The Body” manages to communicate, though, that there are a few certain life moments that change you irrevocably, and it does so without undermining the significance of other things that feel life-shattering in the moment. It presents a realistic experience of grief, reaching out to viewers who’ve endured it, and making it more comprehensible for those who haven’t. In tackling the less familiar aspects of death—boredom, shock, awkwardness—it offers a more truthful depiction of bereavement.
“The Body” is one of the most sophisticated analyses of the impact of death ever produced on television, and it remains a testament to why Buffy was so unique. “There’s things,” Tara tells Buffy about her own mother dying when she was 17. “Thoughts and reactions I had that I couldn’t understand. Or even try to explain to anybody else.” The power of Buffy is that it understands those thoughts, and does try to explain them, all in the guise of being a teen drama about vampires.
Readers of “The Gulag Archipelago” by Aleksandr I. Solzhenitsyn might remember how the book starts: “In 1949 some friends and I came upon a noteworthy news item in Nature, a magazine of the Academy of Sciences. It reported in tiny type that in the course of excavations on the Kolyma River a subterranean ice lens had been discovered which was actually a frozen stream—and in it were found frozen specimens of prehistoric fauna some tens of thousands of years old” (p. ix). That very same news item in Nature (‘Priroda’) continued by reporting what Solzhenitsyn did not: that in May 1946 unnamed Gulag prisoners recovered a nest with three complete mummified carcasses of arctic ground squirrels at a depth of 12.5 meters in the permafrost sediments of the El’ga river (the upper Indigirka river basin, Yakutia). The carcasses were extremely well preserved and “smelled of dampness immediately upon their recovery but lost the smell after having air-dried and remained in a stable condition resembling that of the mummies” (p. 76). It was suggested that they had lain in the permafrost for at least 10–12 thousand years.
It turned out to be even longer, 30,000 years, according to new radiocarbon dating.
The paper goes on to describe how the team, led by Nikolai Formozov at Lomonosov Moscow State University, sequenced DNA from one of the mummified carcasses, along with DNA from four fossils found recently in Siberian permafrost and modern-day Arctic ground squirrels. It was all a long time coming. Formozov first suggested studying the squirrels 25 years ago—a small blip of time when you’re talking about creatures that have lain in the ground for millennia but still, a long time in the span of a single scientist’s career.
* * *
The hard part was not, actually, getting access to the famous specimens, whose modern provenance is well documented. The unknown prisoners had turned the squirrels over to the Gulag camp’s geologist, Yuriy Popov, who gave two of them to the Zoological Museum in what is now St. Petersburg. The third, preserved in alcohol, is at the Magadan Regional Museum of Local Lore, closer to the camp where it was originally found.
Years ago, Formozov and his colleagues took some mummified squirrel tissue and extracted its DNA. They did it three or four times, only to end up with three or four different results. The problem, it turned out, was their lab, which was contaminated with DNA from modern-day ground squirrels they also studied. The results always matched whatever modern squirrel species happened to be in the lab recently.
Ancient DNA is tricky like this. Over time, long coiled strands of DNA fragment into shorter pieces. The older, the more fragmented, leaving few intact strands for the scientists to sequence. Meanwhile, the cells of present-day ground squirrels are bursting with fresh DNA that can easily drown out the bits of intact ancient DNA. “It is impossible to study ancient and modern DNA of the same group at the same time in a lab,” Formozov wrote to me in an email. “So we stopped.”
Then, a reunion with an old university classmate, Marina Faerman, set things back on course. At a mutual friend’s apartment in 2010, Formozov remet Faerman, who, like many Russians of Jewish descent, had moved to Israel in the 1980s. She now had a lab at the Hebrew University of Jerusalem studying ancient human DNA, which meant 1) she was an expert in handling ancient DNA and 2) her lab was uncontaminated by modern squirrels. “It took about a minute to convince her to take part in this project,” said Formozov.
Around this time, Formozov remembered the preface to The Gulag Archipelago, which he, like many in the then-Soviet Union, vividly recalls reading. Alexander Solzhenitsyn’s book was the first historical account of what happened in the Gulag camps, and it circulated in samizdat in the Soviet Union. Formozov’s family owned the first volume, he says, smuggled in from the West. The second volume was passed among friends, who had it for a few days at a time. The third volume was “extremely rare,” and he recalls having only three hours to read it in a friend’s home. If he could find the scientific paper referenced by Solzhenitsyn, Formozov thought, perhaps he could learn more about the origin of these mummified squirrels. In half an hour of scanning old copies of the journal Priroda, he found the original note by Yuriy Popov, the Gulag camp geologist.*
For Formozov, this wasn’t just an interesting bit of trivia, but the meeting of two long-time interests. About twenty-five years earlier, he learned that two of his teachers had taken part in the bloody uprising at Kengir, another Soviet labor camp, that is described in detail in The Gulag Archipelago. He became interested in the Kengir uprising and helped organize international conferences about Gulag uprisings in the early 1990s. It is with the weight of all this history in mind that he took to studying the squirrels originally found by Gulag prisoners.
* * *
The team ultimately sequenced one of the mummified squirrels found near the camp, along with four mummified squirrels more recently pulled out of the ground in Siberia. Because ancient DNA is so degraded, they sequenced only one gene. Yet that was enough to sketch out the relationships between these ancient Arctic ground squirrels and present-day ones. The Ice Age was not a period of constant cold, but a period of fluctuations, when it warmed and cooled. Where the squirrels could live changed, too. So the squirrels that once lived in eastern Russia had actually gone extinct, the DNA analysis suggested. The squirrels that live in Siberia today are descendants of a later group that crossed the Bering land bridge from Alaska to colonize the area. The only known exceptions are the squirrels on the Kamchatka peninsula that juts out of Russia’s east coast, which appear to be descended from the Ice Age squirrels. (Formozov also wrote me a magnificent email about how they collected the Kamchatka squirrels. It involves a fox.)
“I love this paper. It’s really cool,” said Tim Rabanus-Wallace, who works with ancient DNA at the University of Adelaide. “The science behind it is very solid.” At least as solid as any ancient DNA work can be, as Rabanus-Wallace knows, because he has done preliminary work sequencing ancient Arctic ground squirrels in the Yukon in Canada.
Here might be a good place to dwell on the fact that these ancient squirrel nests are also found in North America—often, in fact. The region comprising Siberia and western North America is known as Beringia, and it was of course once a contiguous landmass when the Bering land bridge existed. In the Yukon today, gold miners shoot water cannons to melt permafrost, which occasionally exposes the nests that ancient Arctic ground squirrels made to hibernate through winter.** And even more occasionally, some of those squirrels never made it out of hibernation. Bad for the squirrel, but very good for Arctic ground squirrel researchers, because underground lockers are the safest place to store a fossil. “They make the perfect kind of fossils,” said Grant Zazula, a paleontologist with the government of Yukon.
Ancient Arctic ground squirrels are also interesting, scientifically, because these little creatures build nests. In other words, they go out and do the work of collecting Ice Age flora and saving it in an underground deep freeze for you. “They’re quite a neat little record,” said Zazula. He went on, “Most paleontologists who study the Ice Age, especially men, study these big wooly mammoths and saber-toothed tigers. When I’m at the conferences, I get up and say I study ground squirrels. But that’s okay.”
Sequencing ancient Arctic ground squirrels in North America would help confirm the findings in this study. Zazula and Formozov both say they’re open to collaboration, and they’ve corresponded with each other in the past. It’s gotten harder to collaborate with scientists in Russia recently, says Zazula, given the political winds. But as the shared lineage of Arctic ground squirrels reminds us, it’s really just a short hop across the Bering Sea.
* The opening of The Gulag Archipelago goes on to describe what Popov reported, that the prisoners who found the prehistoric fauna in ice “immediately broke open the ice encasing the specimens and devoured them with relish on the spot.” In an article in the Russian literary journal Novy Mir, Formozov wrote that this was unlikely, given that animals encased in permafrost lose moisture over time and become hard like wood. It’s likely those creatures were just fish stuck in a frozen river. But did he consider, I asked, even briefly when first reading the account, that specimens might have been eaten? “Who knows what truly hungry people are capable of eating or trying to eat,” he replied. “The year 1946 was a very hungry one in Kolyma [the Gulag camp]. So we should be very thankful to those unnamed prisoners who saved those carcasses because they found the time and strength to do it in those terrible conditions.”
** You might be wondering how these little creatures dug nests in the permafrost, and the answer is they didn’t. The Ice Age was colder, which also meant there was less moisture around. With less moisture, there wasn’t enough liquid water in the ground to freeze into permafrost. These areas only later became permafrost, which had the effect of preserving the nests.
You could be forgiven for forgetting the National Day of Patriotic Devotion—technically, it happened before it was ever declared. Donald Trump established it with the stroke of a pen sometime after his inauguration; the official proclamation appeared Monday in the Federal Register.
That bit isn’t all that unusual. Presidents christen National Days Of Things all the time. President Barack Obama, for example, proclaimed the day of his own inauguration in 2009 a “National Day of Renewal and Reconciliation,” calling “upon all of our citizens to serve one another and the common purpose of remaking this Nation for our new century.” He annually declared September 11 to be “Patriot Day.” But “Patriotic Devotion” strikes a different note—flowery, vaguely compulsory.
Here’s the proclamation:
A new national pride stirs the American soul and inspires the American heart. We are one people, united by a common destiny and a shared purpose.
Freedom is the birthright of all Americans, and to preserve that freedom we must maintain faith in our sacred values and heritage.
Our Constitution is written on parchment, but it lives in the hearts of the American people. There is no freedom where the people do not believe in it; no law where the people do not follow it; and no peace where the people do not pray for it.
There are no greater people than the American citizenry, and as long as we believe in ourselves, and our country, there is nothing we cannot accomplish.
NOW, THEREFORE, I, DONALD J. TRUMP, President of the United States of America, by virtue of the authority vested in me by the Constitution and the laws of the United States, do hereby proclaim January 20, 2017, as National Day of Patriotic Devotion, in order to strengthen our bonds to each other and to our country -- and to renew the duties of Government to the people.
It’s hard to imagine Trump tweeting “A new national pride stirs the American soul.” It sounds more like the language of his inaugural address, which the administration has insisted Trump wrote himself, but which was reportedly actually written by former Breitbart News head Steve Bannon and senior adviser Stephen Miller.
The proclamation speaks of the need to “strengthen our bonds to each other and to our country.” That parallels this section from his speech on Friday:
At the bedrock of our politics will be a total allegiance to the United States of America, and through our loyalty to our country, we will rediscover our loyalty to each other. When you open your heart to patriotism, there is no room for prejudice.
Presidential proclamations can go over the top. Those who felt a shiver of discomfort reading Trump’s florid prose should remember that Bill Clinton sounded similarly grandiose when declaring, say, Irish-American Heritage Month. (Barack Obama, on the other hand, kept his proclamations relatively conversational.)
But it’s not the words that are jarring. It’s the sentiment, which seems out of step with what the president himself usually says. Trump is an individualist; his books and speeches largely center on the ability of one person—often him—to do stuff. When he talks about America as a whole, it’s usually in the frame of his “movement,” which of course reflects back on him.
The proclamation instead focuses on America as a national community—and sounds much more like Bannon. He’s said that America is more than an economy; it’s a “civic society”—implying that the number of successful immigrants in Silicon Valley poses a threat to that. When the proclamation declares that Americans must “maintain faith in our sacred values and heritage,” it’s hard to tell whether it’s a banal sentiment, or an echo of the more troubling rumbles of white-identity politics.
And like his inaugural speech, Trump’s proclamation is devoid of history. He does offer a mention of the Constitution. But while Obama often tried to broaden what it means to be an American by weaving the traditional veneration of the Founding Fathers together with other moments of American triumph, such as the civil-rights movement, Trump leaves what “patriotism” means up to interpretation. That, and the fact that the proclamation was timed to his own inauguration, led critics to suggest it means devotion to Trump himself, the defender of those “sacred values and heritage,” the rightful successor to a president whose legitimacy he never really stopped questioning.
“Patriotic Devotion” is itself a phrase with a decidedly martial ring. The last president to declare a Day of Patriotic Devotion was Woodrow Wilson, marking the enactment of a draft for World War One. In 1943, when Congress attempted to establish December 7 as a day to recognize the “patriotic devotion” of members of the armed forces, FDR vetoed the bill. “I think that a more suitable date can be selected for this purpose,” he wrote.
The president’s critics felt much the same about his decision to declare his own inauguration a holiday. If, as the proclamation declared, its intent was “to strengthen our bonds to each other,” it ended up having the opposite effect—serving as one more point of division.
If there is no doubting the good that humanitarian groups have done in the last half century, there is also no doubting that they could have done much more. Effectiveness has rarely been their lodestar, and one reason why is that the sector has lacked a stable reference point against which to judge interventions, other than the status quo. That is, in order to be considered worthwhile, a humanitarian act only had to bear the burden of making a marginal improvement in the condition of those it sought to help.
In the U.S., the lack of a more ambitious threshold made a certain kind of sense. For one, the nation’s charitable tradition has long celebrated voluntary acts of giving as expressions of individual preferences and passions, and has historically resisted any constraints, regulatory or conceptual, on donors’ choices. And the country’s deeply ingrained sense of pluralism has favored a multiplicity of approaches to addressing poverty over singling out one in particular as a standard.
But that logic may no longer hold: A new trend within the humanitarian sector—the increasing popularity of simply transferring cash to people in need—now allows for the establishment of a charitable benchmark. In other words, before a donor gives a gift—say, to support an agricultural training program in sub-Saharan Africa, or to provide food in the aftermath of an earthquake in Pakistan—they would first ask themselves whether their money would be better spent if it were given directly to the same aid recipients, letting them decide what to do with it. A no-strings-attached transfer of funds may sound indiscriminate, but as a panel of development researchers from the Center for Global Development (CGD) and the Overseas Development Institute (ODI) pronounced last year, “Cash transfers are among the most well-researched and rigorously-evaluated humanitarian tools of the last decade,” and should be thought of “as the ‘first best’ response to crises.” Given that the evidence has shown cash transfers to be an inexpensive and “highly effective way to reduce suffering,” the panel’s report went on, “The question that should be asked is ‘Why not cash?’”
Those three words are together an interrogation that breaks with the foundations of modern charitable giving, placing the onus on donors, big and small, to defend the priority of their own humanitarian prerogatives against those of the people they seek to help. For some, this will be a heavy load. “The aid sector in general is bad at trusting people and reluctant to hand over power and control,” Paul Harvey, a researcher at ODI, has written. “It’s fundamentally premised on the idea of the external experts deciding what is needed and providing it. Cash challenges that.”
Researchers’ excitement about transferring money directly may make it sound like a novel strategy, but in truth its popularity marks the rehabilitation of cash relief as an accepted form of charity. For centuries, giving money directly to those in need was considered one of the quintessential acts of kindness. In Catholic tradition, almsgiving was honored not as a means of eliminating poverty but for the intimacy it established between giver and receiver. Little consideration was given to how alms were ultimately spent—it was beside the salvific point.
Yet that began to change as poverty became increasingly understood less as a providentially determined state than as a reflection of individual moral failing. From the 15th century onward, breakdowns in the social order, produced by epidemics and war, by the erosion of the feudal system, and later, by the stirrings of industrialization, led many of those with resources to regard people in need as a suspect class that threatened their own status. For authorities who sought to organize and formalize the labor market, idleness became a primary preoccupation, and throughout Europe, begging by the able-bodied was routinely criminalized, punishable by whipping or branding.
These efforts were focused on the recipients of charity, but by the 19th century, an equal amount of energy was devoted to disciplining the donors as well. In an 1889 article that came to be known as the “Gospel of Wealth,” for instance, the steel magnate Andrew Carnegie went so far as to declare “indiscriminate” almsgiving to the poor “one of the serious obstacles to the improvement of our race.” He called out the folly of a “well-known writer of philosophic books [who had] admitted the other day that he had given a quarter of a dollar to a man who approached him as he was coming to visit the house of his friend.” The writer knew nothing of the habits of the beggar or of the uses that he would make of the money; most likely, Carnegie suggested, the beggar would spend it “improperly,” on drink or some other excess. “It were better for mankind that the millions of the rich were thrown into the sea than so spent as to encourage the slothful, the drunken, the unworthy,” he wrote.
It was Carnegie’s belief that the wealthy should apply the same attributes that led to the accumulation of a fortune toward its philanthropic redistribution. That’s one of the reasons why he argued that any capitalist who raised wages was effectively frittering away an enterprise’s earnings, since workers would only squander the gains, which could have been directed toward ambitious philanthropic ends. In other words, Carnegie’s “Gospel” insisted that those furthest removed from poverty were best equipped to combat it. Over the following century, a technocratic mode of analysis echoed this version of stewardship to make a related argument: Those endowed with advanced professional training and research expertise—as in, the men and women who began to populate the humanitarian sector—should make the decisions about how charitable resources could be best used to address global poverty.
It was against this institutional backdrop that cash transfers started gaining in popularity. As early as 1981, the Nobel Prize-winning economist Amartya Sen made a moral and philosophical case for cash transfers as a response to humanitarian crises. Beginning in the late 1990s, in an effort to reduce poverty, governments in Latin America designed conditional cash transfers, contingent on recipients’ complying with some predetermined requirement, such as regular school attendance or health check-ups for their children; by 2011, 18 countries in the region had instituted such programs, and many countries in sub-Saharan Africa developed versions as well. Then, in 2004, the tsunami that ravaged nations on the Indian Ocean prompted a number of large aid agencies to experiment with cash transfers as an alternative to in-kind assistance.
Since then, an impressive base of evidence has developed around the success of cash transfers. The humanitarian sector has become more open to rigorous self-examination, with some organizations using randomized controlled trials to measure their programs’ effectiveness. The findings of some of those evaluations have put to rest one of the governing assumptions behind the centuries-long suspicion of cash transfers: that, if given additional funds, the poor would waste them on drugs, alcohol, or some other antisocial frivolity.
Instead, the research suggests, poor recipients tend to spend money responsibly, on what they think they need most. Researchers have found that a lot of in-kind aid is wasted when it doesn’t match what recipients think they need—they just sell it or give it away. In fact, a recent four-country study that compared cash transfers to food aid found that nearly 20 percent more people could be helped at no additional cost if everyone received cash. And beyond their cost-effectiveness, unconditional cash transfers promote autonomy as a good in its own right, re-prioritizing the discretion of the beneficiary, as opposed to that of the benefactor.
The popularity of cash transfers has gotten a further boost from technological advances. Ten years ago, even if a donor had a preference for cash, many countries lacked the infrastructure to deliver it expeditiously. But the rise of digital payment systems, widespread cell-phone ownership, and increased access to financial services—more than 60 percent of all adults globally have an account at a financial institution or through a mobile device—has made cash transfers a much more viable option.
This convergence of evaluative rigor and technological advances helps explain the rise of GiveDirectly, the charity that has most aggressively implemented charitable unconditional cash transfers. GiveDirectly has developed programs in Kenya, Uganda, and Rwanda, pooling donations and then electronically transferring $1,000 to each of the households it has identified as the poorest in the region. The organization then monitors the recipients and conducts randomized controlled trials to evaluate its programs. It’s a relatively simple concept, though it took a while for GiveDirectly to convince potential funders of its merits: One fundraising pitch in Silicon Valley was reportedly met with the response, “You must be smoking crack.” But investors couldn’t ignore the organization’s data, and soon GiveDirectly had attracted funds from Google and from several of Facebook’s co-founders.
Given their effectiveness, it is surprising that cash transfers still represent a “niche form of aid,” as the CGD-ODI panel’s report phrased it. The panel estimated that cash now represents only 6 percent of global humanitarian relief, though this figure includes conditional and unconditional cash transfers, as well as vouchers that can only be applied to specific types of goods or services. But there is a strong push now to boost these figures. United Nations Secretary-General Ban Ki-moon recently called for cash-based programs to become “the preferred and default method of support” for humanitarian aid. And last year, the International Rescue Committee (IRC) announced it will “systemically default to a preference of cash over material assistance,” and committed to delivering a quarter of its humanitarian aid in the form of cash relief by 2020 (up from around 6 percent in 2015). For its part, the CGD-ODI panel did not recommend a particular percentage of cash to aim for, but argued that it should be an “ambitious” scale-up, one “an order of magnitude greater than that seen to date.”
It’s important to note that even cash transfers’ more enthusiastic boosters do not claim it as a cure-all, and insist that its appropriateness as a humanitarian or development response can still hinge on the particularities of recipients and the regions in which they live. There are still also significant implementation challenges in increasing the use of unconditional cash transfers in impoverished places throughout the world. Many areas still lack the necessary financial and technological infrastructure and have difficulty attracting the private and public investments to develop it. Even in places with the right systems, transfers are still not always possible: In 2013, after a typhoon in the Philippines damaged cell-phone towers and wiped out service in large parts of the nation, aid agencies had to deliver cash to the various islands by helicopter.
And further, there are plenty of situations in which alternative interventions prove more cost-effective than cash transfers. For example, GiveWell, a charity evaluator that has been a major supporter of GiveDirectly (and, full disclosure, to which I have served as a consultant on projects unrelated to cash transfers), has determined that distributing insecticide-treated mosquito nets—which evidence suggests people do not seem inclined to buy, even if they are offered at cheap prices—is roughly five to 10 times more cost-effective in saving lives than direct unconditional cash transfers.
More generally, though, cash might hold as great a promise for the humanitarian sector as an evaluative tool as it does as a means of aid. GiveWell has begun to use cash relief as a baseline against which to compare other charities. And Jeremy Shapiro, a co-founder of GiveDirectly, likens the potential of cash benchmarking to the success of index funds, which are investments that track the performance of major market indexes such as the S&P 500. Originating in the 1970s, index funds don’t require as much active oversight and thus often boast lower operating costs and advisory fees; their supporters suspected that they would outperform most actively managed funds—and they were right.
Cash transfers, Shapiro points out, are similar to index funds in that both have been proven to achieve relatively impressive “returns” with extremely low overhead. Just as the existence of an investment instrument with low fees forced actively managed funds to justify their costs, the emergence of cash transfers might function as a galvanizing counter-factual, requiring aid organizations to demonstrate that their favored approaches are doing more good than would simply giving the money to the poor directly. Cash “keeps us honest,” says Radha Rajkotia, the senior director of economic recovery and development at the IRC, adding, “It helps really home in on how we might design our programs differently so that we might reduce time and cost and be just as effective.”
Index funds are not a perfect analogy: Earnings provide a uniform standard by which to compare a diverse range of investment funds, whereas in the humanitarian arena, there is a range of outcomes that can be targeted—health, economic security, individual empowerment, to name just a few. But it’s possible to account for this variety of goals, and GiveDirectly has begun working with institutional donors to help them run randomized controlled trials to compare the impacts of cash to other development interventions. This research will allow donors to better understand how various amounts of cash can help the people they seek to assist—to ascertain, for instance, how much a child’s nutritional intake improves when his family is given $500, versus $1,000, in cash. The goal of this research is to create a sort of measuring stick by which aid agencies can determine which interventions make the most of donors’ money.
But cash can serve as a sort of moral benchmark as well. As Elie Hassenfeld, one of the founders of GiveWell, explains, weighing programs against cash transfers prompts a powerful thought experiment for charities: Would they be willing to take money from someone they’re trying to help, just so they can provide what they insist is needed? Would they, in a sense, be willing to impose a tax on poor families to fund their favored intervention? The same logic can guide individual donors as well. Asking that powerful three-word question—“Why not cash?”—can keep any potential giver honest, forcing them to think more critically, and perhaps even more humbly, about how they seek to do good in the world.
The stray dog came with bad news. This week, the U.S. Department of Agriculture announced that a dog near Homestead, Florida—a city 15 miles north of the Florida Keys—was found with wounds infested with screwworms, the much-dreaded flesh-eating pest.
If you’re not familiar with screwworms, it’s because the U.S. poured millions of dollars into eradicating them back in 1982. But last fall, the pest reemerged in the Florida Keys, catching almost everyone by surprise. Wildlife biologists eventually found several deer on the archipelago with the parasite. Screwworms lay eggs in open wounds, burrowing into the flesh of pets and occasionally even humans. Livestock, historically, was the big economic concern. Florida still sends hundreds of thousands of young calves to herds around the country each year, so a screwworm infestation could do some real damage.
“The screwworm is a potentially devastating animal disease that sends shivers down every rancher’s spine,” said Florida’s Commissioner of Agriculture Adam Putnam, in a statement that accompanied the official declaration of agricultural emergency last October.
The Keys infestation was bad, but at least it was somewhat isolated on the archipelago. Officials set up an animal health checkpoint at mile 106 on U.S. Highway 1, the main road that leads from the mainland to the Keys. The checkpoint would scan animals leaving the Keys—usually pets traveling with their owners—for infestation with screwworm.
It’s not clear exactly how that stray dog got infested or where it had been before it was found in Homestead. The USDA heard about the animal from a vet in the area. It has since been treated. “It’s a very treatable condition if caught early,” says USDA veterinary medical officer Robert Dickens. “The dog is doing really well.”
Individual animals can get anti-parasite drugs. But the best large-scale weapon against screwworms is sterile screwworms, deliberately sterilized using X-rays in a factory. Release enough of them and they will prevent any of the non-sterile ones from finding a mate.
This was the strategy that eradicated screwworms from the U.S. 30 years ago, and this is the strategy USDA has been using to get screwworms out of the Keys again. The USDA has released 80 million sterile screwworms across 25 sites in the Keys, and now will add Homestead to the list of release sites. On Friday, state and federal teams released sterile screwworms at Homestead, and they will continue doing so twice a week for the next six to nine weeks.
It’s still unclear where the screwworms came from. After the U.S. eradicated the pests, it partnered with countries in South and Central America, releasing sterile screwworms further and further south until they reached one of the narrowest parts of the continent, the Darien Gap in Panama. Here, millions of sterile screwworms are still dumped by the airplane-load to form an invisible but permanent sterile insect barrier.
Perhaps someone or something brought it to the Keys from further south. Perhaps it came from Cuba, Haiti, or the Dominican Republic, which have not eradicated screwworms and are just a short expanse of ocean away. Wherever it came from, it eventually reached a stray dog in Florida.
More and more, superheroes are encroaching on the world of TV (though not as much as film). But there’s plenty else to look forward to on the small screen in 2017: debuts of promising original shows, new seasons of critical favorites like The Leftovers and Fargo, and the long-awaited return of Star Trek, one of television’s most famous franchises, to its original medium. Below, a list of the 20 most exciting shows in the coming year.
In the last 10 years, audiences have gotten Wicked, Oz the Great and Powerful, and a revamped The Wiz, but NBC in its infinite wisdom thinks the world needs a fresh take on The Wizard of Oz, so here’s a “dark and edgy” reboot. Directed by the visually adventurous Tarsem Singh (who made The Fall and the Snow White update Mirror, Mirror), Emerald City features Vincent D’Onofrio as the Wizard, steampunk helicopters, and a German Shepherd Toto. Your guess as to the point of all this is as good as mine.
Though Netflix has experimented with the TV format over the years, it’s beginning to embrace the old-school multi-camera comedy as well, here revamping Norman Lear’s classic family sitcom for 2017 audiences. One Day at a Time is still a laugh-track sitcom about a single mother—Penelope (Justina Machado), a Cuban-American military veteran with a teenage daughter and “socially adept tween” son. Rita Moreno co-stars as Penelope’s mother, and early reviews are strong.
This miniseries comes from co-creators Steven Knight (who wrote Dirty Pretty Things, Locke, and Allied among others), Tom Hardy, and his father, credited as “Chips” Hardy. Unfortunately, Taboo is not about how Chips got his nickname—it’s an eight-episode story about a mysterious man (played by Tom Hardy) who returns to London in 1814 after years in Africa, seeking revenge on the East India Company for his father’s death.
Fans largely derided the 2004 film adaptation of Lemony Snicket’s series of dark children’s novels. But an eight-episode Netflix series might prove a better medium for translating the 13-book series, with author Daniel Handler (who penned the books under the Snicket pseudonym) scripting. Neil Patrick Harris stars as the villainous Count Olaf, with Joan Cusack, Patrick Warburton, and Alfre Woodard among the supporting cast.
Here’s the pitch: Lenny Belardo (Jude Law) is the new pope (the first American in history to hold the position), and boy, is he young. Created by the operatic Italian director Paolo Sorrentino (behind such films as The Great Beauty and Youth), The Young Pope isn’t going for subtlety, but it has already aired in Italy and the UK to critical acclaim. HBO is positioning the show as its big winter prestige drama, and if nothing else, it looks like opulent fun. Diane Keaton co-stars as Lenny’s mentor and personal secretary.
A millennial revamping of the Archie Comics universe, which features a surprisingly buff Archie Andrews (K.J. Apa), a “philosophically bent” Jughead (whatever that may mean), a mysterious past for Cheryl Blossom, and Luke Perry as Archie’s grizzled dad. Betty and Veronica are on board too, of course. As silly as some of the details might sound, early word on the show is strong, perhaps because it was developed by the veteran comic-book and TV writer Roberto Aguirre-Sacasa.
Powerless
Thursday on NBC
Premieres February 2
Unlike Marvel’s attempts at television tie-ins to its superhero movies, Powerless (which is connected to the DC comics universe) is a show about ordinary folks trying to live their lives around the chaos of Superman, Batman, and their ilk saving lives. Vanessa Hudgens and Danny Pudi star as office drones working in the bowels of Bruce Wayne’s company, with Alan Tudyk as their tyrannical boss. Essentially, it’s a workplace sitcom, just with superheroes whizzing around in the background.
Santa Clarita Diet
Friday on Netflix
Premieres February 3
Sprung from the truly underrated comic mind of Victor Fresco (Andy Richter Controls the Universe, Better Off Ted), Santa Clarita Diet might be his first series to not suffer an untimely cancellation despite critical acclaim, thanks to the less ratings-crazed honchos at Netflix. Drew Barrymore and Timothy Olyphant star as real-estate agents getting divorced, whose lives then take an unspecified “dark turn.”
After eight seasons, a revival, and a TV movie, Kiefer Sutherland’s Jack Bauer is officially retired, but Fox is reviving the real-time action of 24 yet again, focusing on a new agent at the fictional Counter-Terrorism Unit. Eric Carter (Corey Hawkins), an ex-Army Ranger, is sure to get sucked into many a deadly conspiracy, with Miranda Otto and Jimmy Smits among the ensemble he has to protect. Will the show work without Kiefer screaming into his cellphone?
Viewers are getting another superhero spin-off show, but this one’s adjacent to the X-Men universe and based on the character Legion, a schizophrenic who realizes there may be more to his diagnosis. Legion is a tricky character to get right, but the show is from the talented Noah Hawley (Fargo) and has a terrific cast, including Dan Stevens (Downton Abbey), Aubrey Plaza (Parks and Recreation), and Fargo alumnus Jean Smart.
Much ink has been spilled about Girls since its debut in 2012, but this is everyone’s last shot to weigh in on Hannah Horvath (Lena Dunham) and company’s antics—season six will be the last. These final 10 episodes will surely air free of controversy and nary a hot take will be written about them, especially ones about what the show has meant for millennial culture.
One potential successor for Girls is another HBO quasi-sitcom shepherded to screen by comedy vet Judd Apatow. Pete Holmes, a brilliant stand-up who has long been in search of the right TV project (his TBS late-night talk show was canceled too soon), created Crashing and based it on his own life in the stand-up world. It follows Pete (Holmes, playing himself), who’s trying to cobble together a living as a comedian while dealing with the collapse of his marriage.
A spinoff of long-running legal drama The Good Wife, this show is being used to launch CBS’s new streaming service CBS All Access, premiering on the TV network before bouncing over to the online subscriber-only network. Set a year after The Good Wife’s finale, The Good Fight follows Diane Lockhart (Christine Baranski) as she moves to a new Chicago law firm with her former colleague Lucca Quinn (Cush Jumbo) and her goddaughter Maia (Rose Leslie of Game of Thrones). Delroy Lindo co-stars as a rival attorney.
The Netflix Marvel universe continues to build out with its most problematic superhero—Danny Rand (played by Finn Jones of Game of Thrones), a billionaire industrialist who has been missing for 15 years and returns to New York proficient in kung fu. Though Rand (created in the ’70s during the first pop-culture boom for martial-arts movies in the U.S.) has always been written as white, he may be an uncomfortable sight for Marvel in 2017, especially after the whitewashing controversy over its film Doctor Strange.
The Handmaid’s Tale
Wednesday on Hulu
Premieres April 26
Margaret Atwood’s 1985 masterpiece of speculative fiction, set in a totalitarian theocracy where women’s rights have been erased by a religious movement that seized power in the United States, seems quite relevant to the contemporary political mood, and has never gotten a proper adaptation. Hulu is positioning this 10-episode series as its prestige drama of the spring, with Elisabeth Moss starring as Offred; Joseph Fiennes, Samira Wiley, and Yvonne Strahovski co-star.
Fargo
Returns this spring on FX
No official premiere date has been set for the third season of Fargo, but the cast is as stacked as ever, with Ewan McGregor playing dual lead roles, and Carrie Coon (The Leftovers), David Thewlis, Michael Stuhlbarg, and Mary Elizabeth Winstead all involved. Unlike the last season, the show will be set in the near-present day (2010, according to reports), opening up the possibility that characters from its first season could return (though the creator Noah Hawley is keeping quiet about any plot details).
The Leftovers
Returns this April on HBO
In its second season, this TV adaptation of Tom Perrotta’s novel The Leftovers built on its promise to become one of the best, most fascinating, confounding shows on television. Perhaps cognizant that his last great show (Lost) lasted a little too long, co-creator Damon Lindelof is ending the show this year, shifting the action to Australia but keeping the entire main cast (minus Ann Dowd’s dearly departed Patti Levin). The show will struggle to top season 2’s episode “International Assassin,” but whatever Lindelof and Perrotta do come up with is sure to be fascinating.
A much-hyped adaptation of Neil Gaiman’s 2001 novel, focusing on Shadow Moon (Ricky Whittle), an ex-con recruited by the mysterious Mr. Wednesday (Ian McShane), who is recruiting the world’s forgotten gods (to say more would be spoiling). Adapted by Bryan Fuller (Hannibal) and Michael Green (Kings), the series, which was initially in production at HBO, promises to be a suitably epic take on Gaiman’s writing. (The first season will cover only the first third of his novel).
Star Trek: Discovery
Premieres this May on CBS All Access
The first Star Trek show since Enterprise concluded in 2005, Discovery has a heavy fan burden to shoulder, but an exciting cast and crew aboard. Set 10 years before the original Star Trek series, Discovery will not take place in the world of the recent rebooted film series, instead charting the journey of the USS Discovery after weathering a much discussed but secret “incident” in Trek lore. Sonequa Martin-Green (The Walking Dead) stars as Rainsford, the ship’s second officer (the first non-commanding officer to headline a Trek series). Like The Good Fight, this show will premiere on CBS then move to its All Access streaming network.
Game of Thrones
Returns this summer on HBO
Knocked from its usual April perch by a filming schedule that demanded more time in snowier climes (winter has, after all, come to Westeros), the seventh and penultimate season of Game of Thrones will arrive at some point this summer on HBO for an abridged seven episodes. Will George R. R. Martin finish his next book before then? Will Jon Snow and Daenerys Targaryen become the best of friends? Will Cersei Lannister’s quest for power finally end in glory? All that, and much more, as HBO tries to squeeze as much life out of its big ticket as possible before the whole shebang wraps in 2018.
In the brushfire wars since Donald Trump won the presidency, skirmishes over how to speak to his coalition of voters have consumed liberals. Leading the vanguard in those conversations is a collection of writers and thinkers of otherwise divergent views, united by the painful process of reexamining identity politics, social norms, and—most urgently—how to address racism in an election clearly influenced by it. Though earnest and perhaps necessary, their emphasis on the civil persuasion of denizens of "middle America" effectively coddles white people. It mistakes civility for the only suitable tool of discourse, and persuasion as its only end.
This exploration of how to best win over white Americans to the liberal project is exemplified by reactions to Hillary Clinton’s placing many of Donald Trump’s supporters in a “basket of deplorables.” The debate about whether to classify these voters as racist or bigoted for supporting a candidate who constantly evinced views and policies many believe to be bigoted is still raging. As Dara Lind at Vox expertly notes, Clinton’s comments themselves were inartful precisely because they seemed focused solely on “overt” manifestations of racism, like Klan hoods and slurs. That focus ignores the ways in which white supremacy and patriarchy can function as systems of oppression, tends to forgive the more refined and subtle racism of elites, and may ultimately lead to a definition of racism in which no one is actually racist and yet discrimination remains ubiquitous.
At New York, Jesse Singal offered one of the most thorough critiques of Clinton for alienating white Americans with blunt language. His colleague Drake Baer recently made a similar point, arguing that even if racism is systemic and white supremacy does actually endow millions of Americans with advantages, saying so plainly threatens to stifle constructive debate, because people will object to being labeled that way. For Singal and Baer, getting white Americans to join a diverse coalition and support initiatives for societal equality involves implicit appeals and cajoling, rather than explicit call-outs. Both of their arguments rely on a body of psychological research that supports the practicality of finesse in interpersonal conversations in changing individual views.
Mother Jones’s Kevin Drum argues that both the broadening of the definition of racism and the actual application of the label stifle debate. Drum writes:
It's bad enough that liberals toss around charges of racism with more abandon than we should, but it's far worse if we start calling every sign of racial animus—big or small, accidental or deliberate—white supremacy. I can hardly imagine a better way of proving to the non-liberal community that we're all a bunch of out-of-touch nutbars who are going to label everyone and everything we don't like as racist.
Taken independently, these conclusions seem reasonable, though they are not backed by hard evidence. Perhaps broad definitions of racism and white supremacy really do muddle conversations, especially among people without the same level of critical understanding.
My colleague Conor Friedersdorf adds another layer to the debate in discussing a black social-media user’s critique of Drum’s argument, in which he points out that the usage of stigma and shame in disagreements among liberals about definitions of racism and white supremacy predicts their larger inability to build coalitions with people outside of the liberal experiment. “The coalition that opposes Trump needs to get better at persuading its fellow citizens and winning converts, rather than leaning so heavily on stigmatizing those who disagree with them,” Friedersdorf writes. “Among other problems with wielding stigma, it doesn’t work.” This again is a very reasonable and well-intentioned call for civility in discourse, especially in dealing with racism.
In the aggregate, though, these calls for civility threaten to impose a burden on people of color. If calling out racism is largely counterproductive, using a systemic definition like white supremacy is also unacceptable, and stigmatizing or shaming those who espouse racist beliefs is self-defeating, what tools remain? The only form of productive debate that people of color can engage in, it seems, is the gentle persuasion of white people who may or may not hold retrograde views.
That advice is probably most appealing to white Americans, for whom the social cost of being called racist may loom larger than the effects of racism itself, or for whom the ideal of a functioning marketplace of civil ideas is more important than the worry that they might be carved out of it. White Americans share a vested interest in not being called racist, straight people in not being called homophobic, and men in not being called misogynistic. Arguments in favor of civility cede valuable rhetorical ground by default and coddle people who may well know the score about their own views. Skepticism should be a default position here; instead, the bigoted views of individuals are privileged as artifacts of ignorance, and thus not considered as purposeful efforts to sabotage debate.
Minorities may suffer from racism and bigotry that goes well beyond incivility, but these arguments urge that it is their job in debate to remain civil, because that is the only productive way to reach across the aisle. As Singal notes, this is useful advice because racism is not always accepted as any single thing, and the parameters of the debate often depend on whether or not a belief or characteristic is actually racist. Americans probably differ in their definitions of and tolerance for racism. Also, research cited by Baer and Singal finds that white people respond to being called “racist” in a way that resembles receipt of a slur, and in some experiments changed their biases in response to cooperation as opposed to heated debate. As they note, research also supports Friedersdorf’s claim that stigma may not be useful in one-on-one interactions.
But there are limits to the conclusions of those studies, and they are complicated by other evidence. For one, a 2013 study from researchers at the University of Wisconsin-Madison finds that constant explicit disclosure of an individual’s biased views—albeit in a scientific setting and based on an assessment—are vital in diminishing bias, as are constant interactions with stereotyped groups and targeted information. In real-world settings, research indicates that those constant interactions are made more difficult given that the very presence of minority people around white people increases bigoted views. The research Singal and Baer cite also seems to mostly revolve around episodic interpersonal interactions, and not necessarily around the complex sociological processes by which social mores are made, enforced, and internalized.
But even if we do assume that levying claims of racism and shame is counterproductive in persuading white people to join diverse coalitions, there is another suspect claim at work here: that persuasion is the sole end-goal for argument.
For people who suffer the uncivil burden of bigotry, that claim doesn’t quite hold up. Sometimes the goal of argument is to vent. Sometimes it is to simply tell the truth. Sometimes it’s just to loudly proclaim one’s own humanity. The general burden to always remain civil in arguments—even if it means coddling white egos and turning a blind eye to obvious bigotry—can even cut against the need to commit to truth-telling at any cost. Civil discussions with people who themselves may have already breached the bounds of civility are difficult.
One way around that difficulty for marginalized folks is abandoning civility. The labels of racism and bigotry can impose a social cost on bigoted actions, policy preferences, or speech, regardless of whether hearts or minds are changed. Stigma can be useful.
Further, the goal of arguments isn’t necessarily to directly change one opponent, but often to convince onlookers and create social incentives. Such was the gist of Clinton’s statement: She was not intending to convince Trump supporters to not be bigoted, but to draw people who see themselves as opposed to bigotry into her corner. Motivated candidates and institutions can create social conditions and stigmas by which bigotry is diminished, and they also change the way in which media transmit information and people absorb it. Imagine if the same outrage manifest in media coverage about the ideas of microaggressions and safe spaces pioneered by marginalized people had been marshaled against stubborn implicit racial biases and resistance to multiculturalism among whites, or if the useless term “racially charged” in media descriptions of racist things had been replaced with something more potent, like “racist.”
The main thing that this debate could use is a discussion of the effects of rigorously calling out racism on people who suffer from the effects of racism. In the vein of W.E.B. Du Bois’s thought, and Ida B. Wells’s anti-lynching work, perhaps there is an element of empowerment among people of color in calling out racism. Part of this is the effect of stigma itself: Stigmatization and appeals to moral rightness are among the most effective ways to seize power when dispossessed of it. But also, calling out racism aids its victims in understanding the powers at play in their own lives, and is the foundation of solidarity for many people of color. There is a reason why movements like the civil-rights movement and Black Lives Matter that have had dramatic impacts on the course of American history have developed around rather vivid and unflinching call-outs of white supremacy and racism, even leveled against their own white members.
The movements and empowerment built around calling out racism are what give activists the vocabulary to disassemble it, regardless of whether they choose to use the tactics of civility in individual conversations or not. The ultimate irony of Drum’s objection to expanding definitions of white supremacy is that it took decades of open emotional appeals by black people to persuade—and perhaps stigmatize—the country into believing that segregation, disenfranchisement, and other actions of “real” racists were in fact racist. Given the objections to incivility that Wells, Martin Luther King, Jr., and other black leaders faced generations ago, it is rather clear that incivility watered the rhetorical ground on which both sides of the debate over racism today stand.
Those concerns among communities of color seem to muddle Singal’s conclusion, focused as it is on psychological rather than sociological analysis, and on the reactions of recalcitrant white people rather than the transformative development of people of color. Maybe, in a limited sense, Singal is right: White Americans can be persuaded to join the liberal project by individual interlocutors jettisoning identity politics and abandoning moralizing about racism. But maybe incivility can be used to empower people of color, establish social penalties for racism, and change social mores and modes of mass communication, which all in the aggregate could push white society towards inclusion and away from bias. Or perhaps calling out racism just helps people of color cope with racism.
Civility is not the highest moral imperative—especially in response to perceived injustices—nor is hand-holding and guiding reluctant people to confront their bigotry gently. American history is full of fights, including the ongoing struggle for civil rights, that have been as fierce as they are ultimately effective. Civility is overrated.
An almost-full, half-pie, waxing moon hanging lopsided in the night sky has long been a symbol of things to come. Now scientists have a new symbolism for the lunar phase we call first quarter: a looming risk of earthquakes.
The moon is (mostly) responsible for Earth’s tides, which are strongest when the sun and moon are aligned, during a full moon or a new moon. It’s small, the moon, but so close that its gravity stretches and compresses the water across the globe. When the sun and moon align, the tidal range is at its largest, producing spring tides; when they pull at right angles, the range shrinks to its smallest, producing neap tides. The moon pulls on the Earth’s crust, too, but only a tiny bit, especially compared to the breath-like rise and fall of an ocean.
Still, scientists have wondered for years whether the moon might play a role in earthquakes, which are essentially movements of the Earth’s crust atop its mantle. It would make sense that the moon’s gravity could tug at a fault in the crust, especially one that is already close to failing and slipping. But going back to the 1800s, nobody had demonstrated firm evidence for this. A new study gets closer to drawing this link.
Studying data from the past two decades, Satoshi Ide and colleagues from the University of Tokyo measured the timing of high tides and reconstructed the amplitude of the moon’s pull at those times, focusing on the two weeks prior to large earthquakes. They measured the amplitude of the tides against the timing of those quakes, and found some of the largest and most devastating earthquakes in recent memory happened when the Earth’s crust was under the highest tidal stress.
Ide and colleagues noticed the Dec. 26, 2004 Sumatran earthquake, most notable for its horrendous, deadly tsunami, occurred near the time of full moon and spring tide. So did the Feb. 27, 2010 temblor in Maule, Chile. These quakes both happened close to the peak of tidal stress, when the moon and sun teamed up to exert the greatest gravitational influence over Earth. The March 11, 2011 Tohoku-Oki earthquake in Japan, which caused that country’s devastating tsunami, occurred during the neap tide, but the tidal stress was high at that time.
The study couldn’t find any correlation between the tides and small earthquakes, but previous research has suggested there’s a link there, too. Nicholas Van Der Elst, a seismologist with the US Geological Survey, published a study in July that looked at low-frequency earthquakes in the notorious San Andreas fault, and found they were more likely to occur during the moon’s waxing phase—which we’re in at the moment—when the tide increases in size at the fastest rate.
The mechanisms underlying this connection are not clear, however. The moon’s pull causes tidal disruptions that are orders of magnitude lower than those experienced in an earthquake. And not every change in tide comes with an attendant earthquake. Part of the problem is that scientists still don’t know exactly what causes a major earthquake. But one theory holds that they begin as smaller fractures that build up via a cascading process.
“We know from studying rock friction in the laboratory that the fault does not go from locked up to sliding in an instant. It can take hours, days, or even longer for the fault to really come un-glued, even when the stress has exceeded the supposed strength,” says Van Der Elst.
The deep tremors that can lead to major earthquakes can be very sensitive to tidal stress changes, according to Ide and colleagues. “The probability of a tiny rock failure expanding to a gigantic rupture increases with increasing tidal stress levels,” they say.
In other words, during a new moon or full moon, a small increase in tidal stress might be enough to encourage a very small fracture into a major earthquake.
But that’s only if the research holds up, as seismologists continue to pore over larger data sets. Ide and colleagues point out that at least three other quakes, in November 2006, January 2007, and September 2007, were not correlated with times of large tidal stress. “What about the next few earthquakes? We will have to wait and see,” says Van Der Elst.
Not all large earthquakes are caused by the moon’s movements. But some of them might be, and so we’d do well to pay closer attention to the subtle yet powerful ways in which the moon exerts its influence on our planet, especially in regions prone to earthquakes. Scientists may have use for ancient omens yet.
In 1690, English philosopher John Locke claimed that “brutes abstract not.” By “brutes,” he meant our fellow animals. By “abstract,” he meant pulling out general concepts from specific examples. While humans might look at chalk, snow, and milk, and conceive of a property called whiteness, other animals would not. “The having of general ideas is that which puts a perfect distinction betwixt man and brutes,” Locke said. “They are the best of them tied up within those narrow bounds, and have not (as I think) the faculty to enlarge them.”
Mm-hmm. Try telling that to Alex Kacelnik’s ducklings.
Within hours of hatching, ducklings and many other young birds rapidly learn the traits of their mothers. This process, known as imprinting, is one of the fastest and most reliable forms of learning in nature. Typically, it ensures that the vulnerable youngsters follow the right individuals around. But ducklings can also mistakenly imprint on birds of the wrong species, humans, or even inanimate moving objects like bouncing balls.
“In a way, imprinting appears to be simple,” says Kacelnik, who studies animal behavior at the University of Oxford. “But it’s extremely complex because mum is an extremely complex collection of properties. So what is it that a young animal stores in its brain to recognize the identity of its mother?”
A devotee of Locke would argue that the ducklings are just picking up simple traits—perhaps a smell, sound, color, or shape. But Kacelnik and his student Antone Martinho III showed that they can do more. The duo presented newborn ducklings with pairs of objects that were either identical or different in shape or color. And they found that the birds could learn these traits. They weren’t imprinting on a specific shape or color, but on the concepts of “same” or “different.” They were looking beyond the individual objects to think about how they are related. In short, “they were abstracting properties,” says Kacelnik.
“I’m very impressed with the results, and the fact that after so many studies on imprinting, nothing like this has been done before,” says Nathan Emery from Queen Mary University of London, who studies bird behavior and was not involved in the research. “I think this has got the potential to revolutionize our field and will be discussed for years to come.”
This is far from the first blow to the idea that abstract thought is a human-only skill. Researchers have shown that other primates, including chimps and monkeys, can discriminate collections of the same items from sets of different ones—a skill that has been described as “the keel and backbone of thought and reasoning.” Rats can tell same from different too, as apparently can pigeons, parrots, crows, and even bees.
But in almost all of these experiments, the animals were extensively trained. They saw many combinations of objects, which were variously paired with rewards, before being tested. Martinho and Kacelnik did nothing of the kind. They hatched mallard eggs in the dark and, within an hour, ushered the newborn ducklings straight into an experiment.
Put yourself in the ducklings’ shoes. You leave the dark and enter a white arena with a pair of hanging objects—a red cone and a red cylinder—revolving around the center. You study them for 25 minutes before being herded back into the dark waiting room. When you reenter the arena, there are now two pairs of objects. In one corner, two pyramids revolve around each other. In the other, a cube goes round a cylinder. Neither pair exactly matches the one you saw before. But the cube and cylinder have a familiar property—they’re different from each other. So you waddle over to them.
Around 68 percent of the young birds headed towards the relation that they had seen before. If they had imprinted on one identical pair of objects, they were more likely to prefer a second identical pair over a different one. But if they imprinted on mismatched objects, they preferred other mismatched pairs over identical ones. “Their brains are accumulating a catalog of properties that could apply to other objects,” says Kacelnik.
As Edward Wasserman from the University of Iowa notes in a related commentary, the ducklings “needed to see those relations exemplified by only a single pair of stimuli.” In that sense, they actually outperform human babies. Seven-month-old infants can also pick up concepts of same or different without any supervision, but only after seeing at least four pairs of objects. The ducklings did it in one.
But Jennifer Vonk from Oakland University argues that the ducklings might just have been treating the two objects as one, rather than considering the relations between them. “They may have simply seen one color versus two colors rather than a relationship of sameness or difference,” she says.
Kacelnik says that’s unlikely because the objects were constantly moving and ducklings were seeing them from ever-changing perspectives. But “we do need to investigate in greater detail exactly what’s going on,” he says. “It’s not a finished story.”
Does this mean that ducklings are highly intelligent? Probably not. It might be that we’ve overestimated how important abstract concepts are to intelligence. “We make so many assumptions about which cognitive abilities are more or less complex than others, without actually knowing much about the underlying mechanisms,” says Emery. “We now have to question whether relational concepts should still be considered a form of complex cognition if they can be programmed into very young brains.”
Perhaps there are different levels of abstract concepts, from simple ones that young birds can quickly learn after limited experience, to complex ones that adult birds can cope with. And perhaps birds pick up these concepts in different ways from other animals.
“Both humans and birds can generalize these abstract relations quickly, but are they using the same mechanisms to do so, or different mechanisms that lead to the same ability to learn relations?” adds Alissa Ferry from the International School for Advanced Studies. “We know there are certain situations that can help or hinder humans from learning abstract relations and it is an open question if we find the same patterns in birds as well.”
These are questions that Kacelnik will pursue. “There’s been a transformation in the last few decades of what we think animals are capable of,” he says. “We think we’re starting to understand better what it is to be a bird.”
I'm sure you've already seen this, but just in case.
A former Stanford swimmer who sexually assaulted an unconscious woman was sentenced to six months in jail because a longer sentence would have “a severe impact on him,” according to a judge. At his sentencing, his victim read him a letter describing the “severe impact” the assault had on her.
On summer weekends, as shadows stretch over the fresh-mown grass, and the flag hangs torpid in the swelter, it’s only natural to want to share a beer with your dog.
At least, that’s how the ad copy might read—although politically correct “veterinarians” will tell you that dogs should never be given beer, because their livers don’t metabolize alcohol in the same way humans’ do. Other ingredients like hops, too, can reportedly “cause violent reactions in many canines.” It’s clear from information at www.canigivemydog.com/beer that “no matter how much they beg, alcoholic beverages should be off limits to pets.”
So it’s none too soon that this month a company called Woof and Brew released a beer specifically for dogs.
It has no alcohol, or hops, or carbonation, but it’s otherwise meant to recreate the beer experience as closely as possible in a way that’s safe for the digestive tracts of dogs. The formula contains barley malt, dandelion, flax, and “chicken flavoring”—like so many good beers.
The point is not to mimic the flavor, though, but to foster social bonding and ritual, explains Steve Bennett, managing director of the U.K.-based Woof and Brew, “so that humans can share a beer of their own and a beer for their dog as well.” This is Bennett’s first attempt at dog beer, though the company has been making herbal teas and tonics for dogs since 2013.
“There’s a huge move to more healthy products for dogs—in tandem with the human move to more healthy products,” Bennett said. His company has an eye to new global distribution partnerships for its dog drinks. “I think all of this is linked to the humanization of our pets, and ensuring that what we feed them is healthy, not just something we’ve designed for ourselves and hope is healthy for dogs.”
This carries a sense of urgency and legitimacy because, in dogs just as in humans, he notes, obesity and diabetes are pervasive problems.
Is there an herbal drink that's supposed to help with those?
“No,” he explains. But the company is coming out with an “anxiety blend” that has lavender and rose petal, which he believes are both very good for calming dogs—during fireworks, for instance, or when traveling. In addition to these isolated triggers, The Merck Veterinary Manual notes that dogs can also be diagnosed with “a more generalized anxiety, in which the fearful reaction is displayed in a wide range of situations to which a ‘normal’ pet would be unlikely to react.” And as it becomes common for such dogs to be prescribed anxiety medications like Xanax and Valium, it’s predictable that a market for non-pharmaceutical products would arise.
But just as in humans, this beer is not recommended as a treatment for anxiety.
“It's very much to enjoy time with your dog,” Bennett emphasized.
Though this, in its own way, has merit in alleviating anxiety. It’s certainly a more appealing way of bonding than lapping rainwater out of a human version of a dog bowl.
So while the dog beer may not be inherently healthy, time spent bonding with a pet is clearly valuable to the health of humans. Last year in Science, for instance, researchers reported that simply gazing into a dog’s eyes produced an oxytocin surge similar to the one that occurs when parents bond with infants. Animal therapy programs have been shown to improve health outcomes inside and outside of hospitals. Approaches like these are worth emphasizing in a health economy obsessed with high-tech solutions and wanting for social mechanisms to decrease stress that can seem beyond our control.
“Is this going to compete with a headline about Trump?” Bennett asked with a laugh from across the ocean.
I said I was sure it would compete with a lot of headlines.
He regained composure, steering the conversation back to American politics. “It’s embarrassing,” admonished the man who makes beer for dogs.
What if it were possible to choose your dreams every night? The Kiev-based company Luciding claims that its signature headset, called the LucidCatcher, puts you in control of your dreams by administering a minor electrical shock to the brain during REM sleep. MEL films toured the company and created this short documentary, Kiev Dreamers, to highlight the growing industry centered around escapism in Ukraine. “Many people are trying to somehow escape reality,” says Luciding CEO Nikita Antonov of the current climate in the country. “But they are doing it through drugs and alcohol, and we are opposed to that.” To see more work from MEL, visit their website and Vimeo page.
Just use this as your ad all the time, Hillary. No additional commentary necessary.
Nearly everyone expects that a presidential campaign pitting Donald Trump against Hillary Clinton will be unusually nasty. Already, Trump has made nasty comments about the presumptive Democratic nominee and her husband.
But he didn’t always feel that way.
Speaking on Fox News in 2012, in the context of a prospective Hillary run in 2016, Trump heaped extravagant praise on Clinton and her husband. Here’s how he put it:
Hillary Clinton I think is a terrific woman. I’m a little biased because I’ve known her for years. I live in New York. She lives in New York. And I’ve known her and her husband for years and I really like them both a lot. And I think she really works hard. And again, she’s given an agenda that’s not all of her. But again, I think she really works hard. I think she does a good job. And I like her.
Many Trump supporters feel that their candidate is the only one who says what he really thinks and tells it like it is. That cannot be true, for the simple reason that so much of what Trump says now is directly contradicted by what he said in the recent past.
Today, he says, Hillary is an incompetent failure who was complicit in her husband’s abuse of women. A few years ago, the Clintons were terrific people and Hillary was a hard worker who did a good job. Trump flip-flops exactly like every other opportunistic politician.
This was one of the most depressing things I've read in a long time.
Alejandro Nieto was killed by police in the San Francisco neighborhood where he spent his whole life. Solnit examines the case surrounding his death and the disintegration of the communities displaced by "disruption."
At 235 feet below sea level, the Salton Sea occupies what was once known as the Salton Sink, a forbidding sunken expanse of ancient dry lake bed with a black mud sub-surface that made crossings treacherous. The area was generally avoided until the end of the 19th century, when land developers realized that the area’s alluvial soil and the hot climate would, with irrigation from the nearby Colorado River, produce valuable farmland. A series of canals were built and water flowed in; soon, more than 10,000 farmers and farm workers relocated to the Salton Sink, now grandly rechristened the Imperial Valley, and quickly put 100,000 acres of land under cultivation.
In the spring of 1905, following extreme rains, the Colorado River flooded and blew out a weakly constructed irrigation canal. All efforts to seal the breach failed—for 18 months, the river continued to flood into the Salton Sink, filling it up with fresh water like an enormous shallow tub [covering nearly a thousand square miles of land. ...]
In the 1950s, with the rising popularity of the nearby desert resort of Palm Springs, developers once again saw opportunity in the Salton Sea. Towns like Salton City and Bombay Beach cropped up along its shoreline, along with resorts catering to tourists interested in water sports, fishing, and swimming. Meanwhile, fish that had been introduced to the lake were flourishing, and by the late 1950s the Salton Sea was the most productive fishery in California. At its peak, the Salton Sea was drawing 1.5 million visitors annually, more than Yosemite.
Unfortunately, little thought and few resources were devoted to the management of this accidental lake. As a terminal lake, the Salton Sea lacks any outflow, and in the late 1970s a series of heavy tropical storms caused the water level to rapidly rise and flood its banks. The surrounding towns and businesses were severely damaged, many beyond repair, and tourism began to shift away. In the 1990s the lake began to recede dramatically, stranding many of the remaining residences and businesses, as changing water-management priorities diverted more water from agricultural areas to cities. [...]
By the early 2000s, it had become clear that the lake was headed for disaster: The agricultural runoff that sustained the lake contained not only fertilizer and pesticides, but high quantities of salt. Over the years, the salinity rose enough to kill off the lake’s fish species, even salt-water fish.
In response to the video, our own James Fallows shared a personal connection with the doomed Salton Sea:
It is particularly interesting/alarming for me, because I could well have been in one of those shots from the Leave it to Beaver era. Several times in the late Fifties and early Sixties my dad would dragoon the rest of us for the broiling drive down through the desert to Salton City and neighboring developments, with the fantasy that it could be a “good investment” for the family to buy a lakeshore lot there. Thank goodness he never followed through.
“But I can promise that a year from now, when I no longer hold this office, I’ll be right there with you as a citizen — inspired by those voices of fairness and vision, of grit and good humor and kindness that have helped America travel so far.”
Well. I’m sure Obama is very inspired by all those voices, and I am sometimes too, but I would like to know which voices in particular are counseling him about one of the great issues of our time: dog pants.
And not just any issue around canine locomotive fashion, but the great one: If a dog wore pants, how would he wear them?
As I wrote last month, and as all right-thinking people agree, the four-legged pants are correct. Dogs have four legs, and pants cover all your legs, ergo: Dog pants should be four-trousered, and the left-hand image wins.
“You gotta go with this,” Obama said immediately, pointing to the diagram featuring a drawing of a dog with pants on just its hind legs. “This, I think, it’s a little conservative…too much fabric,” he said about the pair that would cover all four legs.
In Aviemore, northern Scotland, huskies and sledders are now preparing for the 33rd annual Aviemore Sled Dog Race, organized by the Siberian Husky Club of Great Britain. The race is run by over 1,000 sled dogs pulling 250 mushers around Loch Morlich, in the Cairngorm mountains. Gathered below are images of the race and participants from recent years.
At 6:30 a.m. on Thursday, October 29, 2009, Friederike Meckel Fischer’s doorbell rang. There were 10 policemen outside. They searched the house, put handcuffs on Fischer—a diminutive woman in her 60s—and her husband, and took them to a remand prison. The couple had their photographs and fingerprints taken and were put in separate cells in isolation. After a few hours, Fischer, a psychotherapist, was taken for questioning.
The officer read back to her the promise of secrecy she had each client make at the start of her group-therapy sessions:
I promise not to divulge the location or names of the people present or the medication. I promise not to harm myself or others in any way during or after this experience. I promise that I will come out of this experience healthier and wiser. I take personal responsibility for what I do here.
“Then I knew I was really in trouble,” she says.
The Swiss police had been tipped off by a former client whose husband had left her after they had attended therapy. She held Fischer responsible.
What got Fischer in trouble were her unorthodox therapy methods. Alongside separate sessions of conventional talk therapy, she offered a catalyst to help her clients reconnect with their feelings, with people around them, and with difficult experiences in their lives. That catalyst was LSD. In many of her sessions, they would also use another substance: MDMA, or ecstasy.
Fischer was accused of putting her clients in danger, dealing drugs for profit, and endangering society with “intrinsically dangerous drugs.” Such psychedelic therapy is on the fringes of both psychiatry and society. Yet LSD and MDMA began as medicines for therapy, and new trials are testing whether they could be again.
* * *
In 1943, Albert Hofmann, a chemist at the Sandoz pharmaceutical laboratory in Basel, Switzerland, was trying to develop drugs to constrict blood vessels when he accidentally ingested a small quantity of lysergic acid diethylamide, or LSD. The effects shook him. As he writes in his book LSD, My Problem Child:
Objects as well as the shape of my associates in the laboratory appeared to undergo optical changes … Light was so intense as to be unpleasant. I drew the curtains and immediately fell into a peculiar state of ‘drunkenness,’ characterized by an exaggerated imagination. With my eyes closed, fantastic pictures of extraordinary plasticity and intensive color seemed to surge towards me. After two hours, this state gradually subsided and I was able to eat dinner with a good appetite.
Intrigued, he decided to take the drug a second time in the presence of colleagues, an experiment to determine whether it was indeed the cause. The faces of his colleagues soon appeared “like grotesque colored masks,” he writes:
I lost all control of time: Space and time became more and more disorganized and I was overcome with fears that I was going crazy. The worst part of it was that I was clearly aware of my condition though I was incapable of stopping it. Occasionally I felt as being outside my body. I thought I had died. My ‘ego’ was suspended somewhere in space and I saw my body lying dead on the sofa. I observed and registered clearly that my ‘alter ego’ was moving around the room, moaning.
But he seemed particularly struck by what he felt the next morning: “Breakfast tasted delicious and was an extraordinary pleasure. When I later walked out into the garden, in which the sun shone now after a spring rain, everything glistened and sparkled in a fresh light. The world was as if newly created. All my senses vibrated in a condition of highest sensitivity that persisted for the entire day.”
Hofmann felt it was of great significance that he could remember the experience in detail. He believed the drug could hold tremendous value to psychiatry. The Sandoz labs, after ensuring it was non-toxic to rats, mice, and humans, soon started offering it for scientific and medical use.
One of the first to start using the drug was Ronald Sandison. The British psychiatrist visited Sandoz in 1952 and, impressed by Hofmann’s research, left with 100 vials of what was by then called Delysid. Sandison immediately began giving it to patients at Powick Hospital in Worcestershire who were failing to make progress in traditional psychotherapy. After three years, the hospital bosses were so pleased with the results that they built a new LSD clinic. Patients would arrive in the morning, take their LSD, then lie down in private rooms. Each had a record player and a blackboard for drawing on, and nurses or registrars would check on them regularly. At 4 p.m. the patients would convene and discuss their experiences, and then a driver would take them home, sometimes while they were still under the influence of the drug.
Around the same time, another British psychiatrist, Humphry Osmond, working in Canada, experimented with using LSD to help alcoholics stop drinking. He reported that the drug, in combination with supportive psychiatry, achieved abstinence rates of 40–45 percent—far higher than any other treatment at the time or since. Elsewhere, studies of people with terminal cancer showed that LSD therapy could relieve severe pain, improve quality of life, and alleviate the fear of death.
In the U.S., the CIA tried giving LSD to unsuspecting members of the public to see if it would make them give up secrets. Meanwhile, at Harvard University, Timothy Leary—encouraged by, among others, the beat poet Allen Ginsberg—gave it to artists and writers, who would then describe their experiences. When rumors spread that he was giving drugs to students, law-enforcement officials started investigating and the university warned students against taking the drug. Leary took the opportunity to preach about the drug’s power as an aid to spiritual development, and was soon sacked from Harvard, which further fueled his and the drug’s notoriety. The scandal had caught the eye of the press and soon the whole country had heard of LSD.
By 1962, Sandoz was cutting back on its distribution of LSD, the result of restrictions on experimental-drug use brought on by an altogether different drug scandal: birth defects linked to the morning-sickness drug thalidomide. Paradoxically, the restrictions coincided with an increase in LSD’s availability—the formula was neither difficult nor expensive to obtain, and those who were determined to synthesize the drug could do so in great quantities.
Still, moral panic about its effects on young minds was rife. The authorities were also worried about LSD’s association with the counterculture movement and the spread of anti-authoritarian views. Calls for a nationwide ban soon followed, and many psychiatrists stopped using LSD as its negative reputation grew.
One of many stories in the press told of Stephen Kessler, who murdered his mother-in-law and claimed afterwards that he didn’t remember what he’d done as he was “flying on LSD.” In the trial, it emerged that he had taken LSD a month earlier, and at the time of the murder was intoxicated only with alcohol and sleeping pills, but millions believed that LSD had turned him into a killer. Another report told of college students who went blind after staring at the sun on LSD.
Two U.S. Senate subcommittee hearings in 1966 heard from doctors who claimed that LSD caused psychosis and “the loss of all cultural values,” as well as from LSD supporters such as Leary and Senator Robert Kennedy, whose wife Ethel was said to have undergone LSD therapy. “Perhaps to some extent we have lost sight of the fact that it can be very, very helpful in our society if used properly,” said Kennedy, challenging the Food and Drug Administration for shutting down LSD-research programs.
Possession of LSD was made illegal in the U.K. in 1966 and in the U.S. in 1968. Experimental use by researchers was still possible with licenses, but with the stigma attached to the drug’s legal status, these became extremely hard to get. Research ground to a halt, but illegal recreational use carried on.
* * *
At the age of 40, after 21 years of marriage, Friederike Meckel Fischer fell in love with another man. Sadly, as she soon discovered, he was using her to get out of his own marriage. “I had a pain within myself with this man having left me, with my husband whom I couldn’t connect to,” she says. “It was just like I was out of myself.”
Her solution was to become a psychotherapist. She says she never thought of going into therapy herself, which in 1980s West Germany was reserved for only the most serious conditions. Besides which, her upbringing taught her to do things herself rather than seek help from others.
Fischer was at the time working as an occupational physician. She recognized that many of the problems she saw in her patients were rooted in problems with their bosses, colleagues, or families. “I came to the conclusion that everything they were having trouble with was connected to relationship issues,” she says.
A former professor of hers recommended she try a technique called holotropic breathwork. Developed by Stanislav Grof, one of the pioneers of LSD psychotherapy, this is a way to induce altered states of consciousness through accelerated and deeper breathing, like hyperventilation. Grof had developed holotropic breathwork in response to bans on LSD use around the world.
Over three years, traveling back and forth to the U.S. on holidays, Fischer underwent training with Grof as a holotropic breathwork facilitator. At the end of it, Grof encouraged her to try psychedelics.
In the last seminar, a colleague gave her two little blue pills as a gift. When she got back to Germany, Fischer shared one of the blue pills with her friend Konrad, who later became her husband. She says she felt herself lifted by a wave and thrown onto a white beach, able to access parts of her psyche that were off-limits before. “The first experience was breathtaking for me,” she says. “I only thought: ‘That’s it. I can see things.’ And I started feeling. That was, for me, unbelievable.”
The pills were MDMA, a drug that had entered the spotlight in 1976 when the American chemist Alexander ‘Sasha’ Shulgin rediscovered it, 62 years after it was patented by Merck and then forgotten. In a story echoing that of LSD’s origins, Shulgin noted feelings of “pure euphoria” and “solid inner strength” upon taking it, and felt he could “talk about deep or personal subjects with special clarity.” He introduced it to his friend Leo Zeff, a retired psychotherapist who had worked with LSD and believed the obligation to help patients took priority over the law. Zeff had continued to work with LSD secretly after its prohibition.
MDMA’s potential brought Zeff out of retirement. He traveled around the U.S. and Europe to instruct therapists on MDMA therapy. He called it “Adam” because it put the patient in a primordial state of innocence, but at the same time, it had acquired another name in nightclubs: ecstasy.
MDMA was made illegal in the U.K. by a 1977 ruling that put the entire chemical family in the most tightly controlled category: class A. In the U.S., the Drug Enforcement Administration (DEA), set up by Richard Nixon in 1973, declared a temporary ban in 1985. At a hearing to decide its permanent status, the judge recommended that it should be placed in schedule three, which would allow it to be used by therapists. But the DEA overruled the judge’s decision and put MDMA in schedule one, the most restrictive category. Under American influence, the U.N. Commission on Narcotic Drugs gave MDMA a similar classification under international law (though an expert committee formed by the World Health Organization argued that such severe restrictions were not warranted).
Schedule-one substances are permitted to be used in research under the U.N. Convention on Psychotropic Substances. In Britain and the U.S., researchers and their institutions must apply for special licenses, but these are expensive to obtain, and finding manufacturers who will supply controlled drugs is difficult.
But in Switzerland, which at the time was not a signatory to the convention, a small group of psychiatrists persuaded the government to permit the use of LSD and MDMA in therapy. From 1985 until the mid-1990s, licensed therapists were permitted to give the drugs to any patients, to train other therapists in using the drugs, and to take them themselves, with little oversight.
Believing that MDMA might help her gain a deeper understanding of her own problems, Fischer applied for a place on a “psycholytic therapy” course in Switzerland. In 1992, she and Konrad were accepted into a training group run by a licensed therapist named Samuel Widmer.
The course took place on weekends every three months at Widmer’s house in Solothurn, a town west of Zurich. Central to the training was taking the substances a number of times, 12 altogether, to get to know their effects and go through a process of self-exploration. Fischer says the drug experiences showed her how her whole life had been colored by the loss of her father at the age of 5 and the hardship of growing up in postwar West Germany.
“I can detect relations, interconnections between things that I couldn’t see before,” she says of her experiences with MDMA. “I could look at difficult experiences in my life without getting right away thrown into them again. I could, for example, see a traumatic experience but not connect to the horrible feeling of the moment. I knew it was a horrible thing, and I could feel that I have had fear, but I didn’t feel the fear.”
* * *
People on psychedelic highs often speak of profound, spiritual experiences. Back in the 1960s, Walter Pahnke, a student of Timothy Leary, conducted a notorious experiment at Boston University’s Marsh Chapel showing that psychedelics could induce such experiences.
He gave 10 volunteers a large dose of psilocybin—the active ingredient in magic mushrooms—and 10 an active placebo, nicotinic acid, which caused a tingling sensation but no mental effects. Eight of the psilocybin group had spiritual experiences, compared with one of the placebo group. In later studies, researchers have identified core characteristics of such experiences, including ineffability, the inability to put it into words; paradoxicality, the belief that contradictory things are true at the same time; and feeling more connected to other people or things.
“When the experience can be really useful is when they feel a connection even with someone who has caused them hurt, and an understanding of what may have caused them to behave in the way they did,” says Robin Carhart-Harris, a psychedelics researcher at Imperial College London. “I think the power to achieve those kinds of realizations really speaks to the incredible value of psychedelics and captures why they can be so effective and valuable in therapy. I think that can only really happen when defenses dissolve away. Defenses get in the way of those realizations.”
“They feel a connection with someone who caused them hurt, and an understanding of what may have caused them to behave that way.”
He compares the feeling of connection with things beyond oneself to the “overview effect” felt by astronauts when they look back on the Earth. “All of a sudden they think, ‘How silly of me and people in general to have conflict and silly little hang-ups that we think are massive and important.’ When you’re up in space looking down on the entirety of the Earth, it puts it into perspective. I think a similar kind of overview is engendered by psychedelics.”
Carhart-Harris is conducting the first clinical trial to study psilocybin as a treatment for depression. He is one of a few researchers across the world who are pushing ahead with research on psychedelic therapy. Twelve people have taken part in his study so far.
They begin with a brain scan, and a long preparation session with the psychiatrists. On the therapy day, they arrive at 9 a.m., complete a questionnaire, and have tests to make sure they haven’t taken other drugs. The therapy room has been decorated with drapes, ornaments, colored glowing lights, electric candles, and an aromatizer. A Ph.D. student, who is also a musician, has prepared a playlist, which the patient can listen to either through headphones or from high-quality speakers in the room. They spend most of the session lying on a bed, exploring their thoughts. Two psychiatrists sit with them, and interact when the patient wants to talk. The patients have two therapy sessions: one with a low dose, then one with a high dose. Afterwards, they have a follow-up session to help them integrate their experiences and cultivate healthier ways of thinking.
I meet Kirk, one of the participants, two months after his high-dose session. Kirk had been depressed, particularly since his mother’s death three years ago. He experienced entrenched thought patterns, like going round and round on a racetrack of negative thoughts, he says. “I wasn’t as motivated, I wasn’t doing as much, I wasn’t exercising any more, I wasn’t as social, I was having anxiety quite a bit. It just deteriorated. I got to the point where I felt pretty hopeless. It didn’t match really what was going on in my life. I had a lot of good things going on in my life. I’m employed, I’ve got a job, I’ve got family, but really it was like a quagmire that you sink into.”
At the peak of the drug experience, Kirk was deeply affected by the music. He surrendered himself to it and felt overcome with awe. When the music was sad, he would think of his mother, who had been ill for many years before her death. “I used to go to the hospital and see her, and a lot of the time she’d be asleep, so I wouldn’t wake her up; I’d just sit on the bed. And she’d be aware I was there and wake up. It was a very loving feeling. Quite intensely I went through that moment. I think that was quite good in a way. I think it helped to let go.”
During the therapy sessions, there were moments of anxiety as the drug’s effects started to take hold, when Kirk felt cold and became preoccupied with his breathing. But he was reassured by the therapists, and the discomfort passed. He saw bright colors, “like being at the fun fair”, and felt vibrations permeate his body. At one point, he saw the Hindu elephant god Ganesh look in at him, as if checking on a child.
Although the experience had been affecting, he noticed little improvement in his mood in the first 10 days afterwards. Then, while out shopping with friends on a Sunday morning, he felt an upheaval. “I feel like there’s space around me. It felt like when my mum was still alive, when I first met my partner, and everything was kind of OK, and it was so noticeable because I hadn’t had it in a while.”
There have been ups and downs since, but overall, he feels much more optimistic. “I haven’t got that negativity any more. I’m being more social; I’m doing stuff. That kind of heaviness, that suppressed feeling has gone, which is amazing, really. It’s lifted a heavy cloak off me.”
Another participant, Michael, had been battling depression for 30 years, and had tried almost every treatment available. Before taking part in the trial, he had practically given up hope. Since the day of his first dose of psilocybin, he has felt completely different. “I couldn’t believe how much it had changed so quickly,” he says. “My approach to life, my attitude, my way of looking at the world, just everything, within a day.”
“That kind of heaviness, that suppressed feeling has gone. It’s lifted a heavy cloak off me.”
One of the most valuable parts of the experience helped him to overcome a deep-rooted fear of death. “I felt like I was being shown what happens after that, like an afterlife,” he says. “I’m not a religious person and I’d be hard pushed to say I was anything near spiritual either, but I felt like I’d experienced some of that, and experienced the feeling of an afterlife, like a preview almost, and I felt totally calm, totally relaxed, totally at peace. So that when that time comes for me, I will have no fear of it at all.”
* * *
During her training with Samuel Widmer, Fischer also worked in an addiction clinic. The insights from her drug experiences gave her new empathy. “All of a sudden I could understand my clients in the clinic with their alcohol addiction,” she says. “They were coping differently than I did. They had almost the same problems or symptoms I had, only I hadn’t started drinking.” But only a few of them were able to open up about how those experiences made them feel. She wondered: Could an MDMA experience help them release those emotions?
MDMA is a tamer relative of the classic psychedelics: psilocybin, LSD, mescaline, and DMT. Those drugs have effects that can be disturbing, like sensory distortions, the dissolution of one’s sense of self, and the vivid reliving of frightening memories. MDMA’s effects are shorter-lasting, making it easier to handle in a psychotherapy session.
Fischer opened her own private psychedelic-therapy practice in Zurich in 1997. During the next few years, she began hosting weekend group-therapy sessions with psychedelics in her home, inviting clients who had failed to make progress in conventional talk therapy.
Since the 1950s, psychiatrists have recognized the importance of context in determining what sort of experience the LSD taker would have. They have emphasized the importance of “set”—the user’s mindset, their beliefs, expectations, and experience—and “setting”—the physical milieu where the drug is taken, the sounds and features of the environment and the other people present.
A supportive setting and an experienced therapist can lower the risk of a bad trip, but frightening experiences still happen. According to Fischer, they are part of the therapeutic experience. “If a client is able to go through or lets himself be led through and work through, the bad trip turns into the most important step on the way to himself,” she says. “But without a correct setting, without a therapist who knows what he’s doing and without the commitment of the client, we end up in a bad trip.”
Her clients would come to her house on a Friday evening, talk about their recent issues, and discuss what they wanted to achieve in the drug session. On Saturday morning, they would sit in a circle on mats, make the promise of secrecy, and each take a personal dose of MDMA agreed with Fischer in advance. Fischer would start with silence, then play music, and speak to the clients individually or as a group to work through their issues. Sometimes she would ask other members of the group to assume the role of a client’s family members, and have them discuss problems in their relationship. In the afternoon they would do the same with LSD, which would often let the participants feel as though they were reliving traumatic memories. Friederike would guide them through the experience, and help them understand it in a new way. On Sunday, they would discuss the experiences of the previous day and how to integrate them into their lives.
Fischer’s practice, however, was illegal. Therapeutic licenses to use the drugs had been withdrawn by the Swiss government around 1993, following the death of a patient in France under the effect of ibogaine, another psychotropic drug. (It was later determined that she died from an undiagnosed heart condition.)
* * *
The early LSD researchers had no way to look at what it was doing inside the brain. Now we have brain scans. Robin Carhart-Harris has carried out such studies with psilocybin, LSD, and MDMA. He tells me there are two basic principles of how the classic psychedelics work. The first is disintegration: The parts that make up different networks in the brain become less cohesive. The second is desegregation: The systems that specialize for particular functions as the brain develops become, in his words, “less different” from each other.
These effects go some way to explaining how psychedelics could be therapeutically useful. Certain disorders, such as depression and addiction, are associated with characteristic patterns of brain activity that are difficult to break out of. “The brain kind of enters these patterns, pathological patterns, and the patterns can become entrenched. The brain easily gravitates into these patterns and gets stuck in them. They are like whirlpools, and the mind gets sucked into these whirlpools and gets stuck.”
Psychedelics dissolve patterns and organization, introducing “a kind of chaos,” says Carhart-Harris. On the one hand, chaos can be seen as a bad thing, linked with things like psychosis, a kind of “storm in the mind,” as he puts it. But you could also view that chaos as having therapeutic value. “The storm could come and wash away some of the pathological patterns and entrenched patterns that have formed and underlie the disorder. Psychedelics seem to have the potential through this effect on the brain to dissolve or disintegrate pathologically entrenched patterns of brain activity.”
The therapeutic potential suggested by Carhart-Harris’s brain-scan studies persuaded the U.K.’s Medical Research Council to fund the psilocybin trial for depression. It’s too early to evaluate its success, but the results so far have been encouraging. “Some patients are in remission now months after having had their treatment,” Carhart-Harris says. “Previously their depressions were very severe, so I think those cases can be considered transformations. I’m not sure if there are any other treatments out there that really have that potential to transform a patient’s situation after just two treatment sessions.”
* * *
In the wake of MDMA’s prohibition, the American psychologist Rick Doblin founded the Multidisciplinary Association for Psychedelic Studies (MAPS) to support research aiming to reestablish psychedelics’ place in medicine. When the Swiss psychiatrist Peter Oehen heard they were funding a study on using MDMA to help people with post-traumatic stress disorder (PTSD), he jumped on a plane to meet Doblin in Boston.
Like Friederike, Oehen trained in psychedelic therapy while it was legal in Switzerland in the early 1990s. Doblin agreed to support a small study with 12 patients at Oehen’s private practice in Biberist, a small town about half an hour by train from the Swiss capital, Bern.
Oehen thinks that MDMA’s mood-elevating, fear-reducing, and pro-social effects make it a promising tool to facilitate psychotherapy for PTSD. “Many of these traumatized people have been traumatized by some kind of interpersonal violence and have lost their ability to connect, are distrustful, are aloof,” Oehen says. “This helps them regain trust. It helps build a sound and trustful therapeutic relationship.” It also puts the patient in a state of mind where they can face their traumatic memories without becoming distressed, he says, helping them begin to reprocess the trauma in a different way.
Psychedelics dissolve patterns and organization, introducing “a kind of chaos.”
When MAPS’s first PTSD study in the U.S. was published in 2011, the results were eye-opening. After two psychotherapy sessions with MDMA, 10 out of 12 participants no longer met the criteria for PTSD. The benefits were still apparent when the patients were followed up three to four years after the therapy.
Oehen’s results were less dramatic, but all of the patients who had MDMA-assisted therapy felt some improvement. “I’m still in touch with almost half of the people,” he says. “I can see still people getting better after years going on in the process and resolving their problems. We saw this at long-term follow-up, that symptoms get better after time, because the experiences enable them to get better in a different way to normal psychotherapy. These effects—being more open, being more calm, more willing to face difficult issues—this goes on.”
In people with PTSD, the amygdala, a primitive part of the brain that orchestrates fear responses, is overactive. The prefrontal cortex, a more sophisticated part of the brain that allows rational thoughts to override fear, is underactive. Brain-imaging studies with healthy volunteers have shown that MDMA has the opposite effects—boosting the prefrontal cortex response and shrinking the amygdala response.
Ben Sessa, a psychiatrist working around Bristol in the U.K., is preparing to carry out a study at Cardiff University testing whether people with PTSD respond to MDMA in the same way. He believes that early negative experiences lie at the root not just of PTSD but of many other psychiatric disorders too, and that psychedelics give patients the ability to reprocess those memories.
“I’ve been doing psychiatry for almost 20 years now and every single one of my patients has a history of trauma,” he says. “Maltreatment of children is the cause of mental illness, in my opinion. Once a person’s personality has been formed in childhood and adolescence and into early adulthood, it’s very difficult to encourage a patient to think otherwise.” What psychedelics do, more than any other treatment, he says, is offer an opportunity to “press the reset button” and give the patient a new experience of a personal narrative.
Sessa is planning a separate study to test MDMA as a treatment for alcohol-dependency syndrome—picking up the trail of Humphry Osmond’s LSD research 60 years ago.
He believes psychiatry would look very different today if research with psychedelics had proceeded unencumbered since the 1950s. Psychiatrists have since turned to antidepressants, mood stabilizers, and antipsychotics. These drugs, he says, help to manage a patient’s condition, but aren’t curative, and also carry dangerous side-effects.
“We’ve become so used to psychiatry being a palliative care field of medicine,” Sessa says, “that we’re with you for life. You come to us in your early 20s with severe anxiety disorder; I’ll still be looking after you in your 70s. We’ve become used to that. And I think we’re selling our patients short.”
Will psychedelic drugs ever be ruled legal medicines again? MAPS is supporting trials of MDMA-assisted psychotherapy for PTSD in the U.S., Australia, Canada, and Israel, and it hopes to have enough evidence to convince regulators to approve it by 2021. Meanwhile, trials using psilocybin to treat anxiety in people with cancer have been taking place at Johns Hopkins University and New York University since 2007.
Few psychiatrists I asked about the legal use of psychedelics in therapy would give their opinions. One of the few who did, Falk Kiefer, the medical director at the Department of Addictive Behavior and Addiction Medicine at the Central Institute of Mental Health in Mannheim, Germany, says he is skeptical about the drugs’ ability to change patients’ behavior. “Psychedelic treatment might result in gaining new insights, ‘seeing the world in a different way.’ That’s fine, but if it does not result in learning new strategies to deal with your real world, the clinical outcome will be limited.”
Carhart-Harris says the only way to change people’s minds is for the science to be so good that funders and regulators can’t ignore it. “The idea is that we can present data that really becomes irrefutable, so that those authorities that have reservations, we can start changing their perspective and bring them around to taking this seriously.”
* * *
After 13 days under arrest, Fischer was released. She appeared in court in July 2010, accused of violating the narcotics law and endangering her clients, the latter of which could mean up to 20 years’ imprisonment. A number of neuroscientists and psychotherapists testified in her defense, arguing that a single dose of LSD is not a dangerous substance and has no significant harmful effects when taken in a controlled setting (MDMA was not included in the prosecution’s case).
The judge accepted that Fischer had given her clients drugs as part of a therapeutic framework, with careful consideration for their health and welfare, and found her guilty of distributing LSD but not guilty of endangering people. For the narcotics offense, she was fined 2,000 Swiss francs and given a 16-month suspended sentence with two years’ probation.
“I have been blessed by a very understanding lawyer and an intelligent judge,” she says. She even considers the woman who reported her to the police a blessing, since the case has allowed her to talk openly about her work with psychedelics. She gives occasional lectures at psychedelic conferences, and has written a book about her experience, which she hopes will guide other therapists in how to work with the substances safely.
When Velveth Monterroso arrived in the U.S. from her hometown in Guatemala, she weighed exactly 140 pounds. But after a decade of living in Oklahoma, she was more than 70 pounds heavier and fighting diabetes at the age of 34. This friendly woman, a mother of two children, is a living embodiment of the obesity epidemic afflicting the world’s wealthiest country. “In Guatemala it is rare to see people who are very overweight, but it could not be more different here,” she said. “I saw this when I came here.”
As soon as she arrived in the U.S. she started piling on pounds—an average of seven each year. In Guatemala, she ate lots of vegetables because meat was expensive. But working from 8 o'clock in the morning until 11 at night as a cook in an Oklahoma City diner, she would skip breakfast and lunch while snacking all day on bits of burger and pizza. Driving home, she would often resort to fast food because she was hungry and exhausted after a 15-hour day slaving over a hot grill. If she and her husband Diego—also a cook—made it back without stopping, they would often gorge on whatever was available rather than wait to cook a decent meal.
Her lifestyle was no healthier when she stopped working after having her second child eight months ago. She was tired and her family encouraged her to drink lots of atole—a heavily sweetened corn-based drink popular in Central America—to aid the breastfeeding of her new daughter, Susie. Sugar levels in her body soared, and on top of her obesity she became pre-diabetic.
Velveth’s life was changed—and probably ultimately saved—when she took Susie for a medical check-up and was enrolled in a program to curb obesity. Now she eats fast food just once a week, cooks more vegetables, has cut down the number of tortillas consumed at meals and exercises daily by walking up and down stairs for 20 minutes. Although still overweight, in just four months she has lost 16 of those pounds gained in America. “All my friends are impressed,” she told me with a smile. “I feel like I have so much more energy now. I can do the shopping and laundry, bathe the baby, and I’m not nearly so tired as before.”
Velveth is one beneficiary of a remarkable attempt to tackle obesity: Oklahoma City has declared war on fat. First the mayor—realizing he had become clinically obese just as his hometown was identified by a magazine as one of America’s most overweight cities—challenged his citizens to collectively lose a million pounds. But hitting that target was just the start: This veteran Republican politician then took on the car culture that shaped his nation and asked citizens to back a tax rise to fund a redesign of the state capital around people.
This unleashed an incredible range of initiatives, including the creation of parks, sidewalks, bike lanes, and landscaped walking trails across the city. Every school is getting a gym. With the new emphasis on exercise, city officials spent $100 million creating the world’s finest rowing and kayaking center in a Midwest town with no tradition of the sport beforehand. Overweight people are targeted at home and at work to alter their lifestyles, while data are used to discover the districts with the worst health outcomes so that resources can be poured in to change behavior.
The experiment is unusual in terms of its ambition, breadth and cost, all of which take it beyond anything being attempted by other American cities in the fight against fat. The battle is being fought with, rather than against, the fast-food industry and soda manufacturers, relying largely on persuasion instead of coercion through soda bans and sugar taxes. The city has been dubbed “a laboratory for healthy living.” Yet what makes the experiment quite so extraordinary is that it is being attempted in Oklahoma.
The city is one of the nation’s most spread-out urban environments, covering 620 square miles, which means its 600,000 residents rely on cars; there are so many freeways they quip that “you can get a speeding ticket at rush hour.” Not only did the city not have a single bike lane, it also reputedly had the highest density of fast-food outlets in America, with 40 McDonald’s restaurants alone. It sits in a state seen as cowboy country filled with ultra-conservative Okies, symbolized by The Grapes of Wrath, John Steinbeck’s definitive 1930s novel about poor farmers driven away by drought and hardship. The economy collapsed again in the 1980s amid the energy crisis, with bank closures and another generation drifting away; then came the terrible 1995 bombing that killed 168 people.
Churches began setting up running clubs, schools discussing diet, companies holding contests to lose weight.
The man behind the transformation is Mick Cornett, a former television sportscaster who became mayor in 2004. Three years later he was flicking through a fitness magazine when he noticed his city had been given the unwanted accolade of having the worst eating habits in the U.S. and was prominent on a list of the nation’s most obese populations. This coincided with his own reluctant acceptance, after checking his personal details on a government website, that at almost 220 pounds he was obese.
“This list of obesity affected me as mayor, and when I then got on the scales it affected me personally. I have always exercised and I remember thinking that I did not eat between meals, yet I was eating 3,000 calories a day. As mayor people are always wanting to meet with you, so it was not unusual to have a business breakfast, then a lunch with someone, then a function dinner. And in between there can be events with snacks and cookies.”
Cornett’s response was to start losing weight by watching what he ate; today he is almost 40 pounds lighter. But he also began to think about the issue, wondering why America was ignoring such a massive problem. His eventual conclusion was that this was because no one had any real solutions to the crisis. At the same time, the mayor began to look afresh at the culture and infrastructure of his city, realizing how the extent of reliance on cars had alienated human beings from enjoying and using their own urban environments.
His first step was to challenge citizens to join him on a diet. Using his flair for publicity after 20 years in television, he announced that he wanted Oklahoma City to lose one million pounds. He made the announcement standing in front of the elephant enclosure at the local zoo on New Year’s Eve, aware of the media focus on diets in the days after the festive excess. He persuaded a health-care magnate to fund an information website called This City Is Going On A Diet—and was relieved over the following days as local papers backed his campaign and the national media praised it.
Churches began setting up running clubs, schools discussing diet, companies holding contests to lose weight; chefs in restaurants competed to offer healthy meals. More importantly for the mayor, people across the city began discussing a crisis spiraling out of control. Almost one-third of adult Oklahomans are obese, while the state ranks among the worst in fruit consumption and has one of the lowest life expectancies in America. Diabetes rates nearly doubled in a decade. Perhaps most alarmingly, more than one in five children aged 10 to 17 suffer from obesity and almost one-third of pre-school infants are overweight.
Ashley Weedn, the medical director of a specialist child-obesity clinic that opened three years ago in Oklahoma City, told me they were seeing “incredible” cases of four-year-olds with high cholesterol and children consuming five times the daily sugar allowance in soft drinks alone. “We are even coming across kids with joint problems usually associated with much older people because of the strain on their legs, which we are seeing as early as six. This can involve surgery because of the pressure on bones leading to abnormal growth, which can lead to misshapen limbs.”
Despite some flak from doctors, Cornett decided from the start to work with the food and drink industry. So the soda sector sponsors health programs to fight obesity, and the mayor even posed with the boss of Taco Bell in one of the chain’s outlets to publicize a low-fat menu; indeed, he keeps one of the company’s promotional cut-outs in his office and proudly showed it to me when we met. “Even when I lost weight I would go to a fast-food place, although I might have a bean burrito without sour cream,” he told me. “I could not stop people going to them, but I could try to make them more discerning with their orders. You can’t totally change people’s habits.”
In January 2012 the city hit the mayor’s million-pound target—47,000 people had signed up, losing on average more than 20 pounds apiece. An admirable achievement, with the campaign proving a clever way to raise awareness. But for all the publicity, Cornett’s ambitions had grown way beyond that original simple stunt: Now he wanted to remake his huge metropolis by remolding it around people in place of cars. Or as he explained it, “putting the community back in the community.” Yet although he is these days hailed as an urban visionary, he readily admits there was no “grand plan” at the outset.
Oklahoma City has been a sprawling place since the day it was founded with a land grab in 1889, when thousands of settlers raced from a gunshot to stake out their land. Like most U.S. cities, it is criss-crossed with thunderous multi-lane freeways and developed around the car. Pedestrians and cyclists were largely ignored, with few sidewalks and no bike lanes. When Cornett began the first of his record-breaking four terms as mayor in 2004 the city was still emerging from the economic collapse of the 1980s; he was lucky to inherit the legacy of a predecessor who understood the need to create a nicer living environment to attract families and professionals, and who did so by building a new canal and sports arenas.
He was partly spurred into action by another of those lists loved by magazines, when his hometown was labeled worst for walking in the country. Cornett contacted a planning expert named Jeff Speck, who conducted a survey of the city that concluded it had twice as many car lanes as needed. The result was the dismantling of its one-way system, seen as encouraging faster driving, along with the start of a project to install hundreds of miles of sidewalks, parks, trees, bike lanes, sports facilities, and on-street parking to create a “steel barrier” between those thundering freeways and pedestrians.
“Obesity is the underlying cause of almost every chronic condition we have in Oklahoma.”
The scale is impressive. The city’s downtown is being rebuilt, while next up is the creation of a 70-acre central park, since studies show people exercise more if close to green spaces. “The American health-care crisis is an urban design problem,” argues Speck, author of a book called Walkable City. “The lack of attention to such issues has been a huge black hole. Data shows that physical health and obesity are tied much more to physical exercise than to diet. But what makes Oklahoma unique is their willingness to invest so generously, for which they must be commended.”
Cornett estimates about $3 billion has come from public funds, with up to five times that sum spent by the private sector riding his city’s renaissance. There was, for example, just one struggling hotel downtown at the turn of the century; today there are 15, and it was difficult to find a room at short notice. Remarkably, residents voted to pay for this redevelopment with a 1-cent rise in the local sales tax, which raises about $100 million a year; other funds have been taken from tobacco settlements and rising income from property taxes as firms and people are attracted back. Oklahoma City currently has among the lowest unemployment in the country, which blows away the dusty Grapes of Wrath clichés.
The most unexpected part of the makeover can be found a few minutes’ walk from the city’s entertainment district of Bricktown, where one of the world’s finest rowing facilities has been created in the heart of the Midwest. This is a city that even the mayor’s chief of staff says was a “horrible” place when growing up. Yet what was once a dried-up river in a dilapidated ditch best avoided by decent folks at night is now a sparkling 3-mile stretch of water, fringed with lush landscaping, futuristic-looking boathouses, bike lanes and floodlights.
According to Shaun Caven, a 47-year-old Scotsman who led the gold-medal-winning British canoe and kayak team at the 2008 Olympics before becoming head coach at the Oklahoma City Boathouse, this will be the best setup in the world upon the completion of its $45 million white-water course. There are even altitude-training facilities in one of those high-tech boathouses. “People thought I was mad when I moved here; they said there’s no water, since the impression is of a bone-dry landscape,” said Caven. “But I liked the fact there was no history and the chance to start something from nothing.”
The river feels a long way from rowing’s upper-crust heritage: People on paddleboards and school parties on dragon boats share the water with U.S. Olympic teams in training under the searing sun. Efforts are made to attract people from across society: 50 firms have joined a corporate rowing league, while eight local high schools have their own boats. Among those I met there was Bob Checorski, a 76-year-old sweating from his exertions after rowing an impressive 11,000 meters, who told me he began six years ago after losing his free gym membership at work. “I do it for relaxation rather than racing—although I did win a silver medal in a doubles race soon after joining, with a guy who’d had open-heart surgery,” he said. “Now I just go out and enjoy myself.”
But plush sports facilities, nice parks and pleasant sidewalks can only go so far in fighting a culture of rampant obesity; many people need encouragement, help and even prodding to alter lethal lifestyles. And Oklahoma has some of the highest mortality rates in the U.S. So six years ago the city started poring over all available data to find its least healthy zip codes, discovering that some disadvantaged parts suffer five times as many deaths from strokes and cardiovascular conditions as wealthier areas. This led to the redirection of funds to places most in need.
“Obesity is the underlying cause of almost every chronic condition we have in Oklahoma,” said Alicia Meadows, the director of Planning and Development at Oklahoma City-County Health Department. “If you direct significant resources into areas of greatest health inequalities, we think you make the biggest difference.” They have an eight-strong team of outreach staff going to markets, sports events and even calling door-to-door in areas where data indicate people are in need of the most help. “We make it clear we don’t want to see their papers; we know many are undocumented. But their health impacts on the city’s health.”
These outreach officials come from the same communities they seek to change. One is a mother of two from an impoverished Mexican background, who told me she used to know nothing about nutrition; now she has lost 70 pounds and taken up kickboxing. I watched Dontae Sewell, another convert, lead a “Total Wellness” class in a library, making self-deprecating jokes about scoffing burgers at barbecues as he explained the etiquette of healthy eating. “If your friends love you, they still gonna visit even if you just serve them vegetables,” he declared.
The lesson was good-spirited, with lots of banter and little homilies alongside advice on when, what, and where to eat. The class of 22 women and one man, mostly overweight and some clearly obese, had lost 200 pounds between them in five weeks. “We want to see our grandkids,” one middle-aged mother told me afterwards. Sewell, with a chunky silver cross around his neck, asked how many of the class ate at the table; just four raised their hands. Then he asked how many fast-food outlets they passed on their way home from work. “Two dozen,” replied one woman. “Too many,” said another, laughing. “Don’t be too hard on yourselves,” said Sewell. “It’s about small changes and creating new habits.” Afterwards he confessed only about one-third stuck long term to their lifestyle changes.
The city has also built specialist “Wellness Campuses” in its worst-afflicted areas, the first in a low-income, largely African-American area to the northeast of the city. The slick new building—filled with medical clinics, communal meeting rooms and kitchens for cooking demonstrations—sits in verdant grounds dotted with walking and bike trails. Patients at the public–private partnership can see specialists in everything from nutrition to domestic violence, taking home prescriptions for food boxes and soon even for running shoes and vests. The local soccer team is building its training ground beside the campus to encourage participation in sport.
There is no doubt Oklahoma City and its fat-fighting mayor deserve credit for their war against obesity, an inspiration to a country in which over two-thirds of the adult population are overweight and which has such a strong car culture. At the very least they have made their hometown a more pleasant place to live—so important given the struggle between cities for jobs and young professionals. Yet the key question is whether even such valiant and wide-ranging efforts can dent such a huge health problem, one needlessly killing so many people on the planet. After all, one Lancet study looking into three decades of global obesity found that not one of the 188 nations studied had managed to turn the tide on this crisis, which grows worse by the day.
There are signs of success, although Cornett is not making big claims. “All I will say is that my impression is we are going in the right direction.” He is skeptical about data on obesity, but health indicators seem to back him up. In the lowest-income areas, which have the highest rates of diabetes and blood-pressure problems along with the worst outcomes, key indicators have been cut by between 2 and 10 percent in five years. Although Oklahoma men live almost six years less than the national average, the city has seen a 3 percent fall in mortality rates. Yet for all this, the rise in obesity has only slowed, from 6 percent a year to 1 percent; it is sadly still increasing.
No wonder many experts compare this struggle to the anti-smoking movement, which took several decades of campaigning, education and regulation to change societal behavior. This was underlined to me the night before leaving Oklahoma City as I ate in a restaurant recommended by Cornett’s office. After a superb plate of pasta, I was offered dessert and chose a “roasted-pecan ice-cream ball ... smothered in chocolate sauce.” The waiter said that was a good choice, then asked if I wanted it “volleyball, softball, or baseball sized.” I went for the smallest; it was delicious and absurdly filling. But a posh restaurant offering volleyball-sized portions of ice cream? As Cornett says, it is hard to change habits in the battle against obesity.
Those pondering the longevity of their relationships can rely on something other than the opinions of friends—they can look at their credit scores.
A new working paper published by the U.S. Federal Reserve Board finds that the higher someone’s credit score is, the higher his or her chances of a lasting relationship will be.
A trio of economists parsed data from the Fed’s consumer-credit panel to identify the credit scores of couples in committed relationships. People tend to form committed relationships with people whose credit scores are in the same range, the study found. And couples with high credit scores tend to stay together longer.
The credit scores were measured by Equifax, and indicate individuals’ creditworthiness on a scale from 280 to 850. The economists were able to track the relationships and credit scores on a quarterly basis. They identified “committed relationships” by noticing when two individuals who previously had not shared addresses began to do so, and continued living together for at least a year. The researchers said they applied a few other unspecified restrictions to ensure that most of the couples identified were indeed committed partners—though they note that they couldn’t distinguish between married and non-married or “cohabiting” ones, nor did they much care about this distinction.
“We are interested in the implications of credit scores and the associated match quality in a general swath of committed relationships, not just the couples who are legally married,” the researchers wrote, adding that cohabiting couples increasingly “share many household economic and financial responsibilities in a way similar to married couples.”
For every additional 100 points or so in a couple’s average credit score at the beginning of their relationship, their odds of separating during the second year of the relationship drop by 30 percent, the researchers found. Also, if the difference between a couple’s individual credit scores is greater than 66 points at the start of the relationship, the couple is 24 percent more likely to split up within the second, third, or fourth year of the relationship. The study also noted that a pair’s credit scores are likely to converge slightly over the course of a relationship.
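The arithmetic of those two findings can be sketched in a short, purely illustrative snippet. The 30 percent and 24 percent figures come from the study as summarized above; the baseline score, the multiplicative combination of the two effects, and the function itself are assumptions for illustration, not the researchers’ actual model.

```python
# Illustrative only: a toy version of the relationships reported in the
# Fed working paper. The 30%-per-100-points and 24%-per-66-point-gap
# figures come from the study; the 650 baseline and the multiplicative
# combination of the two effects are assumptions for this sketch.

def relative_separation_odds(avg_score, score_gap, baseline_avg=650):
    """Odds of separating in year two, relative to a couple with the
    baseline average score and closely matched individual scores."""
    # Each extra 100 points of average score cuts the odds by 30 percent.
    avg_effect = 0.70 ** ((avg_score - baseline_avg) / 100)
    # A gap of more than 66 points raises the odds of a split by 24 percent.
    gap_effect = 1.24 if score_gap > 66 else 1.0
    return avg_effect * gap_effect

# A couple averaging 750 with closely matched scores:
print(relative_separation_odds(750, 10))   # → 0.7
# A couple averaging 650 with a 100-point gap between their scores:
print(relative_separation_odds(650, 100))  # → 1.24
```

Under these assumptions, a high-scoring, well-matched couple faces roughly 30 percent lower odds of an early split than the baseline pair, while a mismatched couple at the baseline average faces 24 percent higher odds.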
The link between credit scores and relationship longevity probably has to do with creditworthiness being a proxy for “an individual’s general trustworthiness and commitment to non-debt obligations,” the study notes. Those characteristics affect all sorts of things involved in sharing a household—who takes out the trash, for example, and who’s more likely to forget a birthday or anniversary.
Interviews with more than 50 people by The New York Times in 2012 revealed similar views; some had discussed credit scores on first dates, and others had found dates on websites such as datemycreditscore.com. And a Citigroup survey conducted in 2015 found that 78 percent of Americans would rather have a financially savvy partner than a physically attractive one.
When Joss Whedon’s classic show Buffy the Vampire Slayer went off the air in 2003, its cult status was still very much nascent. Cue the novels, comics, video games, and spinoffs, not to mention fan sites, fan fiction, conventions, and inclusion on scores of “Best TV Shows of All Time” lists. But while it remains good fun to watch a seemingly ditzy teenager and her friends fight the forces of darkness with super-strength, magic, and witty banter, the show’s seven seasons have also become the subject of critical inquiry from a more intellectually rigorous fanbase: academics.
Buffy, along with critically acclaimed series like The X-Files and Twin Peaks, came before The Sopranos and the beginning of the Golden Age of Television, but helped pave the way for scholars to treat television shows like The Wire, Mad Men, and Breaking Bad as sprawling works of art to be dissected and analyzed alongside the greatest works of literature. Academics have found Whedon’s cult classic to be particularly multi-dimensional—trading heavily on allegory, myth, and cultural references—while combining an inventive narrative structure with dynamic characters and social commentary.
Douglas Kellner, a professor at UCLA, has written that popular television does a particularly good job of expressing the subconscious fears and fantasies of a society, and that Buffy is an especially useful example. The show’s fantastical elements, he said, provide “access to social problems and issues and hopes and anxieties that are often not articulated in more ‘realist’ cultural forms,” like cop shows or sitcoms. But even popular dramas with similar surface-level conceits like Teen Wolf and The Vampire Diaries, which focus mostly on soap-opera romance and teen issues, lack Buffy’s allegorical elements, which elevate the show and make it fascinating for scholars to study.
In Buffy, monsters act as physical stand-ins for societal differences and threats: Vampires symbolize sexual predators, werewolves represent bodily forces out of control, and witches tap into tropes about how female power and sexuality are seen as threatening. By fighting the “Big Bad,” Buffy and her friends fight the monsters everyone faces—oppressive authority figures, meaningless rules, confining social norms, sexual awakening, loneliness, redemption—in other words, the terrors of growing up and finding one’s way in the world.
“Whedon seems to be an almost inexhaustible source,” said David Lavery, an English professor at Middle Tennessee State University who teaches courses on Mad Men, Doctor Who, and Lost as well as Buffy, and co-founded the Whedon Studies Association, an academic organization devoted to analyzing the works of the eponymous writer, producer, and director. “There’s the complexity, intertextuality, authenticity of his stories that makes them so rich for study. If he keeps making stuff for the next 10 years, I think Whedon studies will be going on for quite a long time.”
Even though it helped set the stage for prestige shows like Mad Men to be studied in an academic context, Buffy lacks some of the gravitas those series have. The New Yorker critic Emily Nussbaum has lamented that Buffy doesn’t look the way “worthy” television should look, which has made it difficult for her to convince friends and peers of its quality. (In early seasons, she noted, “the werewolf costume looked like it was my great-aunt Ida’s coat.”) Still, Buffy’s sometimes Doctor Who-esque campiness has itself merited critical essays. Meanwhile, other scholars have unpacked the complex relationship Joss Whedon has to his universes, examining him as an auteur on par with show creators such as Vince Gilligan, Matthew Weiner, and Shonda Rhimes.
Beyond Buffy, the field of popular-culture studies is rising in universities across the country. Students are critiquing Madonna, Jay-Z, and Harry Potter, as well as The Sopranos, The Wire, and Lost. These scholars—many of whom are fans of the works they study—sometimes brush up against an academic culture that looks down upon their texts of choice, despite television’s formal and thematic similarities to other well-established areas of study.
But throughout history, yesterday’s lowbrow is often tomorrow’s cultural classic. Rhonda Wilcox, who also co-founded the Whedon Studies Association, frequently compares the episodic format of television to the 19th-century serialization of novels, like those of Charles Dickens. Dickens, like Shakespeare, was once considered “pop culture” and thus unworthy of study by closed-minded academics who maintained that epic poetry was the most legitimate text. Literary studies and film studies as they’re known today both underwent battles for legitimacy similar to the one television studies is currently facing. “I think that we’re slowly getting people to recognize that television studies needs to be taken seriously. It’s a general prejudice because it’s fun,” Wilcox says.
Perhaps unsurprisingly, Whedon himself supports the rise of the discipline. In an interview with The New York Times in 2003, he said, “I think it’s always important for academics to study popular culture, even if the thing they are studying is idiotic. If it’s successful or made a dent in culture, then it is worthy of study to find out why.”