I love words, me. They’re so expressive and weird and can do so much. I think of people who are very good at word-balling as like conductors mixed with magicians. They marshal all these weird squiggles into a specific order that makes other people spontaneously feel real feelings, in whatever order they want. Madness! And the origins and cross-pollinations of language, especially one as terrible as English (consider the words ‘content’ and ‘content’, for example), are so tangled that you can study words specifically, which is called etymology.
For example, did you know that avocados and orchids are both basically big swinging balls? Avocado is a corruption of the Aztec word for testicle, and orchid comes from the Ancient Greek for same, orchis. This once again goes to show that humans have evolved very little. But we have at least evolved to the point that we can make video games with an etymological flair. That gold standard segue leads me to Heaven’s Vault.
Have You Played? is an endless stream of game retrospectives. One a day, every day, perhaps for all time.
No, not that one. This one. Confusingly known in North America as Need For Speed III: Hot Pursuit, this 1998 racer was a steel-blue feast of dropdown menus infested with cars. Click on one of these three-dimensional motorbeasts to hear the silky voice of every US infomercial telling you about horsepower and gear boxes and how the Alfa Romeo Scighera “embraces dynamic new design influences.” Oh yes, that’s the good stuff. That’s some good cars right there.
It’s very difficult to pick a favourite moment out of the unsightly behemoth that was Might & Magic VI: The Mandate of Heaven. It may have been when I went on a murder-hobo rampage through the starting town of New Sorpigal, killing every peasant, every guard, every trader; and then heading to the temple and donating gold until my reputation had improved from “Notorious” back to “Respectable”.
Or it may have been that fateful moment where, due to a simple misclick, I accidentally mixed one potion with another in my inventory and caused a chemical explosion that instantly killed my entire party.
One evening last summer, Chike Ukaegbu, a 35-year-old New York
tech entrepreneur, called his uncle, Augustine Akalonu: “Are you sitting down?”
Ukaegbu asked. Dr. Akalonu was sitting down; he was driving home from his
pediatrics practice in the Bronx. Ukaegbu had just been named among the global
100 most influential people of African descent under forty. He had been
speaking at conferences around the country and running a lauded tech
accelerator in New York. But he wasn’t calling about any of that.
“I’m going to run for president,” Ukaegbu said, a
characteristic note of mischief in his voice.
Dr. Akalonu, a jovial man in his mid-sixties with an easy,
unhurried manner, said, “Great. President of what?”
“Of Nigeria,” replied Ukaegbu.
“I nearly had a car accident,” Dr. Akalonu told me a few months
later in November, at a fundraiser he hosted in Nyack, New York. Ukaegbu had
become the youngest presidential candidate in Nigerian history.
“Gerontocracy.” That’s the word critics use to describe
Nigerian politics. The country has a population of almost 200 million—the
largest on the continent, and the youngest: Sixty percent of Nigerians are
under age 30; only six percent are older than 60. But government leadership
positions are overwhelmingly filled by the aged. This disparity illustrates a
broader African trend. In 2017 researchers estimated
the median age of the continent’s population was 20, while the average age of a head of
state was 62. In May 2018, however, a month after angering Nigerian youth with disparaging
comments at an event in London, President Muhammadu Buhari, 75, signed a
law lowering the minimum age for presidential candidates from 40 to 35.
Ukaegbu came to the U.S. in 2002 at the age of 19 to study
biomedical engineering at the City College of New York. That’s where I first
encountered him, although we were not friends and moved in different circles.
“I don’t remember all students who participated in our program, but I do
remember him,” Nora Heaphy, then a program director for the college’s Colin
Powell Fellows in Leadership and Public Service, told me. “He was extremely
charismatic, very open to learning, very engaging.” While many fellows went to
entry-level jobs at D.C. think tanks upon graduation, she said, Ukaegbu stayed
in New York. In 2010, he and a friend he had met at a church concert, Kevaughn
Isaacs, launched a non-profit called Re:LIFE, training disconnected
youth in Harlem and Washington Heights to develop business plans and helping them
complete their education. Ukaegbu himself enrolled in distance learning courses
in business management and venture capital at Cornell, the University of
Pennsylvania, and Stanford, as his plans expanded.
In 2013, Re:LIFE ran its first startup class requiring each
trainee to launch a business by the program’s end, and Ukaegbu began noticing what he saw as bias. “I saw several brilliant founders who were not getting
funded,” he told me. “And I heard several bullshit stories from investors
explaining why they’re not funding these people.” In 2015, he founded Startup52, a Manhattan-based
“start-up accelerator” aiming to offer young entrepreneurs from
underrepresented demographics intensive coaching in polishing their business
plans and investor pitches. Two years later, the accolades for Re:LIFE and
Startup52 helped him receive a green card through the United States’ National
Interest Waiver category.
“This is the new Nigerian dream,” Damilare Ogunleye, Ukaegbu’s 33-year-old
Lagos-based campaign director, told me when we met in Lagos in January. “To
leave, embed in the system in the U.S., U.K., or wherever, and skip all the
problems of Nigeria.”
Nigerian politics, with its volatile mix of money, violence,
and humor, is often surreal. Out of more than 70 candidates who initially declared intent to stand in the presidential election on February 16, 2019, only two have emerged as frontrunners. One is
current president Buhari, who, due to months-long medical leaves in the U.K.,
has had to publicly deny rumors that he has died and been replaced
by his double, a hypothetical Sudanese named Jubril. “It’s the real me,” Buhari told
an audience of Nigerian expatriates at a U.N. climate summit in Poland in
December. Despite Transparency International placing Nigeria 148th out of
180 countries on a corruption index last
year, with scores virtually unchanged during a four-year presidency premised on
fighting corruption, Buhari has titled his reelection manifesto Next
Level, unironically vowing even greater progress.
Buhari’s chief rival is Atiku Abubakar, a 72-year-old
businessman who has been unable to travel to the U.S. since 2005, when an
American investigation implicated him in a transatlantic bribery case involving
an FBI sting, a telecommunications deal, and $90,000 stashed in a Louisiana
congressman’s freezer. Abubakar finally managed a visit to Washington, D.C.,
helped by Brian Ballard, a lobbyist with extensive links to the
Trump administration. In Nigeria, Abubakar has faced numerous corruption allegations.
“The surest way to riches and power,” former U.S. ambassador
John Campbell and Chatham House fellow Matthew Page wrote
of Nigeria in 2018, “is through elected office and the opportunities for
kleptocratic state capture that it offers.” Successful candidates today are sponsored
by prior office holders who have amassed great wealth from public funds,
bankrolling party and campaign costs in exchange for a continued cut of national
treasure once the sponsored candidate assumes office. Political parties resemble
businesses, low on ideology or policy objectives, funded top-down by the godfather,
the top office holder, or, during campaigns, by the candidate for highest office.
Ukaegbu’s eccentric campaign began with a chance encounter at a
Johns Hopkins School of Advanced International Studies event last April. Ogunleye,
who worked on political campaigns while at a Nigerian communications firm from
2012 to 2015, was visiting with a delegation of Lagosian entrepreneurs—he
currently runs a T-shirt and mug customization website called Suvenia.com. The
two men bonded over a passion for tech innovation: Ukaegbu had been reaching
out to Nigerian officials and one presidential candidate with a technology
blueprint for the country, a platform politicians could adopt and promote, with
no response. “You should run,” Ogunleye told him.
In June, Ukaegbu began holding daily videoconferences from New
York with a core team: his older brother Chibueze, who runs a coding academy
and software firm in Aba; Ogunleye in Lagos; and a banker in Abuja. The four began
planning how to introduce Ukaegbu to voters, raise funds, and obtain a
political party nomination, as required by law. They found inspiration, Ogunleye
told me, in an Old
Testament anecdote in which four lepers, rather than starve in a
city besieged by the Syrian army, set out for the Syrian camp, hoping for
nourishment or swift death. When they arrive, they find it full of provisions
but abandoned: The Syrians heard the thunder of charging chariots and fled in
haste. “How do you with four lepers create the sound of chariots?” asked
Ogunleye, the sleeves of his suit pinned with card cufflinks featuring the ace
of diamonds and ace of spades. “Lead with the media.”
Nigerian news media didn’t bite, but through a contact in D.C.
Ukaegbu landed interviews on CNN and Al Jazeera.
“Everyone got interested,” said Ogunleye. “Who is this guy who is on all these
global platforms? What kind of backing does he have?”
In August, after 16 years in New York, Ukaegbu moved back to
Nigeria to launch his campaign. Other parties showed up to offer him office
nominations and cabinet positions with them—a standard tactic to neutralize political
opponents. During one meeting Ogunleye found particularly gratifying, a
consultant from Atiku Abubakar’s People’s Democratic Party, eager to
demonstrate that the unknown Ukaegbu stood no chance, tapped a bystander. “Do
you know this guy?” she asked. The man squinted. “Don’t force yourself,” said
the consultant. But the man kept looking at Ukaegbu. “Hey, aren’t you the guy
on CNN?” he said. “You’re good.”
At this point, the scrappy #Chike4Nigeria team included Re:LIFE
cofounder Isaacs, now a New York
high-school teacher of graphic design and photography, producing all campaign
graphics during early morning hours and school recess. Ukaegbu’s older brother,
Chibueze Ukaegbu Jr., CEO of a coding academy and software firm named
LearnFactory Nigeria, contributed personal funds to the campaign’s shoestring
budget and directed efforts of his eight full-time and four part-time staffers
to building websites, apps, and WhatsApp broadcasts. In New York, Ezinne
Kwubiri, head of inclusion and diversity at H&M, worked her contacts to
generate media coverage and organize fundraisers, in addition to helping
Ukaegbu manage his increasingly chaotic calendar.
By fall, which is when I came across the campaign on social
media, Ukaegbu had become the nominee of the new Advanced Allied Party (AAP).
AAP chose Safiya Ibrahim Ogoh, a woman from the country’s predominantly Muslim
north, as his running mate—balancing out Ukaegbu’s roots in the mostly Christian south.
Ukaegbu’s campaign message is that Nigeria stands at a pivotal
point in its trajectory—young, rich in resources, and full of ingenuity, but
also undereducated, plagued by conflict and corruption, and underdeveloped. Without
leaders committed to aggressive investment in education and technology, he and
Ogunleye believe, Nigerians will be crushed by the impending fourth industrial
revolution—a revolution Ogunleye perceived signs of in China while on an
Alibaba fellowship in 2017. “You think it’s bad now?” Ukaegbu asked a crowd in
Philadelphia in November. “Just wait for artificial intelligence,” he said.
“That will be real hunger games.”
While not all experts may share Ukaegbu’s and Ogunleye’s tech
focus, many would agree that Nigeria is in trouble. In a 2019 analysis, Eurasia
Group listed the Nigerian presidential race among this year’s top ten geopolitical risks,
thanks to three probable electoral outcomes: Buhari wins (“an elderly, infirm
leader who lacks the energy, creativity, or political savvy to move the needle
on Nigeria’s most intractable problems”), Abubakar wins (“another gerontocrat
who would focus on enriching himself and his cronies”) or no clear winner
emerges (“a dangerous wildcard”). Meanwhile, relentless demographic growth
continues. In a 2018 report on “Severely Off-Track Countries,” Brookings projects
that by 2030 the number of extremely poor in the nation may reach 160 million.
“Wake up!” says Ukaegbu in one of his campaign videos. “We’re
in a fight for our lives, and we don’t even know it.”
One searingly hot day in January, I watched Ukaegbu deliver this
message to a group of kids and teachers at a private school in Abuja, in the
shade of a mango tree. Ukaegbu introduced himself to each person and remembered
their names at question time. Despite his dire warnings, he radiated enthusiasm
for what he called “amazing Nigeria.” Ben Dike, a teacher at the school who
later invited Ukaegbu to teach a fifth-grade math lesson, asked, “How do you
intend to compete with these gladiators?”
Realistically, Ukaegbu doesn’t stand a chance in the upcoming
election. (“Who’s that?” asked my driver in Lagos when I told him whom I’d come
to cover—a dampening response I had heard more than once.) It doesn’t seem to
bother him. Ukaegbu “doesn’t see ceilings,” his cousin Ezemdi Akalonu, a junior
at Brown University, told me. His family knows him as a 17-year-old who ran
away from home to attend Lagos University instead of the local college
prescribed by his parents—enraging his mother, a school principal, and his
father, a civil servant and entrepreneur—before looking further afield, to the
United States. Now, he’s the local boy made good: His high school in Aba invites
him back to speak whenever he visits.
Following the speech at the Abuja school, Ukaegbu and his
brother Chibueze decamped to a spartan mezzanine room in a mostly vacant office
building. An enormous desk in the otherwise empty, low-ceilinged space, and a bizarrely
miniature bathroom gave the room an Alice-in-Wonderland vibe. Chibueze peered
into his laptop, working on a website that would allow voters to find
young candidates by district. Chike monitored some of the two dozen WhatsApp
channels that he said bombard him with “five thousand messages a day.” The
lights went off, as frequently happens, until a generator kicked in, and a fan
resumed noisily circulating tropical air.
“We’re on the list,” Ukaegbu said, scanning a copy of the roster
of presidential candidates, just released by Nigeria’s Independent
National Electoral Commission. It was a relief—backroom intrigue and string-pulling,
whether from within his own party or elsewhere, could no longer prevent his
name from appearing on the ballot.
Despite the early buzz, Ukaegbu’s run has struggled to attract
donors. The party, which expected Ukaegbu to bring in the money, is not happy
about it. The #Chike4Nigeria team predicted enthusiastic donations from the highly
educated diaspora, which remitted home $25 billion in 2018, World
Bank data show. According to numbers
published by the Center for Social Justice, one tenth of one percent of that
sum would match the total for media expenditures made in the 2015 campaign by
PDP, the biggest spender and, as the then-governing party with a sitting
president, an entity with unrivaled, direct, and unscrutinized access to the
country’s oil revenue. But the money hasn’t materialized.
The team has also encountered skepticism from the very Nigerian
demographic it hoped would welcome Ukaegbu’s bid: the 18-to-35-year-old voters
who make up more than half the electorate. Instead, the most excited supporters
have often been retirees, people in their sixties and older.
It’s hard to know how much of that is about the Ukaegbu
campaign, and how much of it is about the realities of Nigerian politics. When
I asked Matthew Page whether he could imagine an outsider, even a
well-resourced one, mounting a successful presidential bid without established
patronage-client relationships, he seemed unconvinced. “I think we’re a few
decades away from that,” he said.
As I was boarding a flight back to New York in late January,
Ukaegbu called me. He sounded tired. For days he had been trying to connect me
with his vice-presidential pick, but she kept demurring. Two nights before, at
a tense party meeting in a Stygian room in Abuja, I had watched him square off against members
who wanted to declare support for Buhari and catch the last trickle of cash
from the dwindling stream of the ruling party’s electoral funds. I asked
whether he felt discouraged. He said he did not. “It’s certainly been an
education,” he told me on the call. “It’s been like a PhD in Nigerian politics.”
In February 2003, Al Jazeera broadcast an interview with Donald Rumsfeld, then George W. Bush’s secretary of defense. “Would it worry you,” the interviewer asked him, “if you go by force into Iraq that this might create the impression that the United States is becoming an imperial, colonial power?” Rumsfeld swatted away the question. “I’m sure that some people would say that,” he responded, “but it can’t be true because we’re not a colonial power. We’ve never been a colonial power.… That’s just not what the United States does. We never have and we never will.” In March, the United States invaded Iraq. By April, it was operating a government of occupation. By May, it had effectively placed a proconsul in charge of the country.
In the early years of the Iraq war, the idea of the United States as an imperial power was, for a moment, a subject of serious debate. Longstanding left-wing critics of empire like Noam Chomsky were now joined by conservative hawks such as Niall Ferguson in agreeing that the United States was an empire, though they differed deeply on whether this was a good thing. But both Rumsfeld and the journalist questioning him exhibited a kind of historical amnesia. Rumsfeld denies the possibility that the United States could ever be an empire; the journalist asks if it is in the process of becoming one. But what if it had been all along?
That is the question Daniel Immerwahr pursues in How to Hide an Empire: A History of the Greater United States. “One of the truly distinctive features of the United States’ empire,” he observes, “is how persistently ignored it has been.” In order to address this historical amnesia, we must, he argues, consider the United States not as it is typically represented on the map—as the mainland United States with corners in Washington state, Maine, Florida, and Southern California—but as a collection of all of the territories in which the United States has exercised sovereignty. This “Greater United States” includes not only Puerto Rico, whose colonial status is at least widely recognized if not deeply considered, but also other territories ranging from rocks covered in bird excrement to the approximately 800 military bases that the United States still operates around the world. (Britain and France have 13 bases combined; Russia has nine.)
The book is written in 22 brisk chapters, full of lively characters, dollops of humor, and surprising facts. (Did you know that the U.S. greenback, for example, is modeled on the Philippine colonial currency, and not the other way around?) It entertains and means to do so. But its purpose is quite serious: to shift the way that people think about American history. Americans tend to see their country as a nation-state, not as an imperial power. As such, its global reach and influence are often invisible to its own citizens. So are the complex reactions that its actions around the world produce. Without an understanding of empire, Americans may see events elsewhere—a caravan of Central American migrants heading through Mexico, let’s say—as foreign threats, rather than as a consequence of the global distribution of power and violence, as something shaped by the history and politics of the United States.
Histories of U.S. empire often start in 1898 with the Spanish-American War, out of which the United States took control of Puerto Rico and the Philippines. But nearly since its founding, the United States had been in the process of expanding and debating how it would expand from 13 Eastern states across the continent. Early national elites were divided on how to apportion land for Native Americans and maintain peace between them and European settlers. Daniel Boone, for years taught to schoolchildren as a “pioneer,” was in his own time a criminal, who, by taking white settlers westward into Native American lands, risked involving the U.S. government in conflict. Thomas Jefferson’s original idea for the Louisiana Purchase was that it would primarily provide access to southern ports, and that much of the remaining land would be kept for Native Americans, along with free black people, Catholics, and others he judged unfit for citizenship.
But over the course of the nineteenth century, opportunities for profit pushed individuals and governments to seize more and more land. The white settlers’ beliefs about racial difference served both to justify this expansion and to limit it. New states applied for admission to the union: some successfully; others—like Lincoln, West Dakota, Deseret, Cimarron, Montezuma, and the majority-Indian Sequoyah—were rejected. After the U.S.-Mexico War of 1846 to 1848, the United States acquired not the populous central and southern zones but only the relatively sparsely populated northern territories of Mexico, partly because of racism. Representatives who opposed expansion for moral reasons were joined by those who opposed trying to incorporate large numbers of people they considered racially unfit for democratic government. Those new territories would become states decades later only when white rule could be assured.
But while racist thought shaped the country’s enlargement, commerce was never far from the discussion either. One particularly strong chapter in How to Hide an Empire deals with the “guano islands” scattered throughout the Pacific and the Caribbean. By the 1850s, guano had become a highly prized commodity. Intensive, industrialized agriculture required supplies of nitrogen fertilizer, which could be made from bird excrement. To this end, an 1856 law allowed any American citizen to “take peaceable possession” of any previously unclaimed island where they discovered guano deposits. In that simple manner, the territory would come to belong to the United States.
While territories that bordered the nation could eventually expect full statehood, the guano islands resembled traditional overseas colonies. Because the islands were unpopulated—apart from the birds—the workers who would toil among mountains of guano often had to be tricked and coerced into going there. In some places, business owners employed Native Hawaiians; in others they exploited African Americans, who labored in conditions resembling convict camps. An 1889 uprising by workers on Navassa Island, off the coast of Haiti, led to the deaths of five whites; President Benjamin Harrison commuted the death sentences of its leaders after an investigation revealed that the workers had no recourse to any government official. This position nevertheless reinforced the notion that the islands were part of the United States.
The years after 1898 usually are seen as a rupture in the history of U.S. empire, as the United States acquired more substantial overseas colonies. But it wasn’t the fact of expansion that was new—that had been ongoing throughout the nineteenth century. Nor was it the conflict with foreign nations and the conquest of territory—Native American communities were foreign nations and had suffered violence and displacement as a result. Nor, for that matter, was it the acquisition of territory outside of the mainland—as the guano islands show. What was different was that the United States had acquired overseas territories with substantial native populations understood as nonwhite.
At first, the United States carried out its conquests of Puerto Rico, Cuba, and the Philippines in the name of liberation from Spanish tyranny. But hopes for genuine self-rule among the people in those former Spanish colonies soon met the reality of U.S. occupation. Cuba gained technical independence but with compromised sovereignty, as the United States insisted on the right to intervene in its affairs and frequently did so. In the Philippines, pro-independence forces led by Emilio Aguinaldo suffered a brutal counterinsurgent campaign, which cast doubt on the efficacy, not to mention the morality, of U.S. policy. Mark Twain, who became the most prominent anti-imperialist in the mainland United States, imagined colonial subjects thinking: “There must be two Americas: one that sets the captive free, and one that takes a once-captive’s new freedom away from him, and picks a quarrel with him with nothing to found it on; then kills him to get his land.”
As Immerwahr puts it, the United States faced a trilemma. It could have at most two of the following three things: republicanism, white supremacy, or overseas expansion. It was republicanism that lost out. In a 1901 ruling, the Supreme Court established that the Constitution did not fully apply to overseas territories. Their residents did not have the same rights as mainland Americans. Immerwahr details the many consequences of these legal gray zones: counterinsurgency in the Philippines followed by colonial government; medical experimentation and sterilizations in Puerto Rico, and an economy arranged around exploiting its status as a tax haven; and Hawaii turned over to the military after Pearl Harbor. By the end of World War II, Immerwahr reckons, there were more people living in colonies and under U.S. occupation—including in Japan and the U.S. zones of Germany—than in the mainland.
In an era of airplanes and wireless communication, territory was not required for dominance; indeed, it could be a source of friction.
This was a temporary condition. The third era in the history of U.S. empire, from the postwar period to the present, saw a retreat from formal occupation. The Philippines obtained independence in 1946; Alaska and Hawaii got full incorporation; Puerto Rico, Guam, and Samoa got civilian rule. But the United States was not necessarily acting out of altruism. The struggles of colonized people for independence, anti-Filipino sentiment in California, and America’s desire to claim the mantle of freedom in the Cold War conflict with the Soviet Union were all part of the picture. The United States was also strategically shedding the undesirable parts of its power: the obligation to maintain order, to support colonial administrations, and, from the racist point of view of many white Americans, the colonial ties that facilitated migration to the United States.
Instead, it now kept an empire of bases: signing a 99-year lease for land in the Philippines, keeping control of Vieques in Puerto Rico, Guantánamo in Cuba, Okinawa in Japan, and many hundreds of other locations. In an era of airplanes and wireless communication, territory was not required for dominance; indeed, it could be a source of friction. Immerwahr, borrowing a term from historian Bill Rankin, calls this new geography of power the “pointillist empire,” and others have called it an “archipelago.”
Immerwahr’s book will arrive as perhaps the most hotly anticipated release in American diplomatic history in some time, in part because of an unusual professional dispute: He debuted the arguments of this book in early 2016 in the Society for Historians of American Foreign Relations’ prestigious Bernath lecture; the text of his speech was published, as is customary, in the field’s leading journal, Diplomatic History. So far, so normal. What set tongues wagging was Diplomatic History’s decision to publish, on the eve of the appearance of Immerwahr’s book, an article-length rebuttal to the Bernath lecture, written by Paul Kramer, a historian of the Philippines, and entitled “How Not to Write the History of Empire.” Since its appearance in September last year, the Immerwahr-Kramer affair has been one of the first subjects of conversation among historians in this field.
For an article in an academic publication, Kramer’s piece made an astonishingly personal attack. It accuses Immerwahr’s essay of “reflecting deep historical currents of nationalist arrogance and short-sightedness.” Kramer says that in Immerwahr’s account, the people of the colonies rarely appear and only count when they are part of empire; that the colonies matter to him only for the way they affect the history of the United States. Kramer also objects to Immerwahr’s claim that mainstream American history has not accounted for the imperial role of the United States. Who, Kramer asks, counts as mainstream? His article includes an eight-page bibliographic appendix, listing books and dissertations on the subject of American colonialism.
If How to Hide an Empire resembled the work described by Kramer’s critique, it would indeed be a problem. Yet as How to Hide an Empire makes clear, Immerwahr’s argument is mostly intended for a mass audience, who won’t likely be familiar with the large body of work on this subject. Its goal is to convince a reading public of the centrality of empire to U.S. history. To do this, it does anchor parts of the argument with familiar figures; it is written from the mainland out, rather than the colonies in. Making the opposite choice would also yield an interesting and valuable book, but it would be a different book. “The history of the United States is the history of empire” is the line that closes How to Hide an Empire, and that is the point. On this, Kramer and Immerwahr would surely agree, in spite of the sparks.
Kramer’s strongest argument is that writing a history of U.S. empire as a history of territory leaves out a great deal, since “most expressions of American global power in the twentieth century” do not involve the conquest of new lands. Generations of American political leaders and government officials have sought and successfully developed more informal mechanisms of control, from economic pressure to CIA intervention. Parts of How to Hide an Empire bear out this critique. Writing of the 1960s, by which time Alaska and Hawaii had become states and the only colonies that remained were the Virgin Islands, American Samoa, Guam and other parts of Micronesia, Immerwahr asks, “Where had [America’s] imperialist spirit gone?” But in a decade that saw major escalation of the war in Vietnam, the U.S. invasion of the Dominican Republic, and America’s encouragement of political violence in Brazil and Indonesia, it’s not hard to find the “imperialist spirit” at work.
Immerwahr’s response is that thinking about territory, and not just informal mechanisms of control, can lead to important insights. (This approach also has the potential to reach people who are not convinced that indirect control or influence is worth describing as “empire.”) Immerwahr explains, for example, that the United States was willing to retreat from formal colonial possession when it could substitute the gains of empire at home, whether through natural resources or new technology. Indeed, one reason why the United States did not acquire major overseas colonies in tropical zones, while other European powers did, is that the country is already so large that it passes through multiple climatic zones, could produce products like sugar domestically, and possessed abundant resources of strategic products such as oil. As the twentieth century moved onward, synthetic chemicals further reduced the need for access to tropical markets. World War II saw acute shortages of rubber, for example, which the United States had largely obtained from European colonies, now seized by Japan. FDR imposed a national speed limit of 35 miles per hour to save tires; but by the end of the war, synthetic rubber had solved the problem.
The size of the U.S. economy in the postwar era—it accounted for half of global economic output—allowed it to spread international standards and practices (the octagonal red stop sign is nearly universal), spread the use of English, and shape the global economy, all but guaranteeing the United States access to the resources it needed. The possibilities of chemical substitution had dramatically lessened fears that the country would run out of resources: Fibers could be replaced by nylon or polyester, and plastic proliferated throughout postwar consumer society. The major exception was oil, as U.S. demand began to exceed supply. It is a telling exception, for it is precisely for access to the lifeblood of the global economy that the United States has repeatedly been willing to transgress international law. Facing the oil crises of the 1970s, Henry Kissinger mused that the United States “may have to take some oil fields.” “I’m not saying we have to take over Saudi Arabia,” he remarked at a National Security Council meeting in January 1975. “How about Abu Dhabi, or Libya?”
Immerwahr convincingly argues that the United States looks less like an empire than its European counterparts did not because U.S. policy maintains any inherent commitment to anti-imperialism, but because its empire is disguised first as continuous territory and later by the development of substitutes for formal territorial control. The United States “replaced colonies with chemistry,” and partially “substituted technology for territory.” It is a powerful and illuminating economic argument. To this, I think it must be added that non-territorial mechanisms of control, from CIA interventions to the policies of the International Monetary Fund, were essential to building and maintaining U.S. power in the twentieth century. That power can’t be understood without the Marshall Plan, or America’s support for the removal of Prime Minister Mohammed Mossadegh in Iran in 1953 and of President Salvador Allende in Chile in 1973. These get little attention in a book with a focus on formal territory. The pointillist strategy is part of hiding an empire, but it is not the whole story.
Nevertheless, the book succeeds in its core goal: to recast American history as a history of the “Greater United States.” Immerwahr’s final chapter, for example, centers on the bin Laden family. Yemen-born Mohamed bin Laden became the Saudi government’s preferred builder in the 1950s and worked on many classified projects for the U.S. military before his death in a plane crash in 1967. One of his 54 children, Osama, embraced a radically anti-Western interpretation of Islam. His primary grievance, among a litany of complaints about Western culture and behavior, was the presence of U.S. troops on bases in Saudi Arabia, reopened after the first Gulf War. From Afghanistan, he planned the attacks of September 11, 2001, which killed thousands and baited the Bush administration into a conceptually endless War on Terror.
Two of the things that have characterized the War on Terror have been the use of torture, carried out in the legal gray zones of Guantánamo Bay and military bases throughout the world, and the use of drones launched from a similar list of locations. It has been a war both provoked by America’s empire of bases, and fought from them. It is typical of U.S. imperial history that someone like Donald Rumsfeld could deny the very existence of empire while authorizing brutality that depends on it. Immerwahr’s book deserves a wide audience, and it should find one. In making the contours of past power more visible, How to Hide an Empire may help make it possible to imagine future alternatives.
*This article has been updated to correct the date of the Navassa uprising.
I don't know why I want to buy this considering I thought Sunless Sea was merely "ok"
The word ‘lonely’ comes up often when discussing Sunless Skies, which seems like an odd thing to say about a game in which you haul yourself around the stars in the company of up to two-dozen crew members. But that’s the tone of Failbetter’s twisted sci-fi Victoriana roguelike: feeling desperately alone and vulnerable, in a desperately large and lethal place. Those crew? They’re all nameless, faceless, hired only to die on your dime. In Sunless Skies’ merciless vacuum, care is a luxury you cannot afford.
On Monday, a few hours before Donald Trump called for “a total and complete shutdown of Muslims entering the United States,” Ted Cruz was asked whether he expects Trump to come after him, now that one leading poll has the Texas senator ahead in the coveted early voting state of Iowa. “Listen, I like and respect Donald Trump,” said Cruz. “I continue to like and respect Donald Trump. While other candidates in this race have gone out of their way to throw rocks at him, to insult him, I have consistently declined to do so, and I have no intention of changing that now.”
True to his word, Cruz refused to join the pack of Republican hopefuls who piled onto the front-runner’s latest obscenity. At a press conference the following morning to announce a Senate bill barring the resettlement of Syrian refugees, Cruz appeared alongside Texas Governor Greg Abbott and continued to dance around the question of Trump’s naked racism, at one point commending the Donald for “focusing the American people’s attention” on the urgency of fending off foreign invaders. Pressed for a direct response to Trump’s ban on Muslims, Cruz finally conceded, “I do not agree with [Trump’s] proposal. I do not think it is the right solution.”
The right solution, you may be surprised to learn, is Cruz’s solution, which he just happened to introduce in the Senate the morning after Trump belched out his own. The modestly titled “Terrorist Refugee Infiltration Prevention Act” would substitute Trump’s blanket, possibly unconstitutional ban with a more targeted—and, in certain senses, crueler—three-year moratorium on the resettlement of refugees from Syria, Iraq, Libya, Somalia, Yemen, and any other country determined to contain “terrorist-controlled territory.” Where Trump’s answer is typically lacking in nuance, Cruz’s bill is designed to “focus very directly on the threat.” He’s casting it as the principled, measured alternative to a vaguely defined problem that both candidates insist exists.
“This is not about the Islamic faith,” Cruz explained to NPR’s Steve Inskeep on Wednesday. “It is about Islamism, which is a very different thing.” The conservatives Cruz is courting don’t appear to recognize the distinction, and it would be naive to think that Cruz isn’t perfectly aware of that. According to a new Bloomberg poll, two-thirds of likely Republican voters support Trump’s indiscriminate prohibition; one-third say it makes them more inclined to vote for him.
If Cruz truly wanted to set his intentions apart from Trump’s, he could start by refuting the white-supremacist propaganda Trump has pointed to as evidence that “Muslim” is indeed synonymous with “terrorist sympathizer.” But Cruz, the champion debater and seasoned appellate attorney, is careful to present his disagreement with Trump as rooted in policy, not premise. “That is not my view of how we should approach it,” Cruz told NPR. He’s happy to let voters decide what the “it” is.
Trump’s precipitous descent into outright fascism is widely considered to be a problem for the GOP—and in some ways it is. But for Cruz, never a party loyalist to begin with, it’s also created a unique opportunity to channel the energies of racial anxiety into a comparatively palatable, mainstream campaign for the presidency. A number of commentators have noted that Cruz is positioning himself to consolidate Trump’s support in the eventual event of his collapse—which, we keep being told, will be arriving any day now.
But the net, and more dangerous, effect of Cruz’s strategy is to legitimize the racism that informs Trump’s. Two weeks ago, Cruz was on the extreme end of a national debate over admitting people fleeing the ravages of countries the United States has made war on. By allowing Trump to “effectively outbid” him in the wake of the San Bernardino massacre, as NPR’s Inskeep put it, Cruz has come out looking relatively moderate and responsible in an entirely new discussion about whether the basis of U.S. policy should be overt xenophobia or implied xenophobia.
The other candidates may recognize the dilemma posed by the stubborn popularity of Trump’s ravings, but no one has been as deliberate, or effective, in incorporating the strains of white nationalism into their own overarching strategy as Cruz has. He’s hewed closely—but, critically, not too closely—to Trump’s noxious line on immigration and refugees, which Cruz frequently ties together with warnings of an impending invasion from the south. “Border security is national security,” he said in a statement on Sunday prior to President Obama’s address about terrorism and the San Bernardino shootings. “I will shut down the broken immigration system that is letting jihadists into our country,” he reiterated later.
So far, Trump’s flamboyant nativism has drawn all the scrutiny, leaving Cruz to concentrate on raising money and building out his ground game. He knows better than to openly embrace the most jarring of Trump’s flourishes, but he won’t attack them, either—and when others do, Cruz is right there holding the flank. President Obama sounds like a “condescending schoolmarm lecturing the American people against Islamophobia,” Cruz told NPR’s Inskeep. At the last Republican debate, he invoked his Cuban-American heritage as a cover for the field’s more general shift in the direction of mass deportation and wall-building: “For those of us who believe people ought to come to this country legally, and we should enforce the law, we’re tired of being told it’s anti-immigrant. It’s offensive.” Two weeks later, campaigning in Iowa alongside Representative Steve “Cantaloupe Calves” King, perhaps the most aggressively ignorant anti-immigration crusader in Congress, Cruz assured reporters that “tone matters” when it comes to these issues.
Some have speculated that Trump’s latest step down the road to the internment camp is an attempt to fend off Cruz’s surging poll numbers. If so, he misunderstands the nature of Cruz’s maneuvering, as well as the depth of Cruz’s patience. With each reflexive lurch toward a darker, more explicitly ugly politics, Trump draws more attention to himself but also clears more ideological space for Cruz. Lindsey Graham, who’s polling somewhere ahead of Louis Farrakhan in the race for the Republican nomination, told the Guardian, “It’s time for Ted Cruz to quit hiding in the weeds and speak out against Donald Trump’s xenophobia and racial bigotry.”
But Ted Cruz likes it in the weeds just fine. He’s made it this far trudging through the muck, and there’s no reason for him to change course anytime soon.
The other two men in the photograph, despite presumably being police officers, are not identifiable at this time. Unlike normal police officers, they are not wearing name tags or badges with visible numbers on them. When police arrested the Washington Post's Wesley Lowery and the Huffington Post's Ryan Reilly, they weren't wearing badges or nametags either. Reasonable people can disagree about when, exactly, it's appropriate for cops to fire tear gas into crowds. But there's really no room for disagreement about when it's reasonable for officers of the law to take off their badges and start policing anonymously.
many cops operating in Ferguson are betting on impunity, and it seems to be a winning bet
There's only one reason to do this: to evade accountability for your actions.
Olson was released shortly after his arrest, as were Reilly and Lowery before him. Ryan Devereaux from The Intercept and Lukas Hermsmeier from the German tabloid Bild were likewise arrested last night and released without charges after an overnight stay in jail. In other words, they never should have been arrested in the first place. But nothing's being done to punish the mystery officers who did the arresting.
And what's particularly shocking about this form of evasion is how shallow it is. I can't identify the officers in that photograph. But the faces are clearly visible. The brass at the Ferguson Police Department, Saint Louis County Police Department, and Missouri Highway Patrol should be able to easily identify the two officers who improperly arrested photographers. By the same token, video taken at the Lowery and Reilly arrests should allow for the same to be done in that case.
Policing without a nametag can help you avoid accountability from the press or from citizens, but it can't possibly help you avoid accountability from the bosses.
on another level, it would almost be nicer to hear that nobody in charge thinks there's been any misconduct
For that you have to count on an atmosphere of utter impunity. It's a bet many cops operating in Ferguson are making, and it seems to be a winning bet.
In his statement today, President Obama observed that "there's no excuse for excessive force by police or any action that denies people the right to protest peacefully," seeking to tap into the widespread view that some instances of excessive force and denial of First Amendment rights have taken place. But Obama did not even vaguely hint that any officer of the law would or should face even the slightest sanction for this inexcusable behavior.
Statements from Governor Jay Nixon and Highway Patrol Captain Ron Johnson have suffered from the same problem. It is nice, of course, to hear that one's concerns are in some sense shared by the people in power.
But on another level, it would almost be nicer to hear that nobody in charge thinks there's been any misconduct. After all, a lack of police misconduct would be an excellent reason for a lack of any disciplinary action. What we have is something much scarier. Impunity. The sense that misconduct will occur and even be acknowledged without punishment. Of course there are some limits to impunity. Shoot an unarmed teenager in broad daylight in front of witnesses, and there'll be an investigation. But rough up a reporter in a McDonald's for no reason? Tear-gas an 8-year-old? Parade in front of the cameras with no badges on? No problem.
According to a Pew poll released earlier today, most white people have a good amount of confidence in the investigation into Michael Brown's death. They have the good sense, however, to at least admit to some misgivings about the handling of the protests.
What they ought to see is that the two are hardly so separable. The protests would not be handled so poorly if the officers doing the handling felt that they were accountable for their actions. And a policing culture that doesn't believe cops should be accountable for their actions is not a culture that lends itself to a credible investigation.
"Theirs is the nerd-dom of Star Wars, not Star Trek; of Mario Kart and not World of Warcraft; of the latest X-Men movie rather than the comics themselves." -- these guys know that star trek is/was written by a bunch of soggy lefties right?
The National Review recently published an odd, but interesting, essay by Charles W. Cooke about "America's nerd problem."
The article begins by accusing a number of writers, broadcasters, politicians and scientists (including myself, Matt Yglesias and Dylan Matthews; as well as Neil deGrasse Tyson and Al Gore) of being faux-nerds: "Theirs is the nerd-dom of Star Wars, not Star Trek; of Mario Kart and not World of Warcraft; of the latest X-Men movie rather than the comics themselves."
Yglesias defends himself against the scurrilous accusation that he's not a true Trekkie here. And anyone who doubts Matthews' nerd credentials has never met him. Still, I'll cop to some of the charges: I prefer Mario Kart to World of Warcraft and have little patience for either Star Trek or Star Wars. My knowledge of old X-Men comics, however, is embarrassingly complete (and let's not get started on X-Force).
It’s like a magazine, but it’s for nerds.
Nerd-offs aside, Cooke's essay, though putatively about progressivism, is an interesting window into the state of contemporary conservatism. The old conservative critique of nerds — or, to be more precise about it, technocrats and intellectuals — was that their approach to knowledge was fundamentally flawed.
"I would rather be governed by the first 2,000 people in the Boston telephone directory than by the 2,000 people on the faculty of Harvard University," William F. Buckley, the founder of The National Review, famously said. (I admit the data here is poor, but I would guess that the Boston telephone directory tilts towards Star Wars while the Harvard faculty favors Star Trek.)
Yuval Levin, one of conservatism's foremost thinkers, has argued that America's two political traditions are rooted in this debate.
He's written a book about the arguments between Edmund Burke and Thomas Paine, which he frames as a disagreement about what we can know in society, with Burke prizing "social knowledge" and Paine prizing "technical knowledge." Conservatism, he's argued, emerges from the Burkean tradition: it is skeptical of what the nerds can know and reverent of what the common man has learned — that's where you get Buckley's quote, for instance. Liberalism, he says, is just the reverse.
Cooke's essay is convoluted and expresses, at times, both pro- and anti-nerd sentiments. But the framing is clear, and it reflects an emergent trend in conservatism. Its argument isn't the classically conservative argument that the left is full of nerds and their ambitious, arrogant designs should be mistrusted; it's that the left is full of faux-nerds who lack scientific training but nevertheless wear glasses — and their ambitious, arrogant designs should be mistrusted. Or, to put it more simply, the problem isn't nerds so much as poseurs.
"Sorry, America," he concludes. "Science is important. But these are not the nerds you're looking for."
A version of this transition can be seen in the Republican Party's lurch from George W. Bush to Paul Ryan. The left's knock on Bush (and, before him, Reagan) was that he was dumb and inarticulate; the right's riposte was that the left prized the wrong kind of knowledge, and that Republicans were smart enough to know that ordinary Americans were a helluva lot wiser than Ivy League elitists. And the right was comfortable in that response: it was an argument they kept winning at the polls.
But after Bush's disastrous presidency and Obama's political successes, today's Republicans don't want to towel snap the Democratic Party's nerds. They want to out-nerd them. The party's standard-bearer, insofar as there is one, is Paul Ryan. He became the GOP's champion after video of him blasting President Obama with charts and graphs during the Blair House debate over health-care reform went viral. The National Review called it "a devastating critique" that proved "that the Democrats just don't have an answer to Ryan's arguments." (The answer to Ryan's arguments was that they were mostly wrong.) A few weeks later Real Clear Politics wrote that you could easily imagine Ryan at a Star Trek convention.
Ryan then became the Republican Party's vice presidential nominee on the strength of his unusually detailed budgets, which, again, were contrasted with Obama's faux-wonkery: "In an era that seemingly rewards shallow oratorical excellence over substance (see Obama, Barack Hussein), his political brilliance is the capacity to educate on a vision, run on a record of accomplishment, and — yes — stand on his feet and talk persuasively about both," enthused conservative economist Doug Holtz-Eakin at the Daily Caller.
Even the case for Ted Cruz gets made in terms of a nerd-off with Obama.
"Cruz went to Princeton University, where he was a national champion debater, and got a law degree from Harvard. Cruz's legal career was objectively more impressive than Obama's," wrote Jonah Goldberg at the National Review. "He clerked on the appellate court and for Chief Justice William Rehnquist on the Supreme Court. He held numerous prestigious jobs in and out of government. Like Obama, he taught law, but Cruz was also the solicitor general of Texas and argued before the Supreme Court nine times." His World of Warcraft guild could probably crush Obama's.
More relevantly, according to Forward he's very active in a number of US-based pro-Israel groups of various stripes:
But the foundation also has a focus on Israel advocacy. Klarman has been a board member of, and a major donor to, The Israel Project, a fast-growing pro-Israel advocacy group that seeks to provide information useful to working journalists. He gave the group nearly $4 million between 2008 and 2010.
The foundation has also given smaller amounts to the Middle East Media Research Group, an anti-Islamist research group whose board members include Elliott Abrams, a senior aide in several Republican administrations, and Steve Emerson, a researcher devoted to exposing ties, as he perceives them, between American Muslims and extremist Muslim movements. Klarman has also contributed to the Committee for Accuracy in Middle East Reporting, a group devoted to combating what it sees as anti-Israel bias in the media.
Klarman has also been the longtime chairman of The David Project, a Boston-based group mostly concerned with pro-Israel advocacy on campus. The group is also known for its long-running, and ultimately failed, effort to oppose the construction of a Boston mosque. Klarman said in an interview with the Forward that his interest in The David Project was in its campus work. The group has recently adopted a more moderate approach to campus activism.
Obviously the folks running the show at the Times of Israel have the good sense to recognize after the fact that open calls for genocide are not in keeping with an institutional mission to try to make Israel look good in the world.
"Hahahahaha! We're completely insulated from the consequences of the war on drugs."
When NBC's Meet the Press over the weekend held a roundtable about the New York Times Editorial Board's decision to endorse marijuana legalization, participants seemed to take the issue very lightly — regularly making jokes between a few serious policy points.
It was obvious where the conversation would go from the start, when host David Gregory mentioned marijuana and giggles went around the table. From that point, the jokes flowed. "I don't know what they've been smoking up there," said columnist David Brooks about the New York Times Editorial Board. Judy Woodruff of PBS Newshour said, "When I think of grass, I think of something to walk on. When I think of pot, I think of something to put a plant in."
there's a very serious disconnect about what marijuana legalization would mean for America
The chuckles are typical in conversations about marijuana policy. At one of his first town halls, President Barack Obama joked, "There was one question that was voted on that ranked fairly high: that was whether legalizing marijuana would improve the economy and job creation. And I don't know what this says about the online audience." In June, former President Bill Clinton asked, "Rocky Mountain high?" to chuckles before going into an answer that seemed to support state-based reform. Hillary Clinton got in some jokes about marijuana before answering a similar question at a CNN-hosted town hall in the spring.
In some cases, these quips help lighten a conversation about drugs that many Americans, especially parents, are simply uncomfortable with. But the jokes also reflect a problem in discussions about US drug policy: there's a very serious disconnect about what marijuana legalization means for Americans.
It's easy to joke about marijuana policy when the idea of legalization feels more like a new freedom, which might be the case for whiter and wealthier populations. As someone from a privileged background who socializes with people from similarly privileged backgrounds, my social circle's conversations about pot legalization largely revolve around how cool and liberating it might be to buy pot legally. What we rarely mention in these conversations: race, the criminal justice system, and the fear of getting arrested if someone were to buy pot illegally.
black people are 3.7 times more likely to get arrested for pot possession
But for minority, poorer populations, marijuana policy is much closer to a civil rights issue. Marijuana isn't just a drug that they would like to be able to use and carry out in the open. Marijuana criminalization has historically been used to harass and arrest people in minority and poor communities at hugely disproportionate rates.
Black and white people use pot at similar rates, but black people are much more likely to be arrested for it. (ACLU)
These racially disproportionate marijuana-related arrest rates persisted in some states even after decriminalization, under which criminal penalties are removed but the drug remains technically illegal.
New York, for instance, decriminalized marijuana in 1977, but as of 2012 had one of the highest arrest rates for pot possession. The problem: New York law allows arrests for marijuana that's within public view. Police officers in New York City regularly used this exception to arrest people, particularly minorities, by getting them to empty their pockets during stop-and-frisk searches and expose marijuana that would otherwise have remained hidden. (According to a report from the New York City Public Advocate's office, the vast majority of stop-and-frisk searches in 2012 — roughly 84 percent — involved black or Hispanic people.)
There is a legitimate debate to be had about whether these arrests focus on drug traffickers instead of users, highlight broader problems in the war on drugs and the criminal justice system, or signify higher crime rates among minority communities.
marijuana policy is simply no joking matter
But Meet the Press didn't even give that debate a chance. The roundtable instead focused on the health effects and whether legalization increases pot use. These are very important issues that need to be discussed, but they're also the kinds of issues more privileged Americans can focus on because they just don't see the skewed effects of criminalization in their everyday lives.
Just imagine, for example, if the couple of minutes the roundtable spent on jokes were instead spent discussing racially uneven drug policy enforcement. As Ryan Cooper of The Week points out, this would be much more valuable to Meet the Press' audience. Because for a large chunk of the US population, marijuana policy is simply no joking matter.
To learn more about marijuana legalization, read our full explainer and watch the video below:
If the fire-breathing dragon wasn't hint enough, one bite will prove this cake is packing heat. In her recently released cookbook, Sweet and Vicious: Baking with Attitude, Libbie Summers stirs hot pepper extract into a lightly spicy batter, and spikes the cream cheese frosting with spiced pecans. The fruitiness of the pepper works well with the carrot-heavy batter, further enhanced by traditional cinnamon, nutmeg, and ground cloves.
Everyone agrees the corporate income tax is broken, but meaningful changes to it never seem to happen. And despite a flurry of recent attention, action on inversions — reincorporating a business in a foreign country in order to take advantage of lower rates — seems to keep being punted into the future.
Obama railed against inversions this week, calling for "economic patriotism." But Senate Democrats say they don't think Congress can address the issue before August recess, as Bloomberg reported this week. And even then, agreement looks tough: Republicans want broader tax-code reform, and not all Democrats are behind Obama.
Corporations are shouldering far less of a tax burden than they used to. Corporate tax revenues have declined as a share of GDP over the years, but individual tax revenues have held steady, according to a 2013 GAO report.
Corporations account for a much smaller share of the tax revenue pie than they used to. In 1952, corporations accounted for 32.1 percent of federal revenue. As of 2013, it was less than 10 percent.
It's understandable why US corporations seek out inversions — the US has the highest nominal corporate tax rate among developed countries, with a 35 percent top federal rate and a 39.1 percent average combined rate. While other countries' rates have fallen, the US's has stayed high.
But corporations are also very good at finding ways to pay less. The GAO found that all corporations who filed M-3s (a tax form for large and international corporations) paid an effective tax rate of 22.7 percent in 2013. Among profitable companies only, the rate was even lower, at 17 percent.
So even while the US corporate tax rate is high, its corporate tax revenue collections are low. That's one reason why, as of 2011, the US was on the low end of corporate tax revenue among OECD nations.
this sounds delightful and also that it will kill me
This Paul Prudhomme-inspired pie is essentially a sweet pastry crust filled with a savory mixture of Cajun-spiced ground pork and beef. It's topped with rich seasoned cream cheese, which turns bubbly and browned in the oven—in short, it's bliss on a plate.
Borrowing all the classic flavors of a campfire s'more, the Ideas in Food team creates a graham cracker cake that's flavored with browned butter, layered with a dulce de leche-spiked chocolate mousse, and topped with a toasted bourbon-marshmallow icing.
"Welcome to my restaurant; now please pay my employees."
That's tipping in a nutshell, according to Mark Ventura, a former waiter and an economics major at Miami University. Ventura was quoted last week in an article profiling the restaurant Packhouse Meats, which opened in January in Newport, KY. The restaurant has a no-tipping policy. Signs proudly announcing the embargo are on full display in the restaurant, and the credit card slip only has a place for your signature — no extra line for gratuity.
Plenty of people have written about the indignities of the American tipping system. English author Lynne Truss once compared visiting New York to visiting the Third World: "In this great financial capital ... tips are not niceties: give a 'thank you' that isn't green and foldable and you are actively starving someone's children." The Village Voice's Foster Kamer called tipping "an assault on fairness" for everyone involved in the transaction: "It reinforces an economically and socially dangerous status quo, while buttressing a functional aristocracy," he wrote in "The Death of Tipping". Meanwhile Michael Lewis, in one of the most well-known essays on the subject, argued against it from the consumer's perspective, comparing obligatory tipping — and what sort of tipping isn't in some sense obligatory? — to a government tax: "I feel we are creeping slowly toward a kind of baksheesh economy in which everyone expects to be showered with coins simply for doing what they've already been paid to do."
And yet for some reason, the customary practice of tipping endures, and all of us who read these essays and hope they catch on continue to actively participate in the system we seem to so publicly hate. As William Scott pointed out almost a century ago in The Itching Palm, one of the first published anti-tipping screeds, "There are abundant indications of a widespread distaste for the custom but the sentiment is unorganized and inarticulate."
Here, then, is the complete case against tipping.
1) Tipping lets employers off the hook
The first and most compelling rebuttal to any case against tipping is always BUT THAT'S HOW SERVERS MAKE MOST OF THEIR INCOME.
Yes, that's right — and that's the problem. Restaurant servers' hourly wages are ridiculously low — $2.13 an hour, in fact, in most states — and they do depend on tips to account for the bulk of their income. Taking away a server's tips would put her in a bad place financially — unless her employer ups her hourly wage. As it now stands, the tipping model lets business owners make more money at the expense of their employees' hard work. But rather than let their employees grovel for tips, restaurateurs ought to be required to pay their employees a living wage.
Consumers should not be responsible for paying the incomes of a restaurant owner's employees. For one thing, it isn't fair to the consumers. But more troublingly, it isn't fair to the employees: a server's ability to pay his bills shouldn't be subject to the weather, the frequency with which he touches his guests, or the noise level of the restaurant, all of which are factors that contribute to the tip amount left by a consumer.
Tipped workers — whose wages typically fall in the bottom quartile of all U.S. wage earners, even after accounting for tips — are a growing portion of the U.S. workforce. Employment in the full-service restaurant industry has grown over 85 percent since 1990, while overall private-sector employment grew by only 24 percent. In fact, today more than one in 10 U.S. workers is employed in the leisure and hospitality sector, making labor policies for these industries all the more central to defining typical American work life.
The Economic Policy Institute (EPI) also cites research showing that the poverty rate of tipped workers is nearly double that of other workers (as the chart below indicates), and that tipped employees are three times more likely to be on food stamps.
EPI also argues it is false to suggest that "these workers' tips provide adequate levels of income and reasonable economic security," as 2014 reports from the White House and the Congressional Budget Office argued. Further, they say, research clearly shows that poverty rates are reduced in those states where the minimum wage rate for tipped workers has been raised.
2) Tipping is undemocratic
"The itching palm is a moral disease," wrote Scott in his 1916. To him, tipping was a threat to the founding principle of democracy: that all men are created equal. Allowing an American citizen (i.e. the person being tipped) to adopt the posture of a sycophant is deeply undemocratic, argued Scott, because it limits self-respect to the "governing classes" (i.e. the tippers).
According to Michael Lynn, a professor of consumer behavior at the Cornell University School of Hotel Administration, the practice of tipping originated in Europe and only later migrated to America just after the Civil War. (As for why the practice started in Europe in the first place, Kamer discusses different theories.) Wealthy Americans returning home from European vacations wanted to show off what they'd learned abroad, and so they started tipping their service workers.
Tipping, in other words, is rooted in an aristocratic tradition. It should come as no surprise that tipping took off in Europe, a continent that promoted a clear distinction between the servant class and the higher rungs of society. But as Scott notes, America prides itself on not distinguishing social groups based solely on their financial means. In fact, he writes, "Tipping, and the aristocratic idea it exemplifies, is what we left Europe to escape."
Scott isn't the only one with this view. According to Yoram Margalioth of Tel Aviv University Law School, tipping in America was at first "met with fierce opposition as fostering a master-servant relationship [was] ill suited to a nation whose people were meant to be social equals." The Anti-Tipping Society was founded in 1904 in Georgia, and convinced its 100,000 members to forswear tipping for an entire year. Labor unions, too, came out against tipping, as did the president of the American Federation of Labor, Samuel Gompers. Opposition to tipping finally got codified into law when Washington State passed a no-tipping law in 1909. Five other states followed suit, though, according to Wachter, none of the laws were enforced, and as a result, all of them were repealed by 1926.
Today, tipping continues to be de rigueur in America, while, ironically, the European custom has been replaced on its home continent by a service charge.
3) Tipping doesn't do what it's supposed to do
As Margalioth notes, many people view tipping "as an informal service contract between the customer and the waiter, acting as a consumer-monitoring mechanism." This informal contract reinforces the belief that customers are able to monitor the service they receive and reward it accordingly. In other words, the argument goes, tipping motivates the server to do her best work. This makes some sense at least in theory, but in reality, it's really, really wrong.
After a quantitative study of more than 2,600 dining parties at 21 different restaurants, Lynn concluded that "tips are only weakly related to service." As Margalioth notes, the most important factor for patrons deciding on a tip amount is the size of the check, not the efficiency, or inefficiency, of the server; the quantity of the food they order, not the quality with which it's served to them. This finding, Lynn argues, "raises serious questions about the use of tips as a measure of server performance or customer satisfaction as well as the use of tips as incentives to deliver good service." It also emphasizes the fact that tipping is really, painfully unfair: how in the world is bringing a customer a $1,000 bottle of wine any more work than bringing her a $60 bottle? If Lynn is right, and customers generally tip on the check amount alone, the difference between the two hypothetical 20 percent gratuities would be $188 — a $200 tip versus a $12 tip.
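The wine example is just a flat percentage applied to two check sizes; a minimal sketch, using the article's hypothetical check amounts and a 20 percent rate:

```python
# Tip computed as a flat percentage of the check, per the
# article's hypothetical. Check sizes are the article's examples.
def tip(check, rate=0.20):
    return round(check * rate, 2)

expensive = tip(1000)  # $1,000 bottle of wine
cheap = tip(60)        # $60 bottle of wine
print(expensive, cheap, expensive - cheap)  # 200.0 12.0 188.0
```

The server does essentially the same work either way, but the check-based tip differs by $188.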
Steve Dublanica, author of two books on the service industry, said that any server would agree with Lynn's findings:
If you've waited tables, you know this is true. I learned this on the job years ago. You can give people amazing service and they'll stiff you. You can give them horrible service, and they can give you a great tip. There's no rhyme or reason to it. If only 2 percent of the tip is based on the service, what are the other 98 percent doing? If they're not tipping on service, they're tipping on psychological processes that are happening.
Jay Porter, owner of the Linkery restaurant in San Diego, said it's "silly" to think that servers are motivated merely by prospective tips. "Servers are motivated to do a good job in the same ways that everyone else is," he wrote in Slate, noting that they're motivated by wanting to keep their jobs and earn raises, and because they take pride in their work. He added: "In any workplace, everyone is required to perform well, and tips have nothing to do with it."
Not that tipping isn't a powerful motivator. It is — just not for the employee. The ability to hire labor at around two bucks an hour is great news for employers looking to turn a profit. Again, that's problematic. (See #1.)
4) Tipping is discriminatory … and it might be illegal
The way we tip reflects our prejudices, argues Freakonomics' Stephen Dubner. Here's what he told Brian Lehrer: "The data show very clearly that African Americans receive less in tips than whites, and so there is a legal argument to be made that as a protected class, African American servers are getting less for doing the same work. And therefore, the institution of tipping is inherently unfair."
But not only are black servers making less money than white servers — black diners are perceived to be leaving less money than white diners. Data collected in 2009 from over 1,000 servers all across the US "found that over sixty-five percent [of servers] rated African Americans as below average tippers." As a result, restaurant workers of all colors dislike waiting on black customers, studies found. The economy of tipping is so racially charged that both servers and diners are affected by prejudice.
Racism isn't the only kind of discrimination baked into the American tipping system. Female servers, too, face routine discrimination. As Lynn told Dubner: blonde, slender, larger-breasted women in their 30s earn some of the highest tips. Granted, the decision of how large a tip to leave is up to the subjective whims of the tipper, and different people have their own aesthetic preferences. But when a server's main source of income is her tips, and if those tips are regulated by the prejudices of the tippers, then a case could potentially be made that certain wage practices of restaurants are discriminatory.
This is the very case Kamer made (emphasis mine): "In 1971's Griggs v. Duke Power, the Civil Rights Act of 1964 was ruled to prohibit businesses with discriminatory practices against those protected under it, even if that effect is unintended. Tipping, which has been proven to be discriminatory, could be downright unconstitutional."
5) Tipping might be psychologically harmful
In response to the question, "Do you feel pressured to tip at a restaurant even if you feel you received bad service?" 70 percent of those polled answered "yes." Margalioth wrote, "This seems to prove the social norm of tipping is so strong that many people feel extorted to tip."
But why do we feel such an intense pressure to tip? According to Lynn, we tip in order to prevent feeling guilty or ashamed for violating the social norm of tipping: "Perhaps [the tipper] dislikes having someone disapprove of her," he says. Or maybe she's "internalized some standard of fairness that leads her to feel guilty if she does not reward the server for his efforts." Ofer H. Azar, economist and professor at Ben-Gurion University of the Negev, agrees with Lynn: "people tip because this is the social norm and, when they disobey the norm, they suffer a psychological disutility because of social disapproval, embarrassment, and feeling guilty and unfair."
There's another way tipping could take a toll on our psyches. Margalioth argues that tipping is a form of "negative externality imposed by wealthy people on the rest of society." According to this theory, when top earners spend more money, those who earn less feel pressured to keep up, as research has shown. In other words, he suggests, middle-class and poor Americans feel like they have to be as "visibly impressive" as wealthier Americans. This pressure might be a motivating factor in tipping, he says.
The upshot of this research is summed up by Lynn: "I think it's quite possible that tipping norms undermine overall satisfaction or happiness."
6) Tipping is not really charitable
Arguing that we do away with tipping seems like a mean thing to do: the world needs more charity, thank you, so you should keep tipping your server. But the problem with this argument is that leaving a gratuity is not actually charitable.
The word "gratuity" comes from a word meaning "gift." But that word doesn't really make sense in the context of tipping, which is, of course, a quid pro quo arrangement. You don't gift the waiter money, you release funds to him that he, by virtue of simply being your server, has earned. He is rightfully entitled to that money, and you are ethically obligated to give him by social norms that seem to be as binding as any government law.
Scott sees tipping as "misguided generosity." While we are right to feel gratitude for those serving us, he argues we go awry when we feel obligated to express our "appreciation in terms of money." After all, notes Scott, "Self respect is satisfied with verbal appreciation."
Of course, verbal appreciation won't pay the bills of tipped workers, almost 13 percent of whom live in poverty. But rather than satisfy our consciences with trivial thoughts about how tips are really charitable, we should start holding restaurant owners accountable for their employees' wages. If they argue that servers actually like the tipping system because they come out on top, we should ask these owners to put their money where their mouths are and cut their own pay down to two bucks an hour.
Sweet potatoes started out as a way of stretching expensive refined flour in biscuit doughs for those who couldn't otherwise afford it, but they're not just an economy measure: they create moist, flavorful biscuits that are even more likely to be tender, because some of that sweet potato replaces what would otherwise be gluten-forming wheat flour. Here are the steps to make them.
You've no doubt noticed that organic foods are a fair bit more expensive at the grocery store. An organic head of lettuce can cost twice as much as a regular one. But is it any healthier for you?
In recent years, most scientists have answered this question with a flat "no." There simply doesn't seem to be much evidence that organic foods are more nutritious than conventional foods.
In 2009, the United Kingdom's Food Standards Agency reviewed 67 studies on this topic and couldn't find much difference in nutrient quality between the two food types. In 2012, a larger review of 237 studies published in the Annals of Internal Medicine also found that organic foods didn't appear to be any healthier or safer to eat than their conventionally grown counterparts.
But there have long been dissenters who argue that there must be some health benefits to organic. And a July 2014 study in the British Journal of Nutrition, led by Carlo Leifert of Newcastle University, reopened this debate by adding a small twist. The researchers reviewed 347 previous studies and found that certain organic fruits and vegetables had higher levels of antioxidants than conventionally grown crops.
Unfortunately, this doesn't prove very much by itself. No one knows if those moderately higher levels of antioxidants actually boost your health. For that to happen, they'd have to be absorbed into your bloodstream and distributed to the right organs — and there just hasn't been much good research showing that. For now, there's little evidence to suggest concrete health benefits from eating organic.
In the meantime, some commentators have suggested that this endless health debate has become a distraction. Marion Nestle of New York University argues that the best reasons to buy organic produce involve environmental impacts and production values. Any nutritional benefit is a "bonus," if there even is one.
Other experts point out that most Americans don't eat enough fruits or vegetables of any type — a far more pressing health concern than whatever minor differences may exist between organic and conventional food. "What's missing in this debate is the important fact that the best thing consumers can do is to eat lots of fruits and vegetables, period, regardless of whether they are produced organically or conventionally," says Carl Winter of the University of California Davis. (He's also skeptical, by the way, that organic food is any healthier for you.)
Here's an overview of this often-contentious topic:
It's not easy to compare organic and conventional foods
One major hurdle for anyone trying to compare "conventional" and "organic" foods is that these are incredibly broad terms.
In the United States, there's technically a dividing line between the two: farms certified as "organic" by the USDA are prohibited from using synthetic pesticides, petroleum-based fertilizers, or sewage sludge. Organic animals can't be fed antibiotics or growth hormones.
But that still leaves a lot of room for variation. Some conventional farms go heavy on the synthetic fertilizer and pesticides. But others spray more selectively or use alternative pest management techniques.
Likewise, some organic farms use natural pesticides that are considered "organic" but can nonetheless be quite toxic. And some organic farms use compost that can contain more contaminants like lead or cadmium than conventional fertilizer. It all depends on the situation — there's no single "conventional" farming system or single "organic" system.
What's more, there are endless variables that can affect the nutritional value of crops, from soil type to climate conditions to the crop cultivars being planted. Controlling for all these factors to make a grand statement about "organic" versus "conventional" farming is incredibly difficult.
And, not surprisingly, scientists have struggled to find clear nutritional differences so far. One 2013 study found that organic tomatoes have more vitamin C, but they're also smaller than conventional tomatoes, so the differences are fairly minimal. Another 2013 study found that organic milk in the US contained more omega-3s — though this may be more about the types of feed used than a unique property of "organic" farming.
More recently, researchers have been conducting large meta-analyses — studies of studies — to try to pinpoint some big-picture lessons here. To date, those reviews have usually found little nutritional difference between organic and conventional produce (see here and here). But now comes a new study with a slight dissent.
A big 2014 study claimed possible health benefits for organic produce…
John Williamson holds a handful of flax seed December 13, 2012, on his 200-acre organic farm in North Bennington, Vermont. (Robert Nickelsberg/Getty Images)
In a big meta-analysis done in 2014, Leifert and his colleagues reviewed 347 studies comparing organic and conventional produce around the world. They concluded that organic fruits and vegetables had, on average, higher levels of antioxidants and lower levels of synthetic pesticide residue.
What's not clear, however, is whether these differences have any actual health impact on human beings. And there were a few sharp criticisms of the study. Let's take a closer look at the paper's findings:
1) Organic produce, on average, had higher levels of antioxidants. This is the part of the study that got the most attention from the media. On average, the authors found, organic produce had higher levels of flavonoids, phenolic acids, anthocyanins, and carotenoids — in some cases, 20 to 40 percent higher.
These compounds — referred to as "antioxidants" — are essentially plant defenses, produced when the plants are stressed by their environment. So one possibility is that organic crops create more of them since they're not protected by chemical pesticides and have to deal with more pests.
But there's a catch: we don't really know whether these compounds improve people's health. We don't know how many of these extra antioxidants are actually absorbed by humans. We don't know what the optimal level of antioxidant intake actually is. It's true that some studies have shown that a diet rich in fruits and vegetables can protect against disease. But those studies looked at people mostly eating conventionally grown vegetables — and the precise role of antioxidants is still being debated.
The University of Washington's Charles Benbrook, a co-author of that 2014 pro-organic study, noted this point in a blog post: "Our team, and indeed all four reviews, acknowledges that many questions remain about the bioavailability of plant-based antioxidants, how necessary they are at different life stages, and how inadequate intakes shift the burden of disease." He added that there were some reasons to think those antioxidants are beneficial, but it's hard to say for sure.
2) Organic grains, on average, had 48 percent lower cadmium levels. Cadmium is a heavy metal that is taken up by plants in the soil and is harmful to humans in very high doses. So at first glance this seems like a point in favor of organic.
Yet it's hard to see why organic farming per se would lead to lower cadmium levels — this may just reflect differences in various soils. (Crops from some organic farms can be quite heavy in cadmium.)
It's also not clear this is a pressing health concern. The EPA says the average American gets 0.0004 micrograms of cadmium per kilogram of body weight per day from food — 10 times lower than levels that would cause kidney damage. If you want to reduce your cadmium intake, focus first on quitting smoking and eating less shellfish. Those are bigger sources.
3) Organic fruits and vegetables had less synthetic pesticide residue. This shouldn't be too surprising — synthetic pesticides aren't used on organic farms, and the study didn't test for organic pesticides (which can themselves be quite toxic). Still, some experts are unconvinced that pesticide residue is a big problem either way.
"From my 27 years of research on pesticides and food safety, I remain skeptical that the extremely low levels of pesticide residues we encounter from foods have any impact on public health, and slightly lowering such levels even more would not have any additional impact," Carl Winter, a pesticide and risk assessment specialist at the University of California Davis, wrote to me in an email. "Our typical exposure to pesticide residues is at levels 10,000 to 10,000,000 times lower than doses that cause no observable effect in laboratory animals that are fed pesticides daily throughout their entire lifetimes." (Here's some of his research on that.)
4) Organic produce had lower levels of protein, fiber, and nitrates. This was another finding that didn't get as much attention and might actually be a point in favor of conventional produce, as Tom Sanders, a nutritional scientist at King's College London, points out. Note, however, that there's still some debate over whether higher or lower levels of nitrates in vegetables are preferable.
Yet the pro-organic study also attracted some criticism
As with all big studies on a contentious topic, Leifert's pro-organic study also received a fair bit of criticism — you can see a roundup here. A few points made:
1) The health benefits of those antioxidants are still uncertain. "There is no evidence provided that the relatively modest differences in the levels of some of these compounds would have any consequences (good or bad) on public health," said Richard Mithen of the Institute for Food Research. He added this twist: "The additional cost of organic vegetables to the consumer and the likely reduced consumption would easily offset any marginal increase in nutritional properties, even if they did occur, which I doubt."
2) The analysis may have included too many low-quality studies. Alan Dangour — the scientist who led the 2009 review that found no significant differences between conventional and organic food — argued that Leifert likely included too many low-quality studies in his review. Leifert shot back that Dangour's own study excluded too many studies. This is often a point of contention when dealing with meta-analyses.
3) Comparisons between "organic" and "conventional" may be inherently flawed. And still other experts reiterated the point made above that it's difficult to compare "organic" to "conventional" farming because practices vary so widely. For instance: on average, cadmium levels may be lower in organically grown cereal crops. But some organic farms use compost that's extremely high in cadmium.
To make this even trickier, the Leifert review surveyed studies across the entire world — 70 percent of the studies were in Europe, with the rest in the United States, Canada, Brazil, and Japan. There may well be regional variations within that average.
Some commentators have suggested that the debate over whether organic or conventional food is healthier is becoming increasingly useless.
Back in 2009, food writer James McWilliams pointed out that only about 2.5 percent of food eaten in the United States is organic — and its typical consumers tend to be college-educated and fairly well-off. That means we're quibbling over marginal nutritional differences (if any) for a population that's already fairly healthy.
By contrast, about 73 percent of the US population doesn't eat the recommended five or more servings of fruits and vegetables each day. For many nutritionists, that's a much more pressing concern — fixing that shortfall would swamp any health benefits organic food might have.
Indeed, a few experts wonder if the endless debate over organic versus conventional might even be counterproductive: "I worry that some consumers might actually reduce their consumption of fruits and vegetables because of pesticide residue concerns," notes Winter, "which would do them more harm than good."
Even some proponents of organic food have suggested that the nutrition question is a bit of a sideshow. On her blog, Marion Nestle argues that the case for buying organic produce hinges more on how our food is produced and concern for the environment: "As I said, if they are more nutritious, it's a bonus, but there are plenty of other good reasons to prefer them."
Nestle doesn't list those reasons, but proponents often cite things like less fertilizer runoff and pollution or fewer antibiotics being used in farms or less pesticide exposure for farmworkers.
Still, the Guardian recently cited one survey suggesting that at least 55 percent of organic buyers list "healthy eating" as a reason for purchasing. So it's unlikely this debate will go away anytime soon — and it'll remain of keen interest to a lot of people.
Wealthy donors with business ties to the city keep giving to the mayor.
by Mick Dumke and Ben Joravsky
July 10 was another productive day for Mayor Rahm Emanuel's fund-raising machine. Chicago Forward, the political action committee put together by some of the mayor's friends and run by his former aides, reported collecting $325,000 in contributions that day from just six people.…
The only problem? The test is completely meaningless.
"There's just no evidence behind it," says Adam Grant, an organizational psychologist at the University of Pennsylvania who's written about the shortcomings of the Myers-Briggs previously. "The characteristics measured by the test have almost no predictive power on how happy you'll be in a situation, how you'll perform at your job, or how happy you'll be in your marriage."
The test claims that based on 93 questions, it can group all the people of the world into 16 different discrete "types" — and in doing so, serve as "a powerful framework for building better relationships, driving positive change, harnessing innovation, and achieving excellence." Most of the faithful think of it primarily as a tool for telling you your proper career choice.
But the test was developed in the 1940s based on the totally untested theories of Carl Jung and is now thoroughly disregarded by the psychology community. Even Jung warned that his personality "types" were just rough tendencies he'd observed, rather than strict classifications. Several analyses have shown the test is totally ineffective at predicting people's success in various jobs, and that about half of the people who take it twice get different results each time.
Yet you've probably heard people telling you that they're an ENFJ (extroverted intuitive feeling judging), an INTP (introverted intuitive thinking perceiving), or another one of the 16 types drawn from Jung's work, and you may have even been given this test in a professional setting. Here's an explanation of why these labels are so meaningless — and why no organization in the 21st century should rely on the test for anything.
The Myers-Briggs rests on wholly unproven theories
Carl Jung in 1960. (Douglas Glass/Paul Popper/Popperfoto/Getty Images)
In 1921, Jung published the book Psychological Types. In it, he put forth a few different interesting, unsupported theories on how the human brain operates.
Among other things, he explained that humans roughly fall into two main types: perceivers and judgers. The former group could be further split into people who prefer sensing and others who prefer intuiting, while the latter could be split into thinkers and feelers, for a total of four types of people. All four types, additionally, could be divided based on attitudes into introverts and extroverts. These categories, though, were approximate: "Every individual is an exception to the rule," Jung wrote.
Even these rough categories, though, didn't come out of controlled experiments or data. "This was before psychology was an empirical science," says Grant, the Penn psychologist. "Jung literally made these up based on his own experiences." But Jung's influence on the early field was enormous, and this idea of "types" in particular caught on.
Jung's principles were later adapted into a test by Katharine Briggs and her daughter Isabel Briggs Myers, a pair of Americans who had no formal training in psychology. To learn the techniques of test-making and statistical analysis, Myers worked with Edward Hay, an HR manager for a Philadelphia bank.
They began testing their "Type Indicator" in 1942. It copied Jung's types but slightly altered the terminology, and modified it so that people were assigned one possibility or the other in all four categories, based on their answers to a series of two-choice questions.
Raise two (the number of possibilities in each category) to the fourth power (the number of categories) and you get 16: the different types of people there apparently are in the world. Myers and Briggs gave titles to each of these types, like the Executive, the Caregiver, the Scientist, and the Idealist.
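That count is easy to verify by enumerating the combinations directly; a quick sketch, using the test's standard letter codes (E/I, S/N, T/F, J/P):

```python
from itertools import product

# The four dichotomies, one letter per pole. Enumerating every
# combination yields the 2**4 = 16 four-letter type codes.
dichotomies = [("E", "I"), ("S", "N"), ("T", "F"), ("J", "P")]
types = ["".join(combo) for combo in product(*dichotomies)]

print(len(types))  # 16
print(types[:4])   # ['ESTJ', 'ESTP', 'ESFJ', 'ESFP']
```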
The test has grown enormously in popularity over the years — especially since it was taken over by the company CPP in 1975 — but has changed little. It still assigns you a four-letter type to represent which result you got in each of the four categories:
With most traits, humans fall on different points along a spectrum. If you ask people whether they prefer to think or feel, or whether they prefer to judge or perceive, the majority will tell you a little of both. Jung himself admitted as much, noting that the binaries were useful ways of thinking about people, but writing that "there is no such thing as a pure extravert or a pure introvert. Such a man would be in the lunatic asylum."
But the test is built entirely on the premise that people are all one or the other. It arrives at its conclusions by giving people questions such as "You tend to sympathize with other people" and offering them only two blunt answers: "yes" or "no."
It'd be one thing if there were good empirical reasons for these strange binary choices that don't seem to describe the reality we know. But they come from the disregarded theories of an early-20th-century thinker who believed in things like ESP and the collective unconscious.
All four of the categories in the Myers-Briggs suffer from these kinds of problems, and psychologists say they aren't an effective way of distinguishing between different personality types. "Contemporary social scientists are rarely studying things like whether you make decisions based on feelings or rational calculus — because all of us use both of these," Grant says. "These categories all create dichotomies, but the characteristics on either end are either independent from each other, or sometimes even go hand in hand." Even data from the Myers-Briggs test itself shows that most people are somewhere in the middle for any one category, and just end up being pigeonholed into one or the other.
This is why some psychologists have shifted from talking about personality traits to personality states — and why it's extremely hard to find a real psychologist anywhere who uses the Myers-Briggs with patients.
There's also another related problem with these limited choices: look at the chart above, and you'll notice that words like "selfish," "lazy," or "mean" don't appear anywhere. No matter what type you're assigned, you get a flattering description of yourself as a "thinker," "performer," or "nurturer."
This isn't a test designed to accurately categorize people, but rather a test designed to make them feel happy after taking it. This is one of the reasons it's persisted for so many years in the corporate world after being disregarded by psychologists.
The Myers-Briggs provides inconsistent, inaccurate results
Theoretically, people might still get value out of the Myers-Briggs if it accurately indicated which end of a spectrum they were closest to for any given category.
But the test is notoriously inconsistent. Research has found that as many as 50 percent of people arrive at a different result the second time they take the test, even if it's just five weeks later.
That's because the traits it aims to measure aren't the ones that are consistently different among people. Most of us vary in these traits over time — depending on our mood when we take the test, for instance, we may or may not think that we sympathize with people. But the test simply tells us whether we're "thinking" or "feeling" based on how we answered a handful of binary questions, with no room in between.
Another indicator that the Myers-Briggs is inaccurate is that several different analyses have shown it's not particularly effective at predicting people's success at different jobs.
If the test gives people such inaccurate results, why do so many still put stock in it? One reason is that the flattering, vague descriptions for many of the types have huge amounts of overlap — so many people could fit into several of them.
This is called the Forer effect, and is a technique long used by purveyors of astrology, fortune telling, and other sorts of pseudoscience to persuade people they have accurate information about them.
The Myers-Briggs is largely disregarded by psychologists
All this is why psychologists — the people who focus on understanding and analyzing human behavior — almost completely disregard the Myers-Briggs in contemporary research.
Search any prominent psychology journal for analyses of personality tests, and you'll find mentions of several different systems that have been developed in the decades since the test was introduced, but not the Myers-Briggs itself. Apart from a few analyses finding it to be flawed, virtually no major psychology journals have published research on the test — almost all of it appears in dubious outlets like The Journal of Psychological Type, which were specifically created for this type of research.
CPP, the company that publishes the test, has three leading psychologists on its board, but none of them has used it whatsoever in their research. "It would be questioned by my academic colleagues," Carl Thoresen, a Stanford psychologist and CPP board member, admitted to the Washington Post in 2012.
Apart from the introversion/extroversion aspect of the Myers-Briggs, the newer, empirically driven tests focus on entirely different categories. The five-factor model measures people's openness, conscientiousness, extroversion, agreeableness, and neuroticism — factors that do differ widely among people, according to actual data collected. And there's some evidence that this scheme may have some predictive power in determining people's ability to be successful at various jobs and in other situations.
One thing it doesn't have: the marketing machine that surrounds the Myers-Briggs.
The Myers-Briggs is useful for one thing: entertainment. There's absolutely nothing wrong with taking the test as a fun, interesting activity, like a BuzzFeed quiz.
But there is something wrong with CPP peddling the test as "reliable and valid, backed by ongoing global research and development investment." The company makes an estimated $20 million annually, with the Myers-Briggs as its flagship product. Among other things, it charges between $15 and $40 to each person who wants to take the test, and $1,700 to each person who wants to become a certified test administrator.
Why would someone pay this much to administer a flawed test? Because once you have that title, you can sell your services as a career coach to both people looking for work and the thousands of major companies — such as McKinsey & Co., General Motors, and a reported 89 of the Fortune 100 — that use the test to separate employees and potential hires into "types" and assign them appropriate training programs and responsibilities. Once certified, test administrators become cheerleaders of the Myers-Briggs, ensuring that use of the outdated instrument is continued.
If private companies want to throw their money away on the Myers-Briggs, that's their prerogative. But about 200 federal agencies reportedly waste money on the test too, including the State Department and the CIA. The military in particular relies heavily on the Myers-Briggs, and the EPA has given it to about a quarter of its 17,000 employees.
It's 2015. Thousands of professional psychologists have evaluated the century-old Myers-Briggs, found it to be inaccurate and arbitrary, and devised better systems for evaluating personality. Let's stop using this outdated test — which has about as much scientific validity as your astrological sign — and move on to something else.
Correction: This piece previously stated that the military uses the Myers-Briggs for promotions in particular, rather than using it as a general tool.
As much as I now love real-deal Sichuan kung pao chicken, my absolute favorite Chinese dish as a kid was this mildly spiced Americanized version — and to be honest, I still love it today. Just because it's a Chinese-American standard, complete with slightly gloppy sauce and mild heat, doesn't make diced chicken with peppers and peanuts any less delicious. Here's how to make it at home.
College graduates in the class of 2008 had it rough. They started college when the economy was thriving and took on more student loan debt than anyone before them.
Then, they graduated just as the Great Recession rushed in. The Class of 2008 was blindsided by an economic reality that they hadn't planned on and weren't prepared to handle.
Back in 2009, a representative survey of American four-year college graduates found they had a 9 percent unemployment rate.
Now that same survey, conducted by the Department of Education, has caught up with the Class of 2008 again. The results are a little more hopeful this time around — but they show that the class of 2008 is still lagging when compared with college graduates as a whole. The economic scars from graduating into a recession are deep, even five years later.
1) Unemployment rates are better, but they're still higher than for college graduates overall
Overall, 6.7 percent of the Class of 2008 was unemployed in 2012, the survey of 17,000 graduates found. That's a big improvement over 2009's 9 percent unemployment rate, and it was below the national average that year.
But that doesn't mean it's good news. The unemployment rate is supposed to be lower than average for college graduates. The reason a college degree remains a valuable investment, even as student debt rises, is that it's supposed to help you in the labor market.
Four years after graduating, relatively recent graduates hadn't caught up with other adults — even the group closest to their age. Adults aged 25 to 34 with at least a bachelor's degree had an unemployment rate of 4 percent in 2012.
2) In real terms, salaries were lower in 2012 than they were for young adult college graduates a decade earlier
Just over two-thirds — 71 percent — of the Class of 2008 were working full-time, at least 35 hours per week, in 2012. But those full-time workers were making an average annual salary of $52,500.
The median salary was lower: $46,000. The recession's first graduating class took a salary hit as well — that's 8 percent less, adjusted for inflation, than college graduates aged 25 to 34 earned in 2002.
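The "adjusted for inflation" comparison works by scaling both salaries to the same year's dollars with a price index. The sketch below illustrates the arithmetic only: the index values are approximate CPI-U annual averages, and the 2002 salary is a hypothetical figure chosen to reproduce a decline on the order the survey reports, not a number from the survey itself.

```python
# Approximate CPI-U annual averages (assumed, for illustration)
CPI_2002 = 179.9
CPI_2012 = 229.6

def to_2012_dollars(amount_2002):
    # Scale a 2002 dollar amount by the ratio of price levels
    return amount_2002 * CPI_2012 / CPI_2002

median_2012 = 46_000   # Class of 2008 median, from the survey
median_2002 = 39_200   # hypothetical 2002 median, for illustration

# Compare like with like: both figures in 2012 dollars
real_change = median_2012 / to_2012_dollars(median_2002) - 1
print(f"{real_change:.0%}")  # prints "-8%"
```

The same ratio trick works in either direction; deflating the 2012 figure into 2002 dollars gives an identical percentage change.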
3) For health care majors, the economy is great. For social science majors, the recession never ended.
Just how much the recession hurt depends on what graduates majored in. The economic picture looks rosy for 2008 graduates with a degree in a health care field: they had an unemployment rate of 2.2 percent in 2012. In other STEM fields, the unemployment rate was 5 percent.
But even in 2012, social science majors were near double-digit unemployment; their rate was 9.6 percent. Humanities majors weren't far behind, at 9 percent. That's down from the 13 percent unemployment rate they had in 2009, but it's not a healthy job market by any means.
4) Older students and for-profit college graduates might not have gotten the labor market boost they hoped for
Students in their 30s who earn bachelor's degrees, or students who go to for-profit colleges, are often the ones who see education as economically transformative. They're not enrolled in college to live in the dorms and have a coming-of-age experience; they're there because they want an education or need a degree.
Unfortunately, they're also the graduates who fared the worst. It's hard to say why — they might have attended less prestigious colleges overall, or have come from disadvantaged backgrounds, or have other strikes against them that the survey doesn't measure. But the unemployment rate for people who graduated after age 30 (who were at least 34 in 2012, when the survey was administered) was 9.6 percent.
For-profit colleges had a higher rate than public or nonprofit colleges, at 11.9 percent. But graduates from for-profit colleges who did find full-time work were earning a higher salary — $51,000 per year — than graduates from other sectors of higher education.
As China and India continue their fairly rapid paces of economic growth, a greater and greater share of extreme poverty is going to be concentrated in sub-Saharan Africa. But if we're going to make progress there, we need to have good numbers about how various economies are faring, how income is distributed within them, and so forth.
The trouble, Simon Fraser University economist Morten Jerven argues, is that those numbers are often incomplete at best and downright false at worst. It's a problem that came into sharp relief recently when Nigeria "rebased" its GDP numbers, doubling its GDP in the process.
Jerven and I spoke on the phone about his book on the topic, Poor Numbers, the extent of the problem, and how to fix it. If you're interested in learning more, the Center for Global Development's Amanda Glassman and Alex Ezeh have a great new report on improving data quality. Columbia's Chris Blattman had a smart post on why it might make sense for African governments to prioritize things other than improved data collection, and Slate's Josh Keating had a helpful round-up of the debate here.
Dylan Matthews: What are the basic problems with numbers in the countries you study?
Morten Jerven: We often take GDP as a given, objective fact, much as we assume that, when you listen to the weather report, the meteorologist has measured air pressure, temperature, and wind speed, and is reporting those physical observations. And we tend to move along thinking of inflation, unemployment, and economic growth as objective metrics. But they aren't.
To arrive at GDP per capita, for example, you have to add up all the goods and services used in one country in any given year and then compare it with last year's and control for price changes and so forth. That's complicated enough in a country like the US, but you still have some helpful things. The US government collects taxes, so that means you know income for persons. It also collects corporate taxes, so you know the income of corporations. It collects labor statistics, so it has a lot of information and it's the task of the statistical bureau to aggregate it.
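The bookkeeping Jerven describes can be sketched in a few lines. All the numbers below are made up for an imaginary economy; the point is only the mechanics: sum up output, deflate by a price index so growth reflects real activity rather than price changes, then divide by population.

```python
def real_gdp_per_capita(outputs, price_index, population, base_index=100.0):
    # Add up all recorded goods and services (nominal GDP),
    # control for price changes by deflating to base-year prices,
    # then express the result per person
    nominal_gdp = sum(outputs.values())
    real_gdp = nominal_gdp * base_index / price_index
    return real_gdp / population

# Hypothetical two-year comparison (sector outputs in current prices,
# population in millions)
year1 = real_gdp_per_capita({"farming": 600, "services": 300, "trade": 100},
                            price_index=100.0, population=10.0)
year2 = real_gdp_per_capita({"farming": 620, "services": 340, "trade": 120},
                            price_index=104.0, population=10.2)

growth = year2 / year1 - 1
print(f"real GDP per capita growth: {growth:.1%}")  # prints "1.8%"
```

Every input here is exactly what poor statistical systems struggle to observe: the sector outputs (much activity goes unrecorded), the price index (surveys are expensive), and even the population denominator.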
In the countries I study, the ones in sub-Saharan Africa, getting that kind of information is much more expensive and much more cumbersome because a lot of economic activity goes unrecorded. Yet we still need this data to rank countries. We'd still like to analyze whether Ghana is richer than Nigeria or whether Tanzania has done better over the past decade than Kenya has. We can use that to make policy recommendations, that the World Bank should fund this project rather than that one, or that Burkina Faso should get support for poverty reduction whereas Uganda no longer has such a problem, for example.
The major takeaway I have in my book is that the numbers that we use to make these decisions are poor. That's why I call the book Poor Numbers. We know much less about income and growth in African countries and in poor countries generally than we would like to think.
DM: What sorts of things are causing the numbers to be so poor? Is it mostly an issue of corruption, that you can't trust the government agencies putting them out, or is it an infrastructure problem, that they're difficult to collect?
MJ: You can think of the problems of official statistics as partly a simple knowledge problem, a lack of recording, and partly a problem of politicization and political tampering. And we know that the political tampering problem is not particular to poor countries: the scandal in Greece showed that debt data was understated as a share of GDP, and there has been lots of disagreement over inflation statistics coming out of Argentina. So the political problem applies universally. What I stress in my book is that there's a particular knowledge problem relating to poor countries which is more fundamental than we tend to think.
It's a combination of two factors. One is that economic transactions are small and go on in areas that aren't accurately recorded: a lot of individuals aren't taxed, a lot of property is not registered, a lot of businesses aren't registered, and you don't have formal contracts. There's simply a lack of recording. The second is that in a country like Tanzania, for instance, the state institutions are less resourced and less equipped to undertake the surveys needed to get that kind of information. If you want information about food production in rural areas or expenditure patterns for poor people in big cities, you have to go to people's houses and ask what they earn and what they spend money on. So you need to do a survey or census, and that is expensive. African countries have had scarce resources over the past two decades, and so statistical systems have become relatively underfunded.
One of the headline things that shows this very clearly, about lacking resources to update statistical systems, occurred recently in Nigeria. In April, on a Sunday, the director of statistics in Nigeria announced new GDP numbers for Nigeria, and it was quite surprising because it turned out that the new numbers showed that GDP had doubled compared to the old numbers in use just the day before. What happened is that, in Nigeria, they had not updated what is called the benchmark for GDP estimations since 1990. Almost a quarter of a century had passed since Nigeria changed the sources and methods of how they estimate the size of the economy. This change alone meant that Nigeria suddenly became bigger than the South African economy, and the total GDP of Africa increased by 15 to 20 percent. The previously unrecorded economic activity in Nigeria was 58 times the size of the Malawian economy. The scale of uncertainty regarding short- and medium-term trends in economic growth is quite big.
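A back-of-the-envelope version of that rebasing arithmetic makes the scale concrete. The dollar figures below are rough, illustrative magnitudes in the vicinity of what was reported around the rebasing, not official statistics.

```python
# Approximate, illustrative figures in billions of dollars (assumed)
old_nigeria_gdp = 270.0   # under the 1990 benchmark
new_nigeria_gdp = 510.0   # after rebasing the sources and methods
malawi_gdp = 4.2          # for scale

# Output that existed all along but simply wasn't being recorded
newly_recorded = new_nigeria_gdp - old_nigeria_gdp

print(new_nigeria_gdp / old_nigeria_gdp)   # roughly doubled
print(newly_recorded / malawi_gdp)         # dozens of Malawian economies
```

That the measured size of an economy can jump this much overnight from a methodology update, not from any real change in activity, is exactly Jerven's point about the precision of these numbers.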
We know much less about income and growth in African countries and in poor countries generally than we would like to think
DM: How much do you think collecting this kind of data should be a job for individual countries as opposed to the World Bank or IMF or other development institutions, which might be more trustworthy in some peoples' eyes?
MJ: That's a common misunderstanding, that somehow IMF or World Bank data is better than national data, and it's striking that even scholars go around thinking, "I won't use government data from Ethiopia or Sudan, that data must be manipulated, or it's not based on good sources. I'd rather use IMF/World Bank data instead." The fact is that the IMF and World Bank don't have the resources to produce their own data. They collect, through formal and informal channels, official data, which they harmonize and then disseminate. The World Bank and IMF are simply data retailers.
So there's an important question, which I don't think has been dealt with adequately, of what the responsibility of the international community is in disseminating this data. Do they have the right to adjust country data? Should they warn data users more clearly about the inherent deficiencies of this data? It can make quite a big difference. The comparison between Nigeria and Sudan might be quite misleading. That goes back to what I was saying, this illusion of objective facts. If you get them in one Excel sheet, they seem to be equivalent facts, but in reality they are observations which vary quite radically in terms of precision.
the IMF and World Bank don't have the resources to produce their own data. They are simply data retailers
DM: How seriously should we take trends that emerge from this data about the overall performance of poor countries? I think the consensus is that we're making steady progress against extreme poverty in sub-Saharan Africa. How sure are we about that?
MJ: We lend too much credence to these aggregated trends. We don't really have a good estimate of how large our knowledge bubble is. One of the problems we have is there are big blind spots. When you get aggregated data on economic growth, it's driven by changes in the very big countries, it's driven by economic change in urban centers, in exports, in foreign direct investment. Our knowledge problem is doubly biased: we know less about poor economies, and we know less about the poor people in those poor economies.
There's a lot of invisible items in official statistics. We know less about food, we know less about rural populations. We know much more about the kind of goods that are traded between capital cities in the South and capital cities in the North, but we know very little about the extent of trade that goes between Uganda and Sudan, across their borders. There are some things we observe and some things we don't.
Because everyone wants an Africa-wide number, this problem is accentuated. For 2010-2013, a lot of small countries haven't even prepared their estimates yet, and so you use the data from reporting countries and extrapolate to cover the rest. We talk about trends in poverty and inequality based on surveys, but in countries like the Democratic Republic of Congo, which is a very big country, there is no poverty survey, there is no survey of health or income. Sudan is lacking. Angola, we don't have data on. And we tend to use data from Ghana, Senegal, and Tanzania and extrapolate those trends to be valid across the continent. That's worrying, in the sense that we might be misled by these development statistics if we lend too much credibility to them.
DM: What would your advice to policymakers in these countries be? Should they invest in better statistical systems? Should they use a larger number of indicators to get a fuller picture?
MJ: First off, it's not always true that policymakers benefit from better data. Sometimes having a big fluffy unknown size of the economy can be useful to them; ignorance can be bliss. If you update certain statistics, poverty might actually be more widespread than it used to be. With better statistics, you sometimes get more bad news. We have to think about the political economy of statistics. It is in the interest of people who are choosing what government to elect to know if the government has been successful in reducing poverty and creating employment and so forth. Clearly central banks would be interested in having data on the real sectors of the economy when deciding whether to lower or increase the interest rate.
What is important is that politicians and international organizations give independence to statistical institutions to freely collect and disseminate what they believe are justifiable facts based on statistical techniques of sampling and surveying. But understandably, that's not always what happens. We're in such a hurry, in the international donor and development community, to get facts that we don't invest in credibly produced facts. A big trend is to have "evidence-based" policies, or "paying for results" where you'd say, "We'll only fund primary school education if it can show improvements." But people forget to set up a system to actually measure that. So often what we get is policy-driven evidence rather than evidence-driven policy.
We're in such a hurry, in the international donor and development community, to get facts that we don't invest in credibly produced facts
DM: What should international organizations be doing to fix this? Should they be more straightforward about the numbers they put out there? Should they be actively trying to coordinate to make sure everyone's using similar numbers? What do you think their proper role in this is?
MJ: I think it might be unhealthy to keep publishing projections as if they were data. It might be unwise to just fill out gaps in data sets with extrapolations, because it creates an air of accuracy when there are in fact lots of holes in our information. If this is stated forthrightly, it will be easier for scholars, journalists, and politicians to navigate and to see the holes in the information, as opposed to giving this "we know everything about everything" impression.