Shared posts

28 May 19:30

You wish your yearbook quote was as cool as this feminist teen's

by Julia Jester

A high school yearbook quotation is the perfect opportunity for graduates to smash the patriarchy and combat homophobia in one fell swoop, which is what Caitlyn Cannon chose to do:

this is all I honestly care about right now

— lil snowball (@casualnosebleed) May 26, 2015

The 17-year-old self-proclaimed feminist said she tweaked a quote she found on Tumblr, and told The Huffington Post in an email, "I was tired of seeing the same old quotes from popular books and movies and authors, and I wanted to call attention to a problem that women face. I've never really been ashamed to say that I am gay, so the LGBT aspect was simply who I am."

She only just graduated from high school, but she already recognizes the challenges women face regarding equal pay

There are a variety of ways and metrics to measure equal pay, and the exact difference is disputed. However, the existence of a gender wage gap is not.

Based on recent data, the average year-round, full-time woman worker earned 78 cents for every dollar her male counterpart earned (not 75 cents, as Caitlyn states). When looking at weekly wages, however, the Labor Department finds that women make 82 percent of what men make. At the rate we're going, the US won't close the gender wage gap until 2058.

Unfortunately for Cannon and her female classmates, increased education doesn't fix the problem.

LGBT rights are as much about the question of equality in America as the fight for equal pay

Although LGBT Americans still face widespread workplace discrimination, social acceptance of LGBT people in the US is at a record high, with 60 percent of Americans now supporting the legalization of same-sex marriages. That Cannon laced together two pressing social issues in such a short space — LGBT rights and equal pay — speaks to the changing values and attitudes of younger Americans as much as it does to Cannon's creativity.

Cannon says she needs feminism. Feminism could use more champions like Cannon, too.

26 May 16:40

Millions of people will go hungry today, but Washington is dumping apples in fields

by Margarita Noriega

Washington is turning into the bad apple state, as growers are dumping "millions of pounds" of the state's official fruit in open fields, where the fruit is left to rot.

Local NBC affiliate station King 5 took video of what experts call the biggest apple dump in the state's history. Video shows a dump truck creating long rows of fresh apples on grassy hills. Locals told King 5 the trucks come multiple times a day.

The apples slowly turn entire fields red, attracting too many flies to count.

The dumping of the state's official fruit is a reminder of the complex tendencies of free markets. Various factors encourage growers to discard produce otherwise meant for human consumption, including a surplus of apples and ongoing labor disputes at West Coast ports such as Tacoma and Seattle.

It's shocking to see the loss of healthy, inexpensive produce happening at a scale that would turn green fields red. The USDA says more than 47 million Americans had trouble accessing food in 2012. Even in a country where food deserts still limit access to fresh produce, these growers have the opposite problem — no logistical infrastructure to get their crops into the hands of consumers.

28 May 01:57

Scientists combine evolutionary biology and mathematics to reverse antibiotic resistance [The Pump Handle]

by Kim Krisberg

For more than a decade, biologist Mariam Barlow has been working on the theory that administering antibiotics on a rotating basis could be a solution to antibiotic resistance. After years of research, Barlow had lots of data, but she needed a more precise way to make sense of it all — something that was so specific it could easily be used to treat patients. So, she joined forces with a team of mathematicians. And the amazing results could help solve an enormous, worldwide problem.

In a nutshell, the team of biologists and mathematicians developed a software program that generates a road map to reverse the evolution of antibiotic resistance in bacteria. In a study published earlier this month in the journal PLOS ONE, they unveiled a mathematical model that pinpoints optimal antibiotic cycling patterns with the highest probability of turning back the evolutionary clock of antibiotic resistance. Barlow, an evolutionary biologist and associate professor at the University of California-Merced School of Natural Sciences, told me that she and fellow researchers found cycles of antibiotics that could reverse resistance and drive bacteria back to a state observed in the 1960s — a state the researchers call the “wild type state.” So it’s not surprising that the software that makes it all possible was aptly named “Time Machine.”

“It makes sense that we would look for answers (to antibiotic resistance) in evolutionary biology,” Barlow told me. “Bacteria are so good at evolving — and they’ll probably find new ways we don’t even know about yet — but based on what we’ve seen, this is something we can deal with. Antibiotic resistance is something we can handle.”

The research comes at a time of widespread concern that without a coordinated, well-funded response to growing antibiotic resistance, medicine could lose some of its most effective, life-saving tools. Every year, according to the Centers for Disease Control and Prevention, about 2 million people become infected with antibiotic-resistant bacteria and at least 23,000 people die as a result of such infections. Last year, for example, the CDC described the development of antibiotic-resistant gonorrhea as an “urgent public health threat,” warning that we may run out of options for treating the sexually transmitted disease. Also at the federal level, the White House recently released its first “National Action Plan for Combating Antibiotic-Resistant Bacteria,” and in his fiscal year 2016 budget proposal, President Obama recommended doubling federal funding to combat antibiotic resistance, to $1.2 billion (of course, that recommendation has to get through Congress). Everyone agrees that antibiotic resistance is a huge problem with fatal consequences.

Thankfully, Barlow and her colleagues may have found an answer that not only tackles resistance, but helps preserve the effectiveness of existing antibiotics. And here’s how they did it. Researchers grew bacteria in the lab, exposed them to 15 different antibiotics, and measured their growth rates. Using those measurements, the team of biologists and mathematicians computed the probability of the mutations required to return the bacteria to their harmless state. They tested up to six antibiotics in rotation at a time and computed thousands of measurements to find the cycling strategies most likely to reverse the development of antibiotic resistance.

The researchers eventually concluded that the Time Machine software and its mathematical foundations are a promising way to quickly and precisely generate the antibiotic cycling plan most likely to reverse resistance. Simply put, the software is a matchmaker — it computes which antibiotic goes with which mutation at which point in time to best manage the evolution of resistance and, ultimately, cure the patient.
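The paper's actual model is built on adaptive landscapes, but the search it describes can be sketched simply: give each antibiotic a transition matrix between genotypes, chain the matrices for a candidate cycle, and keep the cycle that maximizes the probability of landing back at the wild type. Everything below (the genotype count, the probabilities, the names) is invented for illustration; it is not the Time Machine code.

```python
import random
from itertools import product

random.seed(0)

# Toy setup: 4 genotypes, index 0 is the antibiotic-susceptible wild type.
# Each antibiotic induces its own genotype-to-genotype transition matrix
# (rows sum to 1). The numbers are random stand-ins, not measured rates.
def random_transition_matrix(n=4):
    matrix = []
    for _ in range(n):
        row = [random.random() for _ in range(n)]
        total = sum(row)
        matrix.append([x / total for x in row])
    return matrix

ANTIBIOTICS = {name: random_transition_matrix() for name in "ABC"}

def prob_wild_type(cycle, start=3, wild_type=0, n=4):
    """Probability of ending in `wild_type` after applying the antibiotics
    in `cycle` to a population that starts entirely in genotype `start`."""
    state = [0.0] * n
    state[start] = 1.0
    for drug in cycle:
        m = ANTIBIOTICS[drug]
        # Row vector times transition matrix: one round of selection.
        state = [sum(state[i] * m[i][j] for i in range(n)) for j in range(n)]
    return state[wild_type]

# Exhaustively score every cycle of length 1 to 3 and keep the best.
best = max(
    (cycle for k in range(1, 4) for cycle in product("ABC", repeat=k)),
    key=prob_wild_type,
)
print(best, round(prob_wild_type(best), 3))
```

The real study scores cycles of up to six drugs against measured growth rates rather than random matrices, but the shape of the computation — chain probabilities, rank cycles — is the same.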

And Barlow reminded me that this type of precision is not necessarily the norm in medicine. Even though incorrect antibiotic prescribing is a main driver of antibiotic resistance, the current dearth of rapid antibiotic-resistance diagnostics means physicians are often left with little choice but to start treating an ill patient before seeing any lab results. However, antibiotic cycling can help guide that process toward a better, more ordered treatment plan — “it’s an approach that can be accessible to any hospital and can help empiric therapy be more reliable,” Barlow said.

In the PLOS ONE study, authors Barlow, Kristina Crona, Portia Mira, Devin Greene, Juan Meza and Bernd Sturmfels write:

Efforts to remove resistance genes from clinical environments by either discontinuing or reducing the use of specific antibiotics for some period of time, either through general reduction of antibiotic consumption or periodic rotations of antibiotics (cycling) have not worked in any reliable or reproducible manner; indeed it would have been surprising if they had worked.

Since antibiotic resistance is unavoidable, it only makes sense to accept its inevitability and develop methods for mitigating the consequences. One reasonable approach is to rotate the use of antibiotics. This has been implemented in many ways and there are recent studies to model the optimal duration, mixing versus cycling, and how relaxed antibiotic cycles may be and still function as planned. However, none of those models have focused on developing a method for designing an optimal succession of antibiotics.

Which is exactly what the Time Machine software attempts to do. As the study mentioned, antibiotic cycling has been studied before, but it was the marriage of evolutionary biology and mathematics that made the difference, Barlow said.

“We took the theory of adaptive landscapes and used it as a foundation for organizing our data and with that style of organized data, a team of mathematicians was able to develop a model,” Barlow said. “It’s just one of those wonderful things that just falls into place.”

Study co-author Kristina Crona, an assistant professor in the Department of Mathematics and Statistics at American University, noted that the antibiotic cycling problem perfectly illustrates the role of mathematics in biology. Unfortunately, she told me, the two fields don’t have a strong tradition of working together.

“How can we use antibiotics as well as possible? What sequence of drugs would be best? How do we find the best treatment plan?” Crona asked. “These are all very quantitative problems. If we can move medicine and biology closer to mathematics, I think it would produce all kinds of advantages.”

Of course, the ultimate goal is to create a cycling protocol that physicians and health care providers can easily use in a clinical setting, Barlow said. To begin that translation process, Barlow and colleagues are collaborating with a hospital in Merced to collect resistant isolates and analyze the relationship between the development of antibiotic resistance and the antibiotics being prescribed by staff.

“I’ve gotten so many great responses (to the study), it’s almost overwhelming,” Barlow told me. “This idea that we’ve been working on for so long — to see it progress to a level that people can understand and get excited about is so rewarding.”

To read a full copy of the study, visit PLOS ONE. To learn more about antibiotic resistance, visit Keep Antibiotics Working.

Kim Krisberg is a freelance public health writer living in Austin, Texas, and has been writing about public health for more than a decade.

26 May 18:50

NASA thought Sally Ride needed 100 tampons for 1 week “just to be safe.” From what?

by Amanda Taub

NASA does math in orders of magnitude...

Sally Ride — physicist, first American woman in space, and heroine of my youth — would have been 64 today.

Ride's accomplishments were extraordinary. But as Ann Friedman pointed out in her excellent profile of the astronaut, being first also had its burdens. Including, for instance, the need to disabuse NASA's male scientists of their spectacularly wrong expectations with regard to tampons:

[In preparation for Ride's trip aboard the Space Shuttle] Tampons were packed with their strings connecting them, like a strip of sausages, so they wouldn’t float away. Engineers asked Ride, "Is 100 the right number?" She would be in space for a week. "That would not be the right number," she told them.

Apparently, the male engineers then replied that "we just want to be safe."

That's right. NASA was under the impression that Sally Ride's seven days in space would require 100 tampons. We have many, many questions about the logic behind that theory:

  1. NASA just wanted to "be safe" from what exactly?
  2. What did they think was going to happen up there?
  3. Were the engineers under the impression that tampons were a defense against space monsters?
  4. Or bears?
  5. Or was NASA concerned that tampons were a new and unreliable technology?
  6. Was NASA worried that some of the 100 tampons might not be effective for their intended use?
  7. Were some of the tampons intended as fail-safe measures to ensure that any individual tampon failure would be an isolated event, rather than a total systemic collapse?
  8. Real talk: did the NASA engineers know what tampons are for?
  9. Seriously, were they also going to send a couple of liters of O-negative and an IV up to space, to replenish Ride after all the hemorrhaging they apparently thought she would be doing in her spare time?
  10. So are we to conclude that the space program's legendarily selective physical standards just said ¯\_(ツ)_/¯ in the section on "women's health"?
  11. Or were they under the impression that women like to try and discard multiple tampons in a row, Goldilocks-style, before they find one that is just right?
  12. When Sally Ride joined the space program, did anyone tell her that one of her duties would be explaining women's hygiene products to her blissfully ignorant male colleagues?
  13. Have we properly honored her for the contributions to gender equality those conversations would have constituted?
  14. Shouldn't her Google Doodle recognize her achievement in having uncomfortable conversations about hygiene products, thus blazing a path for future women to head into orbit?
  15. As in, "Sally Ride: physicist, astronaut, educator, and savior of future generations from who knows how many @O*#%# stupid conversations about how periods work?"
  16. Whom do we talk to about inscribing that description on a commemorative plaque?
  17. Or at least a certificate suitable for framing?

27 May 17:23

From The Atlanta Journal-Constitution

by Joe Jervis
(Tipped by JMG reader Mykel)

UPDATE: Urban Dictionary has it too.

27 May 12:30

The most predictable disaster in the history of the human race

by Ezra Klein

I. Bill Gates is an optimist.

Ask him, and he'll tell you himself. "I'm very optimistic," he says. See?

And why shouldn't Bill Gates be an optimist? He's one of the richest men in the world. He basically invented the form of personal computing that dominated for decades. He runs a foundation immersed in the world's worst problems — child mortality, malaria, polio — but he can see them getting better. Hell, he can measure them getting better. Child mortality has fallen by half since 1990. To him, optimism is simply realism.

But lately, Gates has been obsessing over a dark question: what's likeliest to kill more than 10 million human beings in the next 20 years? He ticks off the disaster movie stuff — "big volcanic explosion, gigantic earthquake, asteroid" — but says the more he learns about them, the more he realizes the probability is "very low."

Then there's war, of course. But Gates isn't that worried about war because the entire human race worries about war pretty much all the time, and the most dangerous kind of war, nuclear war, seems pretty contained, at least for now.

But there's something out there that's as bad as war, something that kills as many people as war, and Gates doesn't think we're ready for it.

"Look at the death chart of the 20th century," he says, because he's the kind of guy who looks at death charts. "I think everybody would say there must be a spike for World War I. Sure enough, there it is, like 25 million. And there must be a big spike for World War II, and there it is, it's like 65 million. But then you'll see this other spike that is as large as World War II right after World War I, and most people would say, 'What was that?'"

"Well, that was the Spanish flu."

II. The most predictable threat in the history of the human race

No one can say we weren't warned. And warned. And warned. A pandemic disease is the most predictable catastrophe in the history of the human race, if only because it has happened to the human race so many, many times before.

In a 1990 paper on "The Anthropology of Infectious Disease," Marcia Inhorn and Peter Brown estimated that infectious diseases "have likely claimed more lives than all wars, noninfectious diseases, and natural disasters put together." Infectious diseases are our oldest, deadliest foe.

And they remain so today. "In a good year, flu kills over 10,000 Americans," says Dr. Thomas Frieden, director of the Centers for Disease Control and Prevention. "In a bad year, it kills over five times that. If we have a pandemic, it will be much worse. People think the H1N1 flu wasn’t so bad. But more than 1,000 American kids died from H1N1!"

Each new year seems to bring its own sensational candidate for the next pandemic. In 2014, of course, it was the Ebola outbreak — which killed more than 10,000 people and sent much of America into hysterics. This year, a particularly infectious form of bird flu has ripped through 14 states, killing or forcing the slaughter of 39 million birds. Public health authorities are forcing the grisly massacre because the more birds around for the flu to infect, the more chances the flu has to mutate and reassemble itself into a form that can infect humans.

It isn't just the news that carries warnings. The culture is thick with our fear of infectious disease. Zombies, for instance, are everywhere — World War Z was a best-selling book and a blockbuster movie; The Walking Dead has become one of television's most popular shows. And zombies are a metaphor for infectious disease.

"When I was a kid, I watched AIDS go from an obscure, arcane curiosity to a global pandemic," Max Brooks, author of World War Z, told the CDC. "What drove me crazy was that unlike the Black Death or the Spanish Influenza, AIDS could have simply been stopped by a pamphlet: A couple dos and don’ts, a little education and clear-headed leadership and it might have ended up as a footnote in a virologist’s medical text. If that’s not zombies, I don’t know what is."

The CDC has even released a document titled "Preparedness 101: Zombie Apocalypse." The point, obviously, isn't that the CDC expects a zombie apocalypse around the corner; it's that since a zombie apocalypse is simply an infectious disease apocalypse, talking about how to avoid becoming a zombie is a safe way for people to talk about how to protect themselves from pandemic disease.

"When confronted with real anxiety, a lot of people shut down," Brooks said. "For them, planning for an actual crisis is just too scary, too paralyzing to think about. Make it a zombie attack, though, then there’s some psychological padding."

Pandemic disease is something our culture thinks about, knows about, fears. It's so topmost on our minds and in our nightmares that we've created an elaborate metaphorical architecture so we can talk about it even with people who are too scared to talk about it. We think about it so much, it seems almost ridiculous that we aren't ready. But we're not. Not even close.

Just look what happened with Ebola.

III. The "luck" of the Ebola outbreak

Ron Klain was an odd choice for Ebola czar.

Klain entered the Obama administration as Vice President Joe Biden's chief of staff. This was, itself, notable: Klain was chief of staff to Vice President Al Gore, too, making him the only person to serve in that position for two different vice presidents.

He quickly proved himself an exceptional fixer for the Obama administration, with a mix of policy, political, and bureaucratic chops that everyone agreed was rare. And so when President Obama needed someone to coordinate the US government's response, he turned to Klain. And Klain did his job. After a few early, botched cases, the Ebola outbreak ended on American soil. Ebola became what Americans were used to it being: someone else's problem.

But talk to Klain today, and he doesn't sound like a guy exulting in victory. He sounds scared. He doesn't think Ebola showed that America's response can work. He thinks it showed how easily it could fail.

"You can’t use the word lucky or fortunate about something like Ebola that killed 10,000 people," Klain says. "But it was the most favorable scenario for the world to face one of these things. Ebola is very difficult to transmit. Everyone who is contagious has a visible symptom. It broke out in three relatively small countries that don’t send many travelers to the US. And those three countries have good relationships with America and were welcoming of Western aid."

"With a pandemic flu, the disease would be much more contagious than Ebola," Klain continues. "The people who are contagious may not have visible symptoms. It could break out in a highly populous country that sends thousands of travelers a day to the US. It could be a country with megacities with tens of millions of people. And it could be a country where sending in the 101st Airborne isn’t possible."

Ebola, Klain thinks, shows how unprepared the world was for a disease that it's known about for decades and that, comparatively speaking, spreads pretty slowly. A person infected with Ebola can be expected to pass the disease on to two people, barring effective countermeasures (epidemiologists call this the "reproduction number"). Two is not that high, as these things go. The SARS virus had a reproduction number of four. Measles has a reproduction number of 18.
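To see why those reproduction numbers matter, it helps to run the naive arithmetic: each generation of infection multiplies the case count by the reproduction number. The sketch below is a deliberately crude geometric series, not an epidemic model; it ignores immunity, interventions, and population limits, which is exactly why small differences in the reproduction number explode so quickly.

```python
def cases_after(r0, generations, index_cases=1):
    """Cumulative infections after `generations` rounds of spread, assuming
    every case infects exactly r0 new people, with no immunity and no
    countermeasures. A geometric series, not an epidemiological model."""
    total = new = index_cases
    for _ in range(generations):
        new *= r0
        total += new
    return total

# Five generations of spread from a single index case:
print(cases_after(2, 5))    # Ebola-like reproduction number of 2  -> 63
print(cases_after(4, 5))    # SARS-like reproduction number of 4   -> 1,365
print(cases_after(18, 5))   # measles-like reproduction number of 18 -> 2,000,719
```

Five generations take an Ebola-like disease from one case to a few dozen; the same five generations take a measles-like disease to two million, which is the gap Klain is worried about.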

What happens when the world faces a lethal disease we're not used to, with a reproduction number of five or eight or 10? What if it starts in a megacity? What if, unlike Ebola, it's contagious before the patient is showing obvious symptoms?

Past experience isn't comforting. "If you look at the H1N1 flu in 2009," Klain says, "it had spread around the world before we even knew it existed."

IV. How human beings have helped infectious disease

Behind Gates's fear of pandemic disease is an algorithmic model of how disease moves through the modern world. He funded that model to help with his foundation's work eradicating polio. But then he used it to look into how a disease that acted like the Spanish flu of 1918 would work in today's world.

The results were shocking, even to Gates. "Within 60 days it's basically in all urban centers around the entire globe," he says. "That didn't happen with the Spanish flu."

Airports: first you pack people into a tight space together to share germs, and then you send them all over the world. (Carl de Souza/AFP/Getty Images)

The basic reason the disease could spread so fast is that human beings now move around so fast. Gates's modelers found that about 50 times more people cross borders today than did so in 1918. And any new disease will cross those borders with them — and will do it before we necessarily even know there is a new disease. Remember what Ron Klain said: "If you look at the H1N1 flu in 2009, it had spread around the world before we even knew it existed."

Gates's model showed that a Spanish flu–like disease unleashed on the modern world would kill more than 33 million people in 250 days.

"We've created, in terms of spread, the most dangerous environment that we've ever had in the history of mankind," Gates says.
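The model Gates funded layers air-travel flows between cities on top of local outbreak dynamics, and its internals aren't public here; but the kernel such simulations build on is the textbook SIR (susceptible-infected-recovered) update. Below is a minimal single-population sketch with invented parameters, not Gates's actual model.

```python
def sir_step(s, i, r, beta, gamma):
    """Advance a basic SIR epidemic model by one day, with the population
    normalized to 1. beta is the transmission rate, gamma the recovery
    rate; their ratio beta/gamma is the reproduction number."""
    new_infections = beta * s * i
    new_recoveries = gamma * i
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)

# Seed one infection per 100,000 people and run 250 days at a
# reproduction number of 2 (beta/gamma = 0.5/0.25).
s, i, r = 1.0 - 1e-5, 1e-5, 0.0
peak = 0.0
for _ in range(250):
    s, i, r = sir_step(s, i, r, beta=0.5, gamma=0.25)
    peak = max(peak, i)
print(f"infected overall: {r:.0%}, peak simultaneously ill: {peak:.1%}")
```

Even this toy version shows the dynamic Gates describes: from a seed too small to notice, the epidemic saturates well within the 250-day window. The global models add one matrix on top of this, moving infected people between cities along air routes each day.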

V. Underdeveloped health systems threaten developed countries

The science fiction writer William Gibson has a good line: the future is already here, it's just not evenly distributed. And nowhere is that truer than in health care.

According to the World Health Organization, the United States spends more than $8,000 per person, per year, on health care. Eritrea spends less than $20. Traditionally, Americans think of that as Eritrea's problem. But if a highly infectious, highly lethal new disease emerges in Eritrea, and the world is slow to learn about it, then it will quickly become America's problem.

This is, of course, what happened with Ebola. If it had made its first appearance in the United States, it likely would have been caught, and contained, quickly. But as my colleague Julia Belluz wrote, the countries where the 2014 outbreak began "happen to be three of the poorest in the world, and it took them at least three months to even realize they were harboring an Ebola outbreak." By the time Ebola was recognized, it was already out of control — and so, for the first time, it made its way to American shores.

When I ask the CDC's Frieden what's needed to catch these diseases early, he doesn't hesitate. "The most effective way to protect people is basic public health infrastructure," he says. "That means laboratories for finding specimens, getting them tested, and discovering what's spreading. It means field epidemiologists. It means emergency operation centers. And you need to have that available day in and day out. If we've learned anything, it’s that you want an everyday public health system you can scale up for an emergency, not a system you only use in case of emergencies."

The good news is this kind of system isn't all that expensive. Basic public health infrastructure is fairly cheap — around a dollar per person, per year. "There’s no magic here," says Frieden. "In Uganda, you have motorcycle couriers picking up specimens from hundreds and hundreds of health-care centers all over the country. They then send them to centralized centers. The expense isn't huge."

The difficulty often isn't money; it's priorities. These aren't sexy investments. "It doesn’t cost nearly as much as building a fancy hospital in your capital," says Frieden, with evident frustration.

But if you can find the disease and test it, then modern technology really does come into play. We can rapidly decode the basic structure and pathways of new diseases in ways that were unimaginable even a few decades ago — and that means we can come up with a response much more rapidly.

That's the good news.

The bad news? "You need a government that works," sighs Frieden.

VI. "Are we sure [the WHO] can do better next time? No."

Pandemic infections present three basic problems of governance. The first is countries that don't want to admit they need international help because they don't want to admit they have a problem in the first place.

"Guinea did not want to declare an Ebola epidemic," Gates says, "because in terms of investors and travel, it's a death sentence."

And it wasn't just Guinea, or even just Ebola. As Michael Specter wrote in the New Yorker:

If SARS had been more contagious, it would have created the new millennium’s first grave public-health crisis. And yet, in 2002, after it first appeared, Chinese leaders, worried about trade and tourism, lied about the presence of the virus for months—insuring that it would spread. In 2004, when avian influenza first surfaced in Thailand, officials there displayed a similar reluctance to release information.

The second is countries that can't admit international help, either because the state is too weak and fragmented to effectively coordinate with international actors or because the state is hostile to the organizations that would need to come in and offer relief. Imagine an outbreak that begins in Syria right now, and you get the idea.

The third problem is that no one really trusts the efficacy of the international institutions that would most naturally coordinate the response.

There is no other way to say this: the World Health Organization's Ebola performance was a disaster. "The WHO’s slow response to Ebola has been universally condemned," reported the Guardian. "The director general’s committee — which can declare a public health emergency — was not convened until August, eight months after the first cases and five months after public warnings from Médecins Sans Frontières, whose doctors were on the front line."

Germany's Chancellor Angela Merkel is now leading an effort to reform the organization. But similar mistakes during the SARS and H1N1 crises prompted similar calls for WHO reform, and little has changed.

This isn't just an issue of bureaucratic incompetence. The WHO is underpowered for the problems it's meant to solve. About 75 percent of its funding comes from voluntary donations, and there's no mechanism by which it can quickly scale its efforts during an emergency. The WHO's member countries could fix this by giving the WHO more reliable, permanent funding — or even more reliable emergency funding mechanisms. But so far, no suggestions along those lines have gained much traction.

The result is that the WHO that will face the next major disease outbreak is likely to be quite similar to the WHO that faced Ebola, and H1N1, and SARS. As a senior US delegate to the World Health Assembly told Vox, "Are we sure [the WHO] can do better next time? No."

Whether through the WHO or some other mechanism, most experts agree that the world needs some kind of emergency-response team for dangerous diseases. But no one knows quite how to set up that team. "That's what we’re lacking in the global system — a battalion of people in white helmets," says Klain. "But who will own it? Control it? Pay for it? Deploy it? Those are the tricky things."

This is in stark contrast to war, which is not necessarily more deadly to the human race, but is much better planned for. "When you talk about war," Gates says, "there are all these rules about how the government can seize various ships. But when an epidemic comes along, who is supposed to survey the private capacity and go out there and grab all these things?"

Look at what happened during Ebola, Gates continues. "Where was the equivalent of the military reserve, where you get on the phone and you said to people, Now come! And they had been trained, and they understood how to work together. People who want to volunteer, do we pay them? What do we do with them after they come back, when people might have this fear that they've been exposed? Are employers going to take them back? What are the quarantine rules? It was completely ad hoc."

This is what's so maddening about the modern fight with epidemic disease. Unlike in past eras, humanity has the tools it needs to protect itself. But global travel has far outpaced global governance — or even global disease response. Diseases move much faster than governments. "This is the hole in the global system," Klain says, and no one really knows how to fix it.

27 May 17:19

How to turn a liberal hipster into a capitalist tyrant in one evening

via Burly. Sharing so my husband who uses Pandemic in the classroom can try this.

World Factory … how would you cope? Photograph: David Sandison

The choices were stark: sack a third of our workforce or cut their wages by a third. After a short board meeting we cut their wages, assured they would survive and that, with a bit of cajoling, they would return to our sweatshop in Shenzhen after their two-week break.

But that was only the start. In Zoe Svendsen’s play World Factory at the Young Vic, the audience becomes the cast. Sixteen teams sit around factory desks playing out a carefully constructed game that requires you to run a clothing factory in China. How to deal with a troublemaker? How to dupe the buyers from ethical retail brands? What to do about the ever-present problem of clients that do not pay? Because the choices are binary, they are rarely palatable. But what shocked me – and has surprised the theatre – is the capacity of perfectly decent, liberal hipsters on London’s south bank to become ruthless capitalists when seated at the boardroom table.

The classic problem presented by the game is one all managers face: short-term issues, usually involving cashflow, versus the long-term challenge of nurturing your workforce and your client base. Despite a public-address system blaring out, in English and Chinese, that “your workforce is your vital asset,” our assembled young professionals repeatedly had to be cajoled not to treat their workers like dirt.


And because the theatre captures data on every choice by every team, for every performance, I know we were not alone. The aggregated flowchart reveals that every audience, on every night, veers towards money and away from ethics.

Svendsen says: “Most people who were given the choice to raise wages – having cut them – did not. There is a route in the decision-tree that will only get played if people pursue a particularly ethical response, but very few people end up there. What we’ve realised is that it is not just the profit motive but also prudence, the need to survive at all costs, that pushes people in the game to go down more capitalist routes.”

In short, many people have no idea what running a business actually means in the 21st century. Yes, suppliers – from East Anglia to Shanghai – will try to break your ethical codes; but most of those giant firms’ commitment to good practice, and environmental sustainability, is real. And yes, the money is all important. But real businesses will take losses, go into debt and pay workers to stay idle in order to maintain the long-term relationships vital in a globalised economy.

Why do so many decent people, when asked to pretend they’re CEOs, become tyrants from central casting? Part of the answer is: capitalism subjects us to economic rationality. It forces us to see ourselves as cashflow generators, profit centres or interest-bearing assets. But that idea is always in conflict with something else: the non-economic priorities of human beings, and the need to sustain the environment. Though World Factory, as a play, is designed to show us the parallels between 19th-century Manchester and 21st-century China, it subtly illustrates what has changed.

A worker in a Chinese clothing factory Photograph: Imaginechina/Corbis

A real Chinese sweatshop owner is playing a losing game against something much more sophisticated than the computer at the Young Vic: an intelligent machine made up of the smartphones of millions of migrant workers on their lunchbreak, plugging digitally into their village networks to find out wages and conditions elsewhere. That sweatshop owner is also playing against clients with an army of compliance officers, themselves routinely harassed by NGOs with secret cameras.

The whole purpose of this system of regulation – from above and below – is to prevent individual capitalists making short-term decisions that destroy the human and natural resources it needs to function. Capitalism is not just the selfish decisions of millions of people. It is those decisions sifted first through the all-important filter of regulation. It is, as late 20th-century social theorists understood, a mode of regulation, not just of production.

Yet it plays on us a cruel ideological trick. It looks like a spontaneous organism, to which government and regulation (and the desire of Chinese migrants to visit their families once a year) are mere irritants. In reality it needs the state to create and re-create it every day.

Banks create money because the state awards them the right to. Why does the state ram-raid the homes of small-time drug dealers, yet call in the CEOs of the banks whose employees commit multimillion-pound frauds for a stern ticking off over a tray of Waitrose sandwiches? Answer: because a company has limited liability status, created by parliament in 1855 after a political struggle.


Our fascination with market forces blinds us to the fact that capitalism – as a state of being – is a set of conditions created and maintained by states. Today it is beset by strategic problems: debt-ridden, with sub-par growth and low productivity, it cannot unleash the true potential of the info-tech revolution because it cannot imagine what to do with the millions who would lose their jobs.

The computer that runs the data system in Svendsen’s play could easily run a robotic clothes factory. That’s the paradox. But to make a third industrial revolution happen needs something no individual factory boss can execute: the re-regulation of capitalism into something better. Maybe the next theatre game about work and exploitation should model the decisions of governments, lobbyists and judges, not the hapless managers.

Paul Mason is economics editor of Channel 4 News. Follow him @paulmasonnews. Read his blog here. World Factory runs at the Young Vic until 6 June.

26 May 14:16

Google Doodle Honors Sally Ride

by Joe Jervis

Youngest American in space (at the time).

26 May 16:00

No ID, No Idea, Part 21

by BD

a good public shaming story.

Retail | PA, USA

(The company that I work for has recently switched to an ‘ID All’ policy. We gave all customers more than two weeks’ notice in several ways: handouts, printouts at the bottom of the receipt, and a warning to every customer making a tobacco purchase that they would have to provide a photo ID starting on a certain date. Despite our best efforts to make the transition a smooth one, there are many customers, particularly regulars, who are not too happy with this. It has now been more than a month since the policy went into place and people are still complaining. The gentleman in this story appears to be in his late forties. I am also very petite and have always looked much younger than I am.)

Me: “Hello, sir, how are you doing today?”

Him: “Fine. Gimme [Brand cigarettes].”

Me: *grabbing that brand* “Okay, will that be all?”

Him: “Of course that’s all!”

Me: “All right, could I please see your ID?”

Him: “Are you f***ing kidding me? I’m old enough to be your grandfather!”

(I’m used to this type of reaction from our customers around his age and patiently start to explain our new policy. There are multiple signs all over the place stating that we ID all, as well as a stack of pamphlets right in front of him explaining the policy. All the while he is hurling abuse at me.)

Him: “How old do you have to be to buy cigarettes?”

Me: “PA requires you to be 18, though some states are 19.”

Him: “Well, I’m obviously older than that! Are you even old enough to be selling those things?”

Me: “There is no age requirement to sell tobacco products, although [Company] does require you to be 16 to work here. I am, however, 18, so I could legally purchase this with proper identification.”

Him: “There’s no f****** way you’re 18. When’s your birthdate?”

(I’ve had this question asked many times by customers who don’t believe I’m old enough to work at the store, so I usually just give it to them.)

Him: “Seriously? I thought you were like 12!”

Me: “So, you were willing to harass a 12-year-old over a policy that I have no control over, which has been in effect for six weeks, and which had a further two weeks’ notice given to all customers, despite the fact that there is ample signage everywhere, including right next to [Brand cigarettes he wanted to purchase], which states that we require identification from EVERYONE, all the while causing a disturbance and holding up all of the customers in line behind you, simply because you refuse to show your ID? Which, if I do not ask for and see it, I will lose my job and be required to pay a fine, and all of this because you’re offended that I asked for your ID? I know that you are over 18. Everyone here can clearly see that you are over 18! Now can I please just see your ID? If not, I cannot and will not sell you the cigarettes, and I will help the person behind you, who has been patiently waiting the entire time you’ve been causing this ruckus!”

(He meekly hands over his ID and pays for his cigarettes without saying another word. The next person in line steps up as he’s putting his change back into his wallet. This customer has his ID already out.)

Customer #2: *pleasantly* “Hello there! How are you doing today?”

Me: “I’m doing pretty good; thanks for asking! What can I get for you today?”

Customer #2: “I would like two boxes of [Brand], please. And of course you can have my ID!”

(The first customer left as quickly as possible, but not before glaring at Customer #2.)

No ID, No Idea, Part 20
No ID, No Idea, Part 19
No ID, No Idea, Part 18

25 May 04:00



Missed Doctor Pepper and Doctor Bombay

I'm staring at the "doctor" section, and I can't help but feel like I've forgotten someone.
25 May 15:13

Saturday Morning Breakfast Cereal - Why Do Parents Fight?


Hovertext: Bad muppets are WAY more realistic than humans with funny heads!

New comic!
Today's News:
21 May 19:10

Austin, Texas, Is Blowing Away Every Other Big City in Population Growth

by Jordan Weissmann

Yes! My town will double in population in 15 years.

Austin, Texas, keeps tearing along as the fastest-growing big city in the country. According to census figures released today, its population shot up 2.9 percent during the 12 months that ended in July 2014. Among the 50 largest American cities (and to be clear, these figures are for cities proper, not their wider metro areas), the next closest on the list was Denver, which managed 2.4 percent growth. Which is to say, no other major city even came close to Austin's pace of expansion, much as has been the case ever since the recession ended.
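(A quick aside on the compounding math, not from the article: at a constant annual growth rate, the standard doubling-time formula tells you how long a population takes to double. The sketch below applies it to the 2.9 percent census figure cited above, which works out to roughly 24 years.)

```python
import math

def doubling_time(annual_rate):
    """Years for a population to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + annual_rate)

# Austin's 2.9% annual growth, from the census release cited above
years = doubling_time(0.029)
print(round(years, 1))  # about 24 years
```

This assumes the growth rate holds steady, which it rarely does; the point is only to show how the compounding arithmetic works.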

Three quick takeaways after the graph:

Americans still love the sun

Overall, the vast majority of U.S. population growth is concentrated in the South and West. Likewise, the cities topping this list are mostly scattered through states like Texas, California, North Carolina, Florida, and Colorado (And did I say Texas? Let me say it one more time: Texas.). There are some prominent Eastern and Midwestern exceptions in Minneapolis; Washington, D.C.; and Columbus, Ohio. But the Sun Belt's lure of cheap housing and warm weather isn't losing its appeal to Americans. That said, Pacific boomtowns like Seattle and San Francisco are benefiting more from hot job markets, which have outweighed their miserably expensive real estate prices. 

America's fastest growing city is really a fast growing suburb

Austin is a delightful town with a growing number of high-rises and walkable neighborhoods packed with trendy restaurants and bars. But it's nearly impossible to survive there without a car, an enormous amount of the housing stock is made up of single-family homes, and it's not actually all that noticeable when you leave its city limits. Like many of the places topping the growth charts, much of it just feels like a large collection of 'burbs. Today on FiveThirtyEight, Trulia economist Jed Kolko points out that based on neighborhood density, Austin could be considered more than half suburban, as would other high-growth locales like Phoenix; Fort Worth, Texas; Charlotte, North Carolina; and Houston. So while cities are growing as political units (which is wonderful for their tax bases), that doesn't necessarily mean Americans living in them are opting for what most of us would consider urban lifestyles.

Only a few big cities are outpacing their wider regions

Given how indistinguishable some major cities are from the suburbs that ring them, I got to wondering: How many are actually growing faster than their wider metro area? Not many, it turns out. Among the top 50 cities, only 14 had grown faster than the state as a whole and grown at least 0.2 percentage points faster than their wider metro area (sadly, the census didn't include a margin of error on its growth estimates, so I wanted to leave a little room). Of that group, the star destinations were Portland, Oregon, and Miami, which are just about as far apart, geographically and spiritually/culturally/psychologically, as two cities could be.
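The filter described in that paragraph can be sketched in a few lines. The growth numbers below are illustrative placeholders, not the actual census estimates:

```python
# Keep cities growing at least 0.2 percentage points faster than their
# metro area -- the buffer compensates for the census publishing no
# margin of error on its growth estimates. Numbers are invented.
cities = [
    # (city, city growth %, metro growth %)
    ("Portland, OR", 2.3, 1.5),
    ("Miami, FL",    2.0, 1.4),
    ("Austin, TX",   2.9, 3.0),  # metro outgrowing the city proper
]

MARGIN = 0.2  # percentage-point buffer

outpacing = [name for name, city, metro in cities if city - metro >= MARGIN]
print(outpacing)  # ['Portland, OR', 'Miami, FL']
```

With real data you would also apply the article's other condition (city growth above the state's), but the buffered comparison is the interesting part.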

22 May 12:32

GALLUP: Americans Continue To Greatly Overestimate Percentage Of Gays In USA

by Joe Jervis

Was this sample taken in West Hollywood? 33% of people think more than 25% of Americans are gay or lesbian?

Via Gallup:
The American public estimates on average that 23% of Americans are gay or lesbian, little changed from Americans' 25% estimate in 2011, and only slightly higher than separate 2002 estimates of the gay and lesbian population. These estimates are many times higher than the 3.8% of the adult population who identified themselves as lesbian, gay, bisexual or transgender in Gallup Daily tracking in the first four months of this year. The stability of these estimates over time contrasts with the major shifts in Americans' attitudes about the morality and legality of gay and lesbian relations in the past two decades. Whereas 38% of Americans said gay and lesbian relations were morally acceptable in 2002, that number has risen to 63% today. And while 35% of Americans favored legalized same-sex marriage in 1999, 60% favor it today.
Only 9% of respondents estimated the LGBT population to be under 5%.
20 May 12:10

How America became the most powerful country on Earth, in 11 maps

by Max Fisher

We take it for granted that the United States is the most powerful country on Earth today, and perhaps in human history. The story of how that came to be is long, fascinating, complex — and often misunderstood. Here, excerpted in part from "70 maps that explain America," are maps that help show some of the key moments and forces that contributed to the US's rise as sole global superpower.


Because of a war that left North America vulnerable to British and American conquest

So much of America's power comes from its size: it is one of the largest countries on Earth by population and area, and is rich in natural resources and human capital. It is also in many ways an island nation; because it faces no major threats on its borders, it is freer to project power globally.

There was no reason that North America's borders had to become what they are. A key moment in how that happened came with the French and Indian War, at the time just a sideshow in the larger Seven Years' War in Europe. The war ended with France giving up its vast territory on the continent to Britain and Spain. Napoleon would seize back Louisiana and sell it to the US in 1803, but New France was lost forever. With the Spanish Empire already declining, the continent was left open to conquest from the British Empire and its successor, the United States.

Image credit: University of Maine


By stealing Native Americans' land for an entire century

Of course, North America was not empty when European explorers and settlers arrived — it was filled with diverse, long-established societies. They may well have become sovereign nation-states had the US not sought to purge them from their lands, deny them self-rule, and, once they had been reduced to a tiny minority, forcibly assimilate them and their land. These acts are the foundation upon which American dominance of North America, and thus American global power, was built.

This map begins by showing Native Americans' land in 1794, demarcated by tribe and marked in green. In 1795, the US and Spain signed the Treaty of San Lorenzo, carving up much of the continent between them. What followed was a century of catastrophes for Native Americans as their land was taken piece by piece. By the time the US passed the Dawes Act in 1887, effectively abolishing tribal self-governance and forcing assimilation, there was very little left.

Image credit: Sam B. Hillard/Sunisup


By taking land from Mexico in another war

American expansionism could only go so far. Upon Mexico's independence in 1821, it gained vast but largely unincorporated and uncontrolled Spanish-claimed lands from present-day Texas to Northern California. American settler communities were growing in those areas; by 1829 they outnumbered Spanish speakers in Mexico's Texas territory. A minor uprising by those American settlers in 1835 eventually led to a full-fledged war of independence. The settlers won, establishing the Texas Republic, which they voluntarily merged with the United States in 1845.

But Mexico and the US still disputed the Texas borders, and President James K. Polk wanted even more westward land to expand slavery. He also had designs on Mexico's California territory, already home to a number of American settlers. War began in 1846 over the disputed Texas territory, but quickly expanded to much of Mexico. A hard-line Mexican general took power and fought to the bitter end, culminating in the US invading Mexico City and seizing a third of Mexico's territory, including what is now California, Utah, Nevada, Arizona, New Mexico, and Texas. Had the war gone differently, or had Polk not sought these Mexican lands, the US would today be a much smaller country — and perhaps with no Pacific coast — making it less powerful globally, and particularly in the increasingly important Pacific region.

Image credit: Kaidor/Wikipedia


By choosing to become a European-style imperial power

If there were a single moment when the US became a global power, it was the war with Spain. The Spanish Empire had been crumbling for a century, and there was a ferocious debate within the US over whether America should become an imperial power to replace it. This centered on Cuba: pro-imperialists wanted to purchase or annex it from Spain (pre-1861, the plan was to turn it into a new slave state); anti-imperialists wanted to support Cuban independence.

In 1898, Cuban activists launched a war of independence from Spain, and the US intervened on their side. When the war ended in Spanish defeat, US anti-imperialists blocked the US from annexing Cuba, but pro-imperialists succeeded in placing it under a quasi-imperialist sphere of influence; the US base at Guantanamo Bay is a relic of this arrangement. The war also ended with the US taking three other Spanish possessions: Puerto Rico, Guam, and the Philippines, a massive and populous island nation in the Pacific. The US had become a European-style imperial power. While this experiment in colonialism was short-lived and controversial at home, it began America's role as a major global power.

Image credit: Anand Katakam


Through colonialism in the Pacific — and by stealing Hawaii

America's brief experiment with overt imperialism came late in the game, and mostly focused on one of the last parts of the world carved up by Europe: the Pacific. This began in Hawaii, then an independent nation. American businessmen seized power in an 1893 coup and asked the US to annex it. President Cleveland refused to conquer another nation, but when William McKinley took office he agreed, absorbing Hawaii, the first of several Pacific acquisitions. Japan soon entered the race for the Pacific and seized many European-held islands, culminating in this 1939 map, two years before America joined World War II.

Image credit: Emok


Because World War I devastated Europe — and not the US

For centuries, the world had been divided among several competing global powers. No one country had hope of becoming the sole global superpower in such a system. World War I was the beginning of the end of that era. These six dots represent not just the major participants in the first World War, but the countries that, at the time, were the world's great powers. A seventh great power, the Ottoman Empire, was dismantled outright as a result of the war. (China, perhaps another great power, had been declining for some time.) As you can see, the destruction of the war and the massive war debts absolutely devastated the economies of the great powers — except, that is, for the United States and the still-mighty British Empire.

Image credit: Stephen Broadberry/Mark Harrison


Because World War II devastated Europe and Asia

It is impossible to fully capture the toll of the second world war in any one metric, but this map of military deaths can serve as a telling shorthand. While the war was terribly costly for all involved, the human cost was disproportionately felt by the two primary Axis powers — Germany and Japan — and particularly by the Soviets and Chinese, as well as by other countries in Eastern Europe and East Asia caught in the war machines. These military deaths merely hint at the much larger death toll in both continents from war, famine, and genocide, as well as economic and ecological devastation. While Americans paid dearly, as well — enduring the deaths of 400,000 military personnel — the US came out of the war far more powerful by virtue of everyone else's decline.

Image credit: Tyson Whiting


Because European colonialism collapsed — but not the American or Russian empires

This animated map showing the rise and fall of European (as well as Japanese and Ottoman) imperialism is fascinating all the way through, but things get really interesting from 1914 through the end. In just a few years after World War II, the centuries-long project of European colonialism collapses almost entirely. The reasons for this were many: the rise of independence movements in Latin America, then in Africa and Asia; the collapse of European economies that drew them back home; and, with postwar colonial misadventures like the 1956 Suez Crisis, a sense that the new world order was not going to tolerate colonialism anymore. In any case, the world was left with two enormous land empires that happened to have European roots: the United States and the Soviet Union.

Image credit: Asuros


By dividing up the world in the Cold War

After the world wars and the end of colonialism, the global system went from many competing powers to exactly two: the US and the Soviet Union. Both had competing ideologies, competing interests in Europe and Asia, and deep mutual distrust. While that might have normally led to war, the horrifying power of nuclear weapons kept them from fighting outright. Instead, the US and Soviet Union competed for global influence.

American and Soviet fears of a global struggle became a self-fulfilling prophecy: both launched coups, supported rebellions, backed dictators, and participated in proxy wars in nearly every corner of the world. Both built up systems of alliances, offshore bases, and powerful militaries that allowed each to project power across the globe.

By 1971, the US and the Soviet Union had settled into a stalemate; this map shows the world as it had been utterly divided. In 1979, the Soviets invaded Afghanistan; a year later, Ronald Reagan ran for president, promising to end the détente and defeat the Soviet Union. The Soviet Union, along with many of its trappings of global power, disintegrated — leaving the United States with a vast global architecture of military and diplomatic power that was suddenly unchallenged.

Image credit: Minnesotan Confederacy


Because Europe unified under American-dominated NATO

In 1948, the Soviet Union blockaded West Berlin, cutting it off from western Germany. The next year, the powers of Western Europe joined with the US and Canada in signing a collective defense pact — the North Atlantic Treaty Organization — meant to deter Soviet aggression and counterbalance the Soviet Union in Europe. It expanded during the Cold War to include virtually every European country west of the Soviet bloc. This may have staved off another war in Europe by pledging that the US would defend any member as it would its own soil. It also left Western Europe, once full of independent powers that jostled against one another and against the United States, unified against a common threat — and led by its most powerful member, the United States.

That dynamic did not really change after the Cold War ended. NATO expanded, acquiring new members in Central and Eastern Europe that still feared Russia. NATO ensures the stability of Europe and the security of its members, but at a cost: Europe's nations are now reliant upon, and thus yoked to, American power. This dynamic has played out in several places across the globe — South Korea and Japan are similarly tied to the US through security agreements and American military bases, for example — but it is most clearly pronounced in Europe.

Image credit: Arz


By outspending the next dozen countries combined on defense

Another way to show America's status as the sole global superpower is its military budget: larger than the next 12 largest military budgets on Earth, combined. That's partly a legacy of the Cold War, but it's also a reflection of the role the US has taken on as the guarantor of global security and the international order. For example, since 1979, the US has made it official military policy to protect oil shipments out of the Persian Gulf — something from which the whole world benefits. At the same time, other powers are rapidly growing their militaries. China and Russia in particular are rapidly modernizing and expanding their armed forces, implicitly challenging global American dominance and the US-led order.

Image credit: International Institute for Strategic Studies/Agence France-Presse


By virtue of America's scientific edge — and its democracy, creativity, and draw for immigrants

The US is so powerful for reasons other than its size, its military might, and its global system of alliances and bases — although those are certainly important. There is also America's tremendous advantage in scientific research, which both furthers and is an expression of its technological and economic lead on much of the rest of the world; it's also an indicator of innovation more broadly. An imperfect but revealing shorthand for that is the US's tremendous lead in Nobel prizes from its 1901 inception through 2013, when I made this map (the US has not lost its Nobel lead since then). The US has won 371 Nobels, mostly in the sciences; the US thus accounts for 4 percent of the world population but 34 percent of its Nobel laureates. This is the result of many factors: wealth, a culture and economy that encourage innovation, education, vast state- and private-funded research programs, and a political culture that has long attracted highly educated migrants. All of those factors contribute to American wealth and thus power in more ways than just Nobel prizes, but the sheer number of US laureates is a sign of the American advantage there.

Image credit: Max Fisher

20 May 14:07

The Internetest Safety Video On the Internet


2.5 stars out of 4. Not awesome but not dull either.

The Internetest Safety Video On the Internet  

By: Taking Off 

We see you’re on the Internet. That’s great, because we made a safety video starring the Internet. Check out one of our awesome flight crews with some special appearances by Internet notables. Check it out here.

21 May 13:00

New research could change our understanding of the origins of residential segregation

by Matthew Yglesias

The most precise and intensive study of residential segregation ever done delivers some surprising conclusions about its origins in the United States. Trevon Logan and John Parman (from Ohio State University and the College of William and Mary, respectively) find that between 1880 and 1940, the odds of a white person living next door to a black one (or vice versa) essentially fell by half, and did so uniformly across all regions:

(Logan & Parman analysis of census data)

Of course people knew about segregation before, but the uniformity here is interesting. The researchers find that it happened in areas where there were few African Americans and in areas where there were many. They find that it happened in places lots of African Americans were moving to and also in places lots of African Americans were fleeing. Last but by no means least, they find that it happened in big cities and in rural areas alike.

This suggests that a lot of research that has focused on the specific dynamics of specific kinds of places — restrictive covenants in certain towns, influxes related to the Great Migration — is a little bit misguided. Local circumstances were important, of course, but the trend was fully nationalized.

A small caveat: they were able to reach this conclusion by delving into the actual manuscript pages of each decennial census and counting, block by block, the race of each respondent. That's an amazingly cool piece of research, letting them see literally how many African Americans had a white next-door neighbor. But this isn't necessarily the last word on the best way to characterize the level of segregation in a community. It would be nice to see a few different cuts at how to construct the segregation index before we fully embrace the conclusion.
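For readers curious what another "cut" at a segregation index might look like, here is a minimal sketch of one standard alternative, the index of dissimilarity (not Logan and Parman's next-door-neighbor measure), with invented block counts:

```python
# Index of dissimilarity: ranges from 0 (the two groups are spread evenly
# across blocks) to 1 (complete segregation). Computed from each block's
# share of each group's citywide total. Block counts below are invented.
def dissimilarity(blocks):
    """blocks: list of (black_count, white_count) per census block."""
    total_b = sum(b for b, w in blocks)
    total_w = sum(w for b, w in blocks)
    return 0.5 * sum(abs(b / total_b - w / total_w) for b, w in blocks)

# A perfectly even two-block city vs. a completely segregated one
even       = [(50, 500), (50, 500)]
segregated = [(100, 0), (0, 1000)]
print(dissimilarity(even))        # 0.0
print(dissimilarity(segregated))  # 1.0
```

Different indices (dissimilarity, isolation, neighbor-based measures) can rank the same places differently, which is exactly why it would be worth seeing several before embracing the conclusion.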

21 May 10:40

Excel 2016: the elephant (not) in the room

by Jorge Camoes

Yes! Coming soon: Boxplots and waterfalls in Excel

Microsoft is apparently waking up and getting a bit more serious about visualization in Excel. Excel 2016 gets a few more chart types but, except for treemaps, nothing really new. We could already make them with a little work, but it’s nice to have them ready to use.

Here are the new chart types in Excel 2016:

Boxplots (box-and-whiskers)

Sunburst (I prefer to call them multi-level pies)

So, what’s the missing elephant? For me, the elephant is the ability to turn every single chart into small multiples. I would trade all those charts above for that functionality, any day.

What’s your missing elephant?

These images come from a presentation by Scott Ruble you can watch below:


Want to create better dashboards? Try the Excel Dashboard Tutorial.

Post from: Excel Charts Blog.


20 May 16:05

Pay dirt

ON MAY 19th the Los Angeles city council voted to raise its minimum wage, from $9 an hour to $15 an hour. Los Angeles is just one of many American cities using a rise in the minimum wage to try to address poverty and inequality. State and local governments are acting where the federal government has not. Just over half of American states have legal minimum wage rates above the federal minimum, which has stood at $7.25 an hour since 2009. Across most rich countries and American states, minimum wages tend to rise with income levels. Australia, with a real GDP per person of nearly $45,000, sets a minimum wage rate of $10.50 an hour—well above the $2.90 wage floor in Chile, where income per person is about half Australia's. Richer American states like Connecticut and California also tend to set higher minimum wage levels than poorer ones, like Florida.

Yet America as a whole is an outlier among advanced economies. Given the pattern across the rest of the OECD, a group of mostly rich countries, one would expect America, where GDP per person is $53,000, to pay a minimum wage around $12 an hour. That would mean a...

20 May 15:12

Record-High 60% of Americans Support Same-Sex Marriage


Republican support for marriage equality jumped from 30% to 37% in the poll from 2014 to 2015; overall support rose from 55% to 60%.

  • At 60%, support is up five percentage points from 2014
  • Gay marriage support reaches new highs in all political parties
  • One in four say candidates must share their views on the issue

WASHINGTON, D.C. -- Sixty percent of Americans now support same-sex marriage, as the Supreme Court prepares to rule on its constitutionality next month. This is up from 55% last year and is the highest Gallup has found on the question since it was first asked in 1996.

Do you think marriages between same-sex couples should or should not be recognized by the law as valid, with the same rights as traditional marriages?

Public support for the legality of same-sex marriage first reached a majority in 2011, when 53% supported it. Since then, support has ranged from 48% to 55%. The five-percentage-point increase in this year's Values and Beliefs poll, conducted May 6-10, is the largest year-to-year climb since 2011, when support rose by nine points.

Support for the legality of gay marriages in the U.S. has been a fast-changing trend. Just two decades ago, only 27% of Americans backed gay marriage, while 68% opposed. By 2005, the percentage in favor had increased by 10 points to 37%, and by 2010 it had reached 44%.

The record high in support comes roughly one month before the Supreme Court is to issue a ruling on the constitutionality of same-sex marriage. Americans, at this point, are not highly familiar with the case, with 42% following it closely -- well below the average 60% for news issues Gallup has measured over the past two decades. Attention to the case is similar among supporters and opponents of gay marriage.

New Highs in Support Across Party Spectrum

Though same-sex marriage continues to be politically divisive, support for its legal status has reached new highs among Americans of all political stripes -- with Democrats at 76% support, independents at 64% and Republicans at 37%.

Support for Same-Sex Marriage, by Party

In general, Democrats have been the most likely to say gay marriage should be legal, and Republicans have been the least supportive. Independents typically fall in between but side closer to Democrats than to Republicans.

From a long-range perspective, Democrats' support has increased the most, by 43 points since 1996. That was the year Democratic President Bill Clinton signed into law the now-overturned Defense of Marriage Act (DOMA), which barred federal recognition of state-granted gay marriages. Since then, the Democratic Party has undergone a complete makeover on the issue, and its members have been champions of marriage equality on the state level. Democrats' support for same-sex marriage first reached majority level in 2004, the same year Massachusetts became the first state to legalize it.

Republicans have consistently been the least likely to say same-sex marriage should be legal, and their support has increased the least since 1996, by 21 points. Between 1996 and 2009, no more than 20% of Republicans believed same-sex marriages should be legally valid. Since then, support has ranged from 22% to 31%, leading up to this year's high of 37%.

The party divide between Democrats and Republicans may hinge largely on the age groups that compose each party. Gallup has found that younger Americans are significantly more likely to lean Democratic, while older Americans skew Republican. And while majorities of each age group under 65 support marriage equality in 2015, those aged 65 and older are still more likely to oppose it. This is a new phenomenon for the 50- to 64-year-old group. Last year, just 48% of these middle-aged Americans supported legally recognizing gay marriage. But in 2015, this figure has climbed to a majority of 54%.

One in Four Americans Say Candidates Must Share Their Views on Gay Marriage

About a quarter of Americans (26%) say they vote for a political candidate solely based on his or her stance on gay marriage. Many others say it is but one of several important factors (43%), and about one in four say it is not a major issue influencing how they vote (26%).

The 26% of American adults who say a candidate must share his or her views on the issue of same-sex marriage is up from just 16% in 2004 and 2008.

Importance of Candidates' Stance on Gay Marriage in How Americans Vote

Those who are opposed to gay marriage are a good deal more likely to say that a candidate's stance on the issue can make or break whether that candidate receives their vote (37%) than those who are supportive of gay marriage (21%). And both are more likely to say the issue is a defining factor than they have been in the past.

Americans Who Say Candidate Must Share Views on Gay Marriage

On both ends of the political spectrum, this could make same-sex marriage a more salient issue in the 2016 election than it has been previously. While pro-gay marriage voters are more likely to hold a political candidate's feet to the fire than in the past, there is an even larger bloc of anti-gay marriage voters who could reject a candidate for espousing marriage equality.


National support for marriage equality has been fairly steady in its upward climb, and is more than double what it was in 1996, when Gallup first polled on the issue. A clear majority of Americans now support it. The increase -- seen across all major political parties -- comes in the midst of a string of legal victories for same-sex couples seeking to be treated equally under the law.

The Supreme Court may issue the final word on the constitutionality of same-sex marriage next month, although it's certainly possible that it may issue a narrow ruling on technical aspects of same-sex marriage law rather than say it should be legal in all states. With the ideological make-up of the court, it could decide that same-sex marriage is not a constitutionally supported right -- though this is a less likely outcome, and would go against prevailing public opinion.

While there has been uneven growth in support among Republicans versus Democrats, both groups have become more supportive. The remaining broad partisan divide, however, underscores how contentious the issue will continue to be as the 2016 election process unfolds.

As Hillary Clinton seeks the Democratic nomination in 2016, her support for gay marriage may be even more important as her party embraces the platform more closely than it has in the past. Clinton, like President Barack Obama, changed her stance in 2013 upon her exit from the State Department.

So far, none of the Republicans who have announced their 2016 candidacy support gay marriage, nor do any of the potential candidates who are expected to officially throw their hats in the ring. Former Florida Gov. Jeb Bush, who is widely viewed as a top 2016 contender, recently doubled down on his stance against gay marriage -- a move consistent with the opinions of rank-and-file Republicans who, despite showing increased support for gay marriage, still oppose it outright. While an anti-same-sex marriage position should not present a challenge for GOP candidates in the primary, it could be more challenging in a general election setting given majority support among all Americans. At the same time, same-sex marriage, like many other moral issues, tends to rank well behind issues such as the economy, terrorism and education when Americans name the issues that are most likely to influence their vote.

Survey Methods

Results for this Gallup poll are based on telephone interviews conducted May 6-10, 2015, on the Gallup U.S. Daily survey, with a random sample of 1,024 adults, aged 18 and older, living in all 50 U.S. states and the District of Columbia. For results based on the total sample of national adults, the margin of sampling error is ±5 percentage points at the 95% confidence level. All reported margins of sampling error include computed design effects for weighting.

Each sample of national adults includes a minimum quota of 50% cellphone respondents and 50% landline respondents, with additional minimum quotas by time zone within region. Landline and cellular telephone numbers are selected using random-digit-dial methods.
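For context, the quoted ±5-point figure can be compared with the textbook simple-random-sample calculation; the gap between the two reflects the design effects from weighting that the methods note mentions. A minimal sketch:

```python
import math

# Sketch of the quoted sampling error, assuming a simple random sample.
# Gallup's reported +/-5 points is larger than the textbook figure because,
# as the methods note says, it includes computed design effects for weighting.

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error (in proportion units) for a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

simple_moe_pts = margin_of_error(1024) * 100    # worst case, at p = 0.5
implied_design_effect = (5 / simple_moe_pts) ** 2

print(f"simple-random-sample MOE: +/-{simple_moe_pts:.1f} points")   # +/-3.1
print(f"implied design effect:    {implied_design_effect:.1f}")      # ~2.7
```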


20 May 11:43

GALLUP POLL: Support For Same-Sex Marriage Hits Record High Of 60%

by Joe Jervis
Via Gallup:
Sixty percent of Americans now support same-sex marriage, as the Supreme Court prepares to rule on its constitutionality next month. This is up from 55% last year and is the highest Gallup has found on the question since it was first asked in 1996. Public support for the legality of same-sex marriage first reached a majority in 2011, when 53% supported it. Since then, support has ranged from 48% to 55%. The five-percentage-point increase in this year's Values and Beliefs poll, conducted May 6-10, is the largest year-to-year climb since 2011, when support rose by nine points. Support for the legality of gay marriages in the U.S. has been a fast-changing trend. Just two decades ago, only 27% of Americans backed gay marriage, while 68% opposed. By 2005, the percentage in favor had increased by 10 points to 37%, and by 2010 it had reached 44%.
Support among Republicans has climbed seven points to 37%.
20 May 13:48

Fake study on changing attitudes: Sometimes a claim that is too good to be true, isn’t

by Andrew Gelman

A faked study cools off the hypothesis that same-sex marriage attitudes have changed because people know more out gay people.

(Graphs from LaCour and Green's article showing the claimed treatment effects.)

A few months ago we reported on a recently published article, “When contact changes minds: An experiment on transmission of support for gay equality,” by Michael LaCour and Donald Green, that was being talked about a lot in political science. LaCour and Green had claimed that a brief conversation by a political canvasser was enough to effect a huge change in people’s attitudes on gay rights (see graphs above).

As I wrote at the time:

What stunned me about these results was not just the effect itself—although I agree that it’s interesting in any case—but the size of the observed differences. They’re huge: an immediate effect of 0.4 on a five-point scale and, after nine months, an effect of 0.8.

A difference of 0.8 on a five-point scale . . . wow! You rarely see this sort of thing. Just do the math. On a 1-5 scale, the maximum theoretically possible change would be 4. But, considering that lots of people are already at “4” or “5” on the scale, it’s hard to imagine an average change of more than 2. And that would be massive. So we’re talking about a causal effect that’s a full 40% of what is pretty much the maximum change imaginable. Wow, indeed. And, judging by the small standard errors (again, see the graphs above), these effects are real, not obtained by capitalizing on chance or the statistical significance filter or anything like that.
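The arithmetic in the quoted passage can be checked in a few lines:

```python
# A direct check of the back-of-the-envelope arithmetic quoted above: how big
# is the reported nine-month effect relative to the plausible ceiling on a
# 1-5 response scale?

scale_min, scale_max = 1, 5
theoretical_max = scale_max - scale_min   # 4 points, floor to ceiling
plausible_max = 2.0                       # many respondents already sit at 4 or 5

nine_month_effect = 0.8
share_of_max = nine_month_effect / plausible_max
print(f"{share_of_max:.0%} of the maximum change imaginable")  # 40%
```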

Not to spoil the suspense or anything, but what really happened was that the data were faked by first author LaCour. Co-author Green (my colleague at Columbia) had taken his collaborator’s data on faith; once he found out, he firmly retracted the article. Details at Retraction Watch.

It would be easy to criticize Green for not looking at the data more carefully, but . . . that’s easy to say after the fact. In all my collaborations, I’ve never even considered the possibility that I might be working with a Diederik Stapel. And, indeed, in my previous post on the topic, I expressed surprise at the published claim but no skepticism.

Ironically, LaCour benefited (in the short term) by his strategy of completely faking it. If he’d done the usual strategy of taking real data and stretching out the interpretation, I and others would’ve been all over him for overinterpreting his results, garden of forking paths, etc. But, by doing the Big Lie, he bypassed all those statistical concerns. Note my comment above, “judging by the small standard errors (again, see the graphs above), these effects are real, not obtained by capitalizing on chance . . .” It was easy for me to be skeptical of the claim that subliminal smiley-faces influence political attitudes, given that the data that were used as evidence didn’t strongly support the hype (which, to be clear, was in any case not being hyped by the author of the study, who made only modest, reasonable claims from his experiment). But those were real data. In LaCour’s case, he was able to shape the data to get devastatingly strong evidence.

The message, I suppose, is to be aware of the possibility that someone’s faking their data, next time I see an effect that’s stunningly large.

The funny thing was, I did come up with a story as to how this implausible effect could’ve occurred:

Public opinion on same-sex marriage and other gay-rights issues has been very fluid during the past 15 years, especially so during the period of the survey. Lots of Californians were going to change their opinion to be more favorable to gay marriage, and average opinions were moving steadily in this direction. The experimental condition kicked people faster along this path. . . . That is, I see the effect of the treatment not as shifting people’s attitudes but rather as changing the timing of attitude shifts that were in the process of occurring.

When I shared this theory with Green, several months ago, he agreed that it made sense but he noted that the data did not show such a large shift in the control group, which suggested that not everyone was shifting in that way. In retrospect, of course, everything makes sense, given that the data were fabricated.

It’s an interesting aspect of science, that we can work hard and come up with stories for anything. Indeed, I published a political science paper several years ago that I later retracted, not because the data were faked, but because we had miscoded one of the variables, and it completely destroyed our analyses and conclusions. So these things happen. Another example would be the “cold fusion” fiasco from 1989, where physicists jumped to explain the stunning experimental results. Lots of theoretical ideas, then it turned out the claimed results never happened. But that’s the way theory goes: you stretch your ideas to explain the unexpected, then see what happens next.

A bit of good news from this case is how quickly it was resolved. The article came out in December 2014, the fraud was uncovered in May 2015, and the paper has already been retracted. And the incentives are all in line for us to be more careful about such claims in the future. The article appeared in the journal Science, which, along with Nature and PNAS, is sometimes called a “tabloid” because of its pattern of publishing dramatic but fishy claims (at least in social science; I can’t comment on the contents of these journals in biology, chemistry, physics, etc.). Maybe this event will shake them into being a little less about flash and more about substance.

19 May 14:52

Bad Comparisons with Statistics

by markcc

off by three orders of magnitude, no big deal.

When a friend asks me to write about something, I try to do it. Yesterday, a friend of mine from my Google days, Daniel Martin, sent me a link, and asked me to write about it. Daniel isn’t just a former coworker of mine, but he’s a math geek with the same sort of warped sense of humor as me. He knew my blog before we worked at Google, and on my first Halloween at Google, he came to introduce himself to me. He was wearing a purple shirt with his train ticket on a cord around his neck. For those who know any abstract algebra, get ready to groan: he was purple, and he commuted. He was dressed as an Abelian grape.

Anyway, Daniel sent me a link to this article, and asked me to write about the error in it.

The real subject of the article involves a recent twitter-storm around a professor at Boston University. This professor tweeted some about racism and history, and she did it in very blunt, not-entirely-professional terms. The details of what she did aren’t something I want to discuss here. (Briefly, I think it wasn’t a smart thing to tweet like that, but plenty of white people get away with worse every day; the only reason that she’s getting as much grief as she is is because she dared to be a black woman saying bad things about white people, and the assholes at Breitbart used that to fuel the insatiable anger and hatred of their followers.)

But I don’t want to go into the details of that here. Lots of people have written interesting things about it, from all sides. Just by posting about this, I’m probably opening myself up to yet another wave of abuse, but I’d prefer to avoid as much of that as I can. Instead, I’m just going to rip out the introduction to this article, because it makes a kind of incredibly stupid mathematical argument that requires correction. Here are the first and second paragraphs:

There aren’t too many African Americans in higher education.

In fact, black folks only make up about 4 percent of all full time tenured college faculty in America. To put that in context, only 14 out of the 321—that’s about 4 percent—of U.S. astronauts have been African American. So in America, if you’re black, you’ve got about as good a chance of being shot into space as you do getting a job as a college professor.

Statistics and probability can be a difficult field of study. But… a lot of its everyday uses are really quite easy. If you’re going to open your mouth and make public statements involving probabilities, you probably should make sure that you at least understand the first chapter of “probability for dummies”.

This author doesn’t appear to have done that.

The most basic fact of understanding how to compare pretty much anything numeric in the real world is that you can only compare quantities that have the same units. You can’t compare 4 kilograms to 5 pounds, and conclude that 5 pounds is bigger than 4 kilograms because 5 is bigger than four.

That principle applies to probabilities and statistics: you need to make sure that you’re comparing apples to apples. If you compare an apple to a grapefruit, you’re not going to get a meaningful result.

The proportion of astronauts who are black is 14/321, or a bit over 4%. That means that out of every 100 astronauts, you’d expect to find four black ones.

The proportion of college professors who are black is also a bit over 4%. That means that out of every 100 randomly selected college professors, you’d expect 4 to be black.

So far, so good.

But from there, our intrepid author takes a leap, and says “if you’re black, you’ve got about as good a chance of being shot into space as you do getting a job as a college professor”.

Nothing in the quoted statistic in any way tells us anything about anyone’s chances to become an astronaut. Nothing at all.

This is a classic statistical error which is very easy to avoid. It’s a unit error: he’s comparing two things with different units. The short version of the problem is: he’s comparing black/astronaut with astronaut/black.

You can’t derive anything about the probability of a black person becoming an astronaut from the ratio of black astronauts to astronauts.

Let’s pull out some numbers to demonstrate the problem. These are completely made up, to make the calculations easy – I’m not using real data here.

Suppose that:

  • the US population is 300,000,000;
  • black people are 40% of the population, which means that there are 120,000,000 black people.
  • there are 1000 universities in America, and there are 50 faculty per university, so there are 50,000 university professors.
  • there are 50 astronauts in the US.
  • If 4% of astronauts and 4% of college professors are black, that means that there are 2,000 black college professors, and 2 black astronauts.

In this scenario, as in reality, the percentage of black college professors and the percentage of black astronauts are equal. What about the probability of a given black person being a professor or an astronaut?

The probability of a black person being a professor is 2,000/120,000,000 – or 1 in 60,000. The probability of a black person becoming an astronaut is just 2/120,000,000 – or 1 in 60 million. Even though the probability of a random astronaut being black is the same as the probability of a random college professor being black, the probability of a given black person becoming a college professor is 1,000 times higher than the probability of a given black person becoming an astronaut.
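The post's made-up numbers can be run directly, which makes the units mismatch concrete:

```python
# The post's made-up numbers. Both conditional percentages are 4%, but they
# answer different questions, and the per-person chances come out three
# orders of magnitude apart.

population       = 300_000_000
black_population = 120_000_000            # 40% of the population (made up)
professors       = 1_000 * 50             # 1,000 universities x 50 faculty
astronauts       = 50

black_professors = professors * 4 // 100  # 4% of professors -> 2,000
black_astronauts = astronauts * 4 // 100  # 4% of astronauts -> 2

# Composition: P(black | professor) and P(black | astronaut) -- both 0.04.
# Chances: P(professor | black) and P(astronaut | black) -- very different.
p_professor = black_professors / black_population   # 1 in 60,000
p_astronaut = black_astronauts / black_population   # 1 in 60,000,000
print(p_professor / p_astronaut)                    # a factor of about 1,000
```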

This kind of thing isn’t rocket science. My 11-year-old son has done enough statistics in school to understand this problem! It’s simple: you need to compare like to like. If you can’t understand your statistics well enough to understand their units, you should probably try to avoid making public statements about statistics. Otherwise, you’ll wind up doing something stupid, and make yourself look like an idiot.

(In the interests of disclosure: an earlier version of this post used the comparison of apples to watermelons. But given the racial issues discussed in the post, that had unfortunate unintended connotations. When someone pointed that out to me, I changed it. To anyone who was offended: I am sorry. I did not intend to say anything associated with the racist slurs; I simply never thought of it. I should have, and I shouldn’t have needed someone to point it out to me. I’ll try to be more careful in the future.)

18 May 12:30

Everyone is missing the most important part of Louis C.K.'s SNL monologue

by Max Fisher

Comedian Louis C.K.'s monologue on this week's Saturday Night Live does not quite rise to the level of an Upworthy headline — he did not obliterate inequality; he won't change how you see poverty forever — but the first couple of minutes, discussing what he calls "mild racism," do make a decent point that is worth your time.

"I'm not racist. However, I do have mild racism," he says, explaining that he can't help but take mental note of people of color when he encounters them. He describes his initial reactions — approval at seeing a Chinese or Indian doctor, anxiety at seeing a young black man "unless he has a big smile on his face" — reactions that betray an unmistakable, knee-jerk racism.

He's talking about implicit racial bias: "when, despite our best intentions and without our awareness, racial stereotypes and assumptions creep into our minds and affect our actions," as my colleague Jenée Desmond-Harris explained.

Thirty years of neurology and cognitive psychology studies show that it influences the way we see and treat others, even when we're absolutely determined to be, and believe we are being, fair and objective.

The idea of implicit racial bias, no matter how well-established by empirical research, is still controversial. People don't like to think that they could be racist; they prefer to divide the world into a binary of "racist" or "not racist," with themselves in the latter category. But that makes it a lot harder to address the effects of implicit bias, which impact everything from hiring to police conduct.

Louis C.K., by teasing himself for his well-intentioned "mild racism" and explaining it as a product of the environment he grew up in, is making it a little less scary to acknowledge implicit bias. He reframes it so that people can be told about implicit bias without hearing an accusation they feel the need to deflect. That's a helpful step toward addressing the issue.

The rest of Louis C.K.'s monologue centered on a fairly ham-fisted Israel-Palestine metaphor and a weak bit on child molestation — the joke was that it's awkward to joke about child molestation and more awkward still to acknowledge that child molestation must be enjoyable for child molesters, ha ha — that was clearly designed to draw controversy and certainly did.


18 May 22:16

Twitter Exchange Of The Day

by Joe Jervis
15 May 21:00

He’s Never Finn-ished

by BD
Electronics Store | Helsinki, Finland

(A customer is buying a few items. Due to a misunderstanding that isn’t in any way my fault, he almost ends up buying the wrong model of a popular phone. I apologize nonetheless and the matter is resolved. Since the misunderstanding he’s had a chip on his shoulder. He is watching me very closely and trying to catch me making a mistake. To his disgruntlement, everything goes smoothly.)

Me: “…and there you go. Have a pleasant day.”

Customer: “Hmph… Boy, let me ask you a question. Where are you from?”

Me: “Helsinki.”

Customer: “No, no, no… I mean originally?”

Me: “Helsinki.”

Customer: “Stop being a smart a**! What country are you or your parents originally from?!”

(I’m a bit baffled by this question since I’m blond, have blue eyes, and light skin. I look pretty much like your stereotypical Finn.)

Me: “Finland… Um, what are you trying to ask, exactly?”

Customer: “Look here, brat, stop playing this game with me! I know you immigrants like to think of yourselves as Finns, but you are not and never will be. Stop dodging my question and answer me! Where is your family from?!”

Me: “Sir, please calm yourself. I am not playing games. As far as I know my family has always lived in Finland. What makes you doubt—”

Customer: “Impossible! That just can’t be true.”

Me: “Well, sure, if you are talking about the dawn of humanity, then everyone is from Africa, I guess. But my grandfather was actually a genealogy enthusiast, and he found out that our family has been living in Finland at least since the 17th century. That is pretty much as early as it is possible to find any written records in Finland.”

Customer: “No, you are lying!”

Me: “Okay, look at me then. Where do you think I am from?”

Customer: “That is not for me to know but for you to tell me! I am a customer!”

(A coworker is standing at another counter with his back to us. He is of Chinese descent but was born in Finland. Only the back of his head is visible to us and he has dyed his hair blond. He hasn’t heard my conversation with the customer.)

Customer: *shouting to my coworker* “Hey! You! Clerk over there!”

(My coworker turns to us, a little surprised by the shouting. The customer frowns as he sees my coworker’s face.)

Customer: *mumbling to himself* “… Oh, god, another one.”

Coworker: “Yes, sir, can I help you?”

Customer: *pointing at me* “What country is this boy from?”

Coworker: “Umm… Finland.”

Customer: “NO! You lying son of a… Where are YOU from?!”

Coworker: “Born and raised in Helsinki.”

(The customer screams incoherently and slams his hand on the counter.)

Customer: “You are all liars, thieves, and tax dodging lazy immigrants who should never have been let into this country! I am reporting you to the consumer authority and immigration officials! This will not stand, mark my words!”

(He storms off.)

Coworker: “What was that about?”

Me: “I have absolutely no idea. A hidden camera show or an episode of The Twilight Zone are the only explanations I can think of.”

(Luckily we never heard from him again.)

13 May 19:02

Welcome To The Year 2112

by Joe Jervis
Hey, he's only off by 97 years. But maybe this kind of math is what caused Wisconsin's $2B deficit.
12 May 12:50

For the love of God, rich people, stop giving Ivy League colleges money

by Dylan Matthews

A big gift to Yale makes one think about where to give money--and Ivy League schools look to be at the bottom of this writer's list.

Stephen Schwarzman — the billionaire CEO of private equity giant Blackstone who once compared the Obama administration's proposal to close the carried interest tax loophole to the Nazi invasion of Poland — has decided to give $150 million to Yale to build a performing arts center.

The primary beneficiaries of this will be Yale students, and Yale students are stupid rich. Not as rich as Schwarzman (who's worth $13.2 billion, per Bloomberg, of which he's donating a little more than 1 percent), but rich. Only 52 percent of Yale students receive financial aid from the university. Meanwhile, 100 percent of families making under $100,000, and 99 percent of families making between $100,000 and $200,000, qualified for aid. In other words, you need to make a ridiculously large amount of money to not qualify for aid. You have to be in the 1 percent or very close. And almost half of Yale students are just that rich.

Yale students are also smart. The middle range of SAT scores for the latest freshman class ranged from 2120 to 2390 out of 2400. For comparison, the average score nationwide is a little under 1500. The SATs are basically just an IQ test, and while IQ isn't everything, it is correlated with higher income, better health, and longer life expectancy — even after you control for socioeconomic factors. And IQ, of course, isn't something anyone chooses. Yale students just lucked into good genes and/or good childhood environments that, due to no hard work of their own, will pay considerable dividends.

Yale as an institution is also super-rich. Its endowment totaled $23.9 billion as of June 30, 2014, trailing only Harvard. As it turns out, the school can already afford performing arts facilities, and a whole lot of them at that. Here's the 654-seat University Theater, used both for undergraduate plays and by the School of Drama:

Yale University's University Theatre


Not big enough for ya? Try the 2,650-seat Woolsey Hall, used by the undergraduate orchestra and the School of Music:

Yale University Woolsey Hall


But thanks to Schwarzman the school will get yet another one.

The New York Times's Robin Pogrebin describes Schwarzman's contribution as an "act of philanthropy." It is not. Sure, it's not the absolute worst thing one could do with one's money. I suppose it's a bit better than literally piling $150 million in dollar bills together in one location and then setting them on fire, insofar as building a performing arts center employs more people than assembling a massive money pile would. It's definitely better than using the money to set up a private island upon which to hunt man for sport.

But it's hard to imagine a worse way to use the money that still entitles Schwarzman to a charitable tax deduction. Yale is not a charity. It is a finishing school that overwhelmingly serves children of wealth and privilege. Supporting its scientific and particularly biomedical research is worthwhile, but the school is already far richer than all but one of its peer institutions and has access to considerable federal funds in that area, as well. And, of course, Schwarzman isn't supporting Yale's biomedical research. He's giving its dancers a nicer stage upon which to pirouette.

Two kids in Kenya show they've swallowed their deworming pills. (Good Ventures / Innovations for Poverty Action)

Literally any other charity, save maybe Harvard, is a better choice. Schwarzman could give $150 million to distribute bednets in sub-Saharan Africa, a highly cost-effective way to save lives. He could give $150 million directly to poor people in Kenya and Uganda through GiveDirectly. He could give $150 million to deworming efforts that spare children ailments that can cause immense pain and poverty. He could give $150 million to the Open Philanthropy Project or the Gates Foundation or another group doing careful, rigorous work to determine the best ways to use charitable resources to make the world a better place. He could, in fact, do all of the above because he's crazy stupid rich.

Of course, even the most generous among the rich spend heavily on themselves. Bill Gates may hope to spend down his fortune by fighting HIV and malaria in the developing world, but he also found the money to buy his daughter a 228-acre horse farm with 121 stalls, a race track, and staff lodging for up to 32 people. And maybe that's how Schwarzman's profligacy is best interpreted. He's a Yale alum, and this donation clearly provides some kind of emotional benefit to him.

But it's not philanthropy. It's not helping people who need help, and it's obscene that Schwarzman is getting a massive tax write-off for it. Giving to Yale is not an act of altruism. It's a gigantic, immoral waste of money, and it's long past time we started treating it as such.

Correction: This post originally stated that GiveDirectly works in Tanzania; it's so far just in Kenya and Uganda. Still better than Yale.

12 May 17:32

There Is No ‘Blue Wall’

by Nate Silver

Counterfactual electoral thinking.

If you were browsing campaign coverage at this point in advance of the 1992 election, you’d be reading a lot about the Republicans’ impregnable “red wall.” OK — it wouldn’t have been called the “red wall” (the association of Republicans with red states and Democrats with blue states came about more recently). But you’d have been reading a lot about Republicans’ supposed “lock” on the Electoral College.

The argument was something like this. During the past six presidential elections, from 1968 through 1988, 21 states voted Republican every time. These included almost all states in the fast-growing West — most importantly, California and its trove of electoral votes — along with some wealthy, suburban states (Illinois, New Jersey and Virginia) and a couple of traditionally Republican states in New England (including Vermont).


Together, these states accounted for 191 electoral votes. They weren’t quite enough to clinch victory for the Republicans, the argument went, but they put the GOP at a substantial advantage. The Democrats, who’d won only the District of Columbia every time, had to win more than three-quarters of the electoral votes from the remaining, “competitive” states.

You probably know what happened next. In 1992, Bill Clinton won nine Republican “lock” states, containing 118 of the red wall’s 191 electoral votes, en route to a 370-168 overall Electoral College victory. Four of the GOP “lock” states — California, Illinois, New Jersey and Vermont — haven’t voted Republican since. Democrats obliterated the red wall.


Nowadays, of course, it’s become common to hear talk about the “blue wall” — the set of 18 states that, along with the District of Columbia, have voted for the Democrat in each of the most recent six presidential elections, from 1992 through 2012. Together, they represent 242 electoral votes. Many pundits, ignoring the lessons of history, claim the “blue wall” or some close variation of it puts the Democratic nominee (likely Hillary Clinton) at a substantial advantage for 2016.

The error that these commentators are making is in attributing the Democrats’ recent run of success to the Electoral College. In fact, the Electoral College has been a minor factor, if it’s helped Democrats at all, and one probably best ignored until the late stages of a close presidential race.

But wait. Wasn’t Barack Obama’s margin in the Electoral College in 2012 — 332 electoral votes, to Mitt Romney’s 206 — awfully impressive given that he won the popular vote by only a few percentage points?

Actually, it was pretty much par for the course. The nature of the Electoral College is to accentuate small margins in the popular vote; Obama’s electoral vote tallies have been fine, but historically ordinary.

In the chart below, I’ve plotted the past century’s worth of presidential elections (from 1916 to 2012). The horizontal axis shows the Democrat’s margin of victory or defeat in the popular vote and the vertical axis how many electoral votes he received. (Totals are prorated to 538 electoral votes, the current total.) Then I’ve drawn an “S”-shaped curve to show the long-term relationship.


Based on the past century’s worth of data, you’d expect a Democrat who won the popular vote by 3.9 percentage points — as Obama did against Romney — to win about 330 electoral votes. That pretty much exactly matches Obama’s 332.

And you’d expect a Democrat who won the popular vote by 7.3 percentage points, as Obama did in 2008 against John McCain, to claim about 380 electoral votes. Obama won 365 that year instead. By comparison, when F.D.R. won the 1944 popular vote by 7.5 percentage points, he won the Electoral College 432-99.
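That “S”-shaped relationship can be approximated with a simple logistic function. The `scale` parameter below is a hypothetical value chosen only so the curve passes near the two data points quoted above; it is not a fit to the full century of election results.

```python
import math

def expected_electoral_votes(dem_margin, scale=8.4):
    """Illustrative logistic curve mapping the Democrat's popular-vote
    margin (in percentage points) to expected electoral votes out of 538.
    A tied popular vote maps to 269, and large margins saturate toward
    winning nearly the whole map."""
    return 538 / (1 + math.exp(-dem_margin / scale))

print(round(expected_electoral_votes(3.9)))  # ~330, Obama's 2012 margin
print(round(expected_electoral_votes(7.3)))  # ~380, Obama's 2008 margin
```

The key property is the steep middle of the curve: small popular-vote margins near zero translate into large swings in electoral votes, which is why "impressive" electoral-vote totals usually just reflect ordinary popular-vote wins.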

So when commentators talk about the Democrats’ “blue wall,” all they’re really pointing out is that Democrats have had a pretty good run in presidential elections lately. And they have, if you conveniently draw the line at 1992 (it doesn’t sound so impressive to instead say Democrats have won five of the 12 elections since 1968). During that time, Democrats have won four elections pretty clearly, lost one narrowly and essentially tied the sixth. This has been evident from the popular vote, however. The one time the Electoral College really mattered — that was 2000, of course — it hurt the Democrats.

Republicans, in all likelihood, would have won by similar Electoral College margins if they’d done as well as the Democrats in the popular vote, putting all sorts of cracks in the blue wall. Suppose, for instance, that Romney, rather than Obama, had won the 2012 election by 3.9 percentage points. What would the map have looked like?


It would have looked pretty red. A 3.9-point Romney victory represents a 7.8-point swing from the actual result. So if the swing were distributed uniformly, Obama would have lost every state that he won by 7.8 percentage points or less. That means he’d have lost three “blue wall” states — Minnesota, Pennsylvania and Wisconsin — along with Colorado, Florida, Iowa, Nevada, New Hampshire, Ohio and Virginia.
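The uniform-swing arithmetic works by shifting every state's margin by the same national amount and seeing which states flip. A minimal sketch in Python, using illustrative placeholder margins rather than the official 2012 results:

```python
# Uniform swing: shift every state's Democratic margin by the same
# national swing and see which previously Democratic states flip.

# Illustrative 2012-style Democratic margins in percentage points
# (placeholder values for demonstration, not certified results).
state_margins = {
    "Minnesota": 7.7, "Pennsylvania": 5.4, "Wisconsin": 6.9,
    "Ohio": 3.0, "Florida": 0.9, "Georgia": -7.8,
}

def apply_uniform_swing(margins, swing):
    """Return each state's margin after a uniform national swing.
    A negative swing moves every state toward the Republican."""
    return {state: m + swing for state, m in margins.items()}

# A 3.9-point Romney win is a 7.8-point swing from a 3.9-point Obama win.
swung = apply_uniform_swing(state_margins, -7.8)
flipped = [s for s in state_margins
           if state_margins[s] > 0 and swung[s] <= 0]
print(flipped)  # every state Obama carried by 7.8 points or less flips
```

The elasticity-based alternative described below replaces the single `swing` constant with a per-state multiplier, so swingier states move more than polarized ones.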

An alternative technique is to assume that some states are more “elastic” than others, meaning that they contain more swing voters and are more responsive to national trends. Based on our previous calculations, for instance, New Hampshire is relatively elastic — its vote swings a lot — whereas Georgia (where there are a lot of black voters and a lot of conservative whites and not many voters in between) is less so.

But running the calculation that way changes the outcome of just one electoral vote. Romney would have narrowly won the rural 2nd Congressional District of Maine (Maine, like Nebraska, awards one electoral vote to the winner of each congressional district) in addition to his other spoils. That would have put Romney on 332 electoral votes — exactly the total that Obama finished with, as it happens.

Where you’d see more evidence of an impregnable “blue wall” is in the event of an extremely lopsided election. Suppose that Democrats had lost the 2012 election by 18.2 percentage points, as Walter Mondale did to Ronald Reagan. In 1984, Mondale won only 13 electoral votes — 10 from his home state of Minnesota, plus D.C.’s three. In contrast, I estimate that a Democrat losing by that margin now would still carry states totaling 104 electoral votes: California, New York, Maryland, Vermont, Hawaii, and D.C. That’s a reflection of the fact that the country is far more geographically polarized than it once was.

But this cuts both ways. If Obama had won by Reagan’s 1984 margin, Republicans would still have won Texas, Tennessee, Alabama, South Carolina, Kentucky, Louisiana, Oklahoma, Arkansas, Kansas, Mississippi, Utah, Nebraska, West Virginia, Idaho, North Dakota, South Dakota and Wyoming — for 136 electoral votes. Neither party’s slate is close to a majority, however, and they mostly offset each other. (In the chart below, I’ve run this calculation for all possible outcomes, up to a 30-point win for either party.)


What about in the event of an extremely close election, instead of a lopsided one? Another election as close as 2000, for instance?

Obama would probably have won such an election in 2012. My method has him winning the Electoral College 285-253 in the event of an exactly tied popular vote, for instance.

But it would have had to be very close indeed. If Obama had lost the popular vote by just 1 percentage point, for instance, I have him losing the election 279-259, as highly elastic states like New Hampshire and Colorado would have fallen out of his column.

The net impact of this is minor. Our Election Day forecast in 2012 estimated that there was about a 5 percent chance that Obama would win the Electoral College but lose the popular vote (and about a 1 percent chance that Romney would do so). So if you want to argue that Hillary Clinton’s chances of winning the popular vote next year are 50 percent but that her Electoral College chances are more like 53 percent or 55 percent instead, go ahead — that’s probably about what the “blue wall” amounts to. (And even that advantage is tenuous, possibly reflecting Obama’s superior turnout operation in swing states — an edge that Clinton might or might not replicate.)

Hey, look: I can get carried away, too. If the 2016 election turns out to be close, we’ll be sweating the small stuff by October and November. The difference between a 50 percent and a 55 percent chance of victory for Clinton or Marco Rubio or whomever because of Electoral College dynamics will seem like a pretty big deal.

But for now? The Electoral College just isn’t worth worrying about much. If you see analysts talking about the “blue wall,” all they’re really saying is that Democrats have won a bunch of presidential elections lately — an obvious fact that probably doesn’t have much predictive power for what will happen this time around.

I’m not saying Clinton is doomed. Rather, I think the “fundamentals” point toward her chances being about 50-50, and I wouldn’t argue vigorously if you claimed the chances were more like 60-40 in one or the other direction. But Clinton is no sort of lock, and if she loses the popular vote by even a few percentage points, the “blue wall” will seem as archaic as talk of a permanent Republican majority.

12 May 17:52

A Top Cardiologist Says A Diet Drug Maker Misled Patients And Investors


And in the “some drug companies are evil” column, we have this story.

A top cardiologist is accusing a drug maker with whom he worked closely of making misleading statements about its weight loss medicine and refusing to release data that would undermine its previous claims.

The pill, Contrave, was prescribed 117,000 times in its first three months on the market, a better launch than any obesity drug in the past decade. Things seemed to get even better in March, when Contrave’s maker, Orexigen Therapeutics, released data that claimed that the drug not only helped patients lose weight but also prevented heart attacks, strokes and deaths caused by cardiovascular disease by a stunning 41%, a statistically significant amount.

But Steven Nissen, chair of cardiology at the Cleveland Clinic and the lead researcher on the 9,000-patient study, immediately protested that the result was premature and untrustworthy. So did John Jenkins, the head of the Office of New Drugs at the Food and Drug Administration. Now, via a Cleveland Clinic press release and an interview with Forbes, Nissen, well known for his role in raising alarms about the safety of drugs like Merck’s Vioxx and GlaxoSmithKline’s Avandia, has released the data to back up that contention.

The supposed benefit almost disappeared as time passed. As the number of heart attacks, strokes, and cardiovascular deaths increased from 94 to 192, the 41% difference between those getting Contrave and those getting placebo shrank to just 12% and was no longer statistically significant. More evidence it may be meaningless: patients given Contrave were more likely to have chest pain and more likely to die from causes other than heart disease. Many patients who supposedly had a benefit had actually stopped taking the drug.

“Patients were misled, investors were misled,” says Nissen. “It is so critically important that investors and other people understand why early data in a trial are unreliable or unstable.”

Worse, Orexigen has sought to block the release of the new data. On March 26, the study’s executive committee voted unanimously that the study should be stopped and the data publicized. But for six weeks Orexigen repeatedly refused to approve a press release, Nissen says. Today, Orexigen and its marketing partner, Takeda, issued a release saying only that the study was stopped. The Cleveland Clinic’s release, unauthorized by the companies and filled with data, followed minutes later.

Orexigen did not immediately return multiple phone calls and emails requesting comment.

Contrave is a combination of two old drugs: bupropion, the active ingredient in the antidepressant Wellbutrin, and naltrexone, which is used to treat alcohol addiction. In one study, 42% of patients treated with Contrave lost at least 5% of their body weight, compared with 17% of patients treated with a placebo. But for many patients, the drug does not work; the average weight loss is just 4% better than a placebo.

Because bupropion boosts blood pressure, the FDA wanted to be sure that Contrave did not increase the risk of heart attacks or strokes. So it required Orexigen to conduct the study Nissen was to run, called LIGHT. Nissen and his group at the Cleveland Clinic were brought on in part because of his reputation for drug safety – if he proved the drug was safe, other doctors would be more likely to believe it.

The trial was designed to prove that Contrave caused no more than a 40% increase in heart attacks, strokes, and cardiovascular deaths (euphemistically called ‘cardiovascular events’ by doctors). It was expected to take until 2017.

But the FDA allowed Contrave onto the market sooner, using a regulatory mechanism that had originally been proposed by Nissen himself. Orexigen could peek at the data 25% of the way through and use that data to file with the FDA if it showed that Contrave was doing no worse than doubling the risk of a cardiovascular event. The deal, however, came with strings.

Nissen says that four Orexigen executives signed a 14-page document that said that the company would take extreme steps to make sure that only a carefully selected regulatory SWAT team would see the results. Specific areas of the company’s offices would be locked up to keep the results secret, and the team would not use Orexigen’s corporate email system to share them.

Something very different happened. According to FDA documents, more than 100 people saw the results, including top Orexigen officials, outside consultants, and lawyers. And, in a complete surprise, the results actually seemed to show that Contrave prevented cardiovascular events. The FDA chastised the company for allowing the data to leak, and said that the study could no longer be used to satisfy Orexigen’s regulatory requirement – the company would have to do a second study at an estimated cost of more than $100 million.

Instead of keeping the information quiet, Orexigen executives decided that they had a fiduciary duty to file for a patent based on the information. Nissen found out about the patent from Takeda executives, who had taken over the day-to-day management of the clinical trial from Orexigen. “I was livid,” he says. “I almost couldn’t believe it.”

When the patent was approved on March 3, Orexigen filed a document with the Securities and Exchange Commission that included the total number of cardiovascular events, and the preliminary estimate on the reduction in risk to patients. The company’s shares spiked 30%.

Even then, Nissen said, there were problems in the data that Orexigen was not disclosing to doctors, patients or investors. For one thing, it didn’t disclose that four more patients, a total of 29, had chest pain while on Contrave. But the more important issues had to do with the way clinical trials are analyzed.

Usually, researchers use a very tough standard, called intent-to-treat, for testing drug effectiveness. If a patient is randomly assigned to get a medicine, they are counted in the clinical trial even if they decide to stop taking it. Normally, this makes for a tougher test. Patients who stop taking a heart drug usually don’t get any benefit from it.

But in this case, probably because of the play of random chance, something different happened. In patients who stayed on Contrave or the placebo, there was a 30% reduction in the risk of heart attacks, strokes, or cardiovascular deaths. But in those who stopped taking the drug, the benefit was a much greater 53%. In other words, the results were better if patients were assigned to take Contrave, but then didn’t! The differences were dramatic. In the off-treatment group, there were 12 cardiovascular deaths in the placebo group and 1 in the Contrave group.

This isn’t surprising. When researchers take an early look at a clinical trial, the rules of statistical significance change: stopping the study early requires a far higher level of certainty that the result is real, because the more often you look at the data, the more likely you are to find the result you’re hoping for by chance.
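This peeking problem is easy to demonstrate by simulation. The sketch below assumes a null trial with no true effect — each cardiovascular event lands in the treatment or placebo arm like a coin flip — and is illustrative only; it is not the LIGHT trial's actual interim-analysis plan, which used formal stopping boundaries.

```python
import random

def simulate_peeking(n_events=200, looks=(50, 100, 200),
                     n_trials=2000, seed=1):
    """Simulate a null trial: each event falls in the treatment or
    placebo arm with probability 0.5, so any 'significant' difference
    is pure chance. Compare the false-positive rate when testing at
    every interim look vs. testing once at the final look."""
    random.seed(seed)
    any_look = 0    # trials significant at at least one look
    final_look = 0  # trials significant at the final look
    for _ in range(n_trials):
        treat = 0
        sig = []
        for i in range(1, n_events + 1):
            treat += random.random() < 0.5
            if i in looks:
                # two-sided z-test against an even 50/50 split of i events
                z = (treat - i / 2) / (0.5 * i ** 0.5)
                sig.append(abs(z) > 1.96)
        any_look += any(sig)
        final_look += sig[-1]
    return any_look / n_trials, final_look / n_trials

peek_rate, final_rate = simulate_peeking()
print(peek_rate, final_rate)  # peeking inflates the false-positive rate
```

With three unadjusted looks, the chance of at least one spurious "significant" result roughly doubles relative to the nominal 5 percent, which is why group-sequential trial designs demand far stricter thresholds at interim analyses.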

Nissen and the FDA’s Jenkins warned that the result was probably spurious.  “Step back and think for a second,” Jenkins told me then.  “We required this study because we’re concerned that Contrave may cause adverse cardiovascular events because of its effect on blood pressure and heart rate. So the likelihood that that drug is going to have an early benefit is highly unlikely.”

The FDA initially worried that the results of the study had leaked to 100 people, and decided not to accept it. But now they had been released to the entire world. Nissen says he had hoped to continue to the end of the trial anyway, but that his executive committee faced another problem: patients were having fewer heart attacks and strokes than expected. In order for the trial to complete, doctors would have to enroll another 5,000 patients. It was futile.

The executive committee voted unanimously to recommend that the trial be stopped and that the data through to the halfway mark be released immediately. They told Orexigen, and were shown the more mature data. Not only did the cardiovascular benefit seem to vanish, but there was a difference in deaths from other causes: 26 deaths on Contrave versus 19 on placebo. The difference was not statistically significant.

Nissen started working with Takeda to craft a simple press release that would contain the more mature data. Each time Takeda approved the release, Orexigen would reject it. The process went on for six weeks, and even continued as Orexigen conducted an earnings call during which executives said that no decision about the study had been made.

Finally, he decided that the Cleveland Clinic would simply issue its own release, without the companies’ approval, after it was announced that the trial had stopped. This morning, that happened.

12 May 10:23

The art of the deal

"LES FEMMES D'ALGER [Women of Algiers] (Version 'O')", dating from 1955, the finest Picasso left in private hands, sold for £115.2m ($179.4m) at Christie's in New York on May 11th. The seller was said to be a Saudi Arabian collector, who had hung it in his London townhouse ever since he bought it in 1997 for £20.5m. Less than 30 minutes later, Alberto Giacometti's "L'homme au doigt" sold for £90.8m, including commission and taxes, an auction record for any sculpture.
The Picasso was the most expensive painting ever to be sold at auction (the third Picasso to have held that record in as many decades). It cost over a quarter more than Francis Bacon's “Three Studies of Lucian Freud”, which was bought by Elaine Wynn, co-founder of the Wynn casino empire, for £89.4m in November 2013. Fine-art prices are extraordinarily high relative to those a few decades or centuries ago, though after adjusting for inflation the increase looks less dramatic. But inflation cannot explain the dramatic rise in prices at the top of the market that has occurred in the last decade or so—or indeed in the last 18...