Shared posts

17 Jul 15:21

A Short Guide to Hard Problems

by Kevin Hartnett
Dgfitch

get 2 kno ur complexity classes

How fundamentally difficult is a problem? That’s the basic task of computer scientists who hope to sort problems into what are called complexity classes. These are groups that contain all the computational problems that require less than some fixed amount of a computational resource — something like time or memory. Take a toy example featuring a large number such as 123,456,789,001. One might ask: Is this number prime, divisible only by 1 and itself? Computer scientists can solve this using fast algorithms — algorithms that don’t bog down as the number gets arbitrarily large. In our case, 123,456,789,001 is not a prime number. Then we might ask: What are its prime factors? Here no such fast algorithm exists — not unless you use a quantum computer. Therefore computer scientists believe that the two problems are in different complexity classes.
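To make the contrast concrete, here is a minimal Python sketch, my own illustration rather than anything from the article: a Miller-Rabin primality test answers the first question quickly even for enormous inputs, while the obvious factoring method, trial division, does work that grows exponentially with the number of digits.

import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin primality test: fast (polynomial in the number of digits of n)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False    # a is a witness that n is composite
    return True

def trial_division(n):
    """Naive factoring: the work grows exponentially in the number of digits of n."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(is_probable_prime(123_456_789_001))   # False: the number from the article is composite
print(trial_division(123_456_789_001))      # quick for 12 digits, hopeless for numbers thousands of digits long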

Many different complexity classes exist, though in most cases researchers haven’t been able to prove one class is categorically distinct from the others. Proving those types of categorical distinctions is among the hardest and most important open problems in the field. That’s why the new result I wrote about last month in Quanta was considered such a big deal: In a paper published at the end of May, two computer scientists proved (with a caveat) that the two complexity classes that represent quantum and classical computers really are different.

The differences between complexity classes can be subtle or stark, and keeping the classes straight is a challenge. For that reason, Quanta has put together this primer on seven of the most fundamental complexity classes. May you never confuse BPP and BQP again.


P

Stands for: Polynomial time

Short version: All the problems that are easy for a classical (meaning nonquantum) computer to solve.

Precise version: Algorithms in P must stop and give the right answer in at most n^c time, where n is the length of the input and c is some constant.

Typical problems:
• Is a number prime?
• What’s the shortest path between two points?

What researchers want to know: Is P the same thing as NP? If so, it would upend computer science and render most cryptography ineffective overnight. (Almost no one thinks this is the case.)


NP

Stands for: Nondeterministic Polynomial time

Short version: All problems that can be quickly verified by a classical computer once a solution is given.

Precise version: A problem is in NP if, given a “yes” answer, there is a short proof that establishes the answer is correct. If the input is a string, X, and you need to decide if the answer is “yes,” then a short proof would be another string, Y, that can be used to verify in polynomial time that the answer is indeed “yes.” (Y is sometimes referred to as a “short witness” — all problems in NP have “short witnesses” that allow them to be verified quickly.)

Typical problems:
• The clique problem. Imagine a graph with edges and nodes — for example, a graph where nodes are individuals on Facebook and two nodes are connected by an edge if they’re “friends.” A clique is a subset of this graph where all the people are friends with all the others. One might ask of such a graph: Is there a clique of 20 people? 50 people? 100? Finding such a clique is an “NP-complete” problem, meaning that it has the highest complexity of any problem in NP. But if given a potential answer — a subset of 50 nodes that may or may not form a clique — it’s easy to check. (A quick verification sketch follows this list.)
• The traveling salesman problem. Given a list of cities with distances between each pair of cities, is there a way to travel through all the cities in less than a certain number of miles? For example, can a traveling salesman pass through every U.S. state capital in less than 11,000 miles?
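Verification really is the easy part. Here is a minimal Python sketch, my own illustration rather than anything from the article, that checks a claimed clique in polynomial time:

def is_clique(friends, candidate):
    """friends: dict mapping each person to the set of their friends.
    candidate: a collection of people claimed to all be mutual friends.
    Runs in time roughly proportional to the square of the candidate's size."""
    people = list(candidate)
    for i in range(len(people)):
        for j in range(i + 1, len(people)):
            if people[j] not in friends.get(people[i], set()):
                return False    # found a pair who aren't friends, so not a clique
    return True

# Toy example: Alice, Bob and Carol are all friends; Dave only knows Alice.
friends = {
    "Alice": {"Bob", "Carol", "Dave"},
    "Bob": {"Alice", "Carol"},
    "Carol": {"Alice", "Bob"},
    "Dave": {"Alice"},
}
print(is_clique(friends, {"Alice", "Bob", "Carol"}))  # True
print(is_clique(friends, {"Alice", "Bob", "Dave"}))   # False: Bob and Dave aren't friends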

What researchers want to know: Does P = NP? Computer scientists are nowhere near a solution to this problem.


PH

Stands for: Polynomial Hierarchy

Short version: PH is a generalization of NP — it contains all the problems you get if you start with a problem in NP and add additional layers of complexity.

Precise version: PH contains problems with some number of alternating “quantifiers” that make the problems more complex. Here’s an example of a problem with alternating quantifiers: Given X, does there exist Y such that for every Z there exists W such that R happens? The more quantifiers a problem contains, the more complex it is and the higher up it is in the polynomial hierarchy.
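Written in standard logical notation, which the article keeps informal, that example problem has the shape

\[
\exists Y \;\; \forall Z \;\; \exists W \; : \; R(X, Y, Z, W),
\]

where R is a property checkable in polynomial time and Y, Z, W have length polynomially bounded in the size of X. NP is the special case with a single existential quantifier; each additional alternation of quantifiers climbs one level of the hierarchy.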

Typical problem:
• Determine if there exists a clique of size 50 but no clique of size 51.

What researchers want to know: Computer scientists have not been able to prove that PH is different from P. This problem is equivalent to the P versus NP problem because if (unexpectedly) P = NP, then all of PH collapses to P (that is, P = PH).


PSPACE

Stands for: Polynomial Space

Short version: PSPACE contains all the problems that can be solved with a reasonable amount of memory.

Precise version: In PSPACE you don’t care about time, you care only about the amount of memory required to run an algorithm. Computer scientists have proven that PSPACE contains PH, which contains NP, which contains P.

Typical problem:
• Every problem in P, NP and PH is in PSPACE.

What researchers want to know: Is PSPACE different from P?


BQP

Stands for: Bounded-error Quantum Polynomial time

Short version: All problems that are easy for a quantum computer to solve.

Precise version: All problems that can be solved in polynomial time by a quantum computer.

Typical problems:
• Identify the prime factors of an integer.

What researchers want to know: Computer scientists have proven that BQP is contained in PSPACE and that BQP contains P. They don’t know whether BQP is in NP, but they believe the two classes are incomparable: There are problems that are in NP and not BQP and vice versa.


EXPTIME

Stands for: Exponential Time

Short version: All the problems that can be solved in an exponential amount of time by a classical computer.

Precise version: EXP contains all the previous classes — P, NP, PH, PSPACE and BQP. Researchers have proven that it’s different from P — they have found problems in EXP that are not in P.

Typical problem:
• Generalizations of games like chess and checkers are in EXP. If a chess board can be any size, it becomes a problem in EXP to determine which player has the advantage in a given board position.

What researchers want to know: Computer scientists would like to be able to prove that PSPACE does not contain EXP. They believe there are problems that are in EXP that are not in PSPACE, because sometimes in EXP you need a lot of memory to solve the problems. Computer scientists know how to separate EXP and P.


BPP

Stands for: Bounded-error Probabilistic Polynomial time

Short version: Problems that can be quickly solved by algorithms that include an element of randomness.

Precise version: BPP is exactly the same as P, but with the difference that the algorithm is allowed to include steps where its decision-making is randomized. Algorithms in BPP are required only to give the right answer with a probability close to 1.

Typical problem:
• You’re handed two different formulas that each produce a polynomial that has many variables. Do the formulas compute the exact same polynomial? This is called the polynomial identity testing problem.
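A minimal sketch of why randomness helps here (mine, not Quanta's): evaluate both formulas at random points. If the polynomials differ and have modest degree, a random evaluation exposes the difference with high probability (the Schwartz-Zippel lemma), with no need to expand anything symbolically.

import random

def probably_same_polynomial(f, g, num_vars, trials=30, prime=2**61 - 1):
    """Randomized polynomial identity test, a BPP-style algorithm.
    f and g are black-box functions of num_vars integer arguments."""
    for _ in range(trials):
        point = [random.randrange(prime) for _ in range(num_vars)]
        if f(*point) % prime != g(*point) % prime:
            return False          # a witness that the formulas differ
    return True                   # no difference found: equal with high probability

# Example: (x + y)**2 versus its expansion (same polynomial, different formulas)
print(probably_same_polynomial(lambda x, y: (x + y) ** 2,
                               lambda x, y: x * x + 2 * x * y + y * y, 2))   # True
print(probably_same_polynomial(lambda x, y: (x + y) ** 2,
                               lambda x, y: x * x + y * y, 2))               # almost surely False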

What researchers want to know: Computer scientists would like to know whether BPP = P. If that is true, it would mean that every randomized algorithm can be de-randomized. They believe this is the case — that there is an efficient deterministic algorithm for every problem for which there exists an efficient randomized algorithm — but they have not been able to prove it.

05 Jul 21:11

SSC Journal Club: Dissolving The Fermi Paradox

by Scott Alexander
Dgfitch

WHY AM I SO ALONE??? Oh wait, science can explain.

I’m late to posting this, but it’s important enough to be worth sharing anyway: Sandberg, Drexler, and Ord on Dissolving the Fermi Paradox.

(You may recognize these names: Toby Ord founded the effective altruism movement; Eric Drexler kindled interest in nanotechnology; Anders Sandberg helped pioneer the academic study of x-risk, and wrote what might be my favorite Unsong fanfic)

The Fermi Paradox asks: given the immense number of stars in our galaxy, even a very tiny chance of aliens per star should yield thousands of nearby alien civilizations. But any alien civilization that arose millions of years ago would have had ample time to colonize the galaxy or do something equally dramatic that would leave no doubt as to its existence. So where are they?

This is sometimes formalized as the Drake Equation: think up all the parameters you would need for an alien civilization to contact us, multiply our best estimates for all of them together, and see how many alien civilizations we predict. So for example if we think there’s a 10% chance of each star having planets, a 10% chance of each planet being habitable to life, and a 10% chance of a life-habitable planet spawning an alien civilization by now, one in a thousand stars should have civilization. The actual Drake Equation is much more complicated, but most people agree that our best-guess values for most parameters suggest a vanishingly small chance of the empty galaxy we observe.
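For reference, the standard form of the Drake Equation, which the post does not write out, multiplies seven point estimates:

\[
N = R_{*} \cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L,
\]

that is, the rate of star formation, the fraction of stars with planets, the number of potentially habitable planets per such star, the fractions of those on which life, intelligence and detectable technology actually arise, and the average lifetime of a detectable civilization.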

SDO’s contribution is to point out this is the wrong way to think about it. Sniffnoy’s comment on the subreddit helped me understand exactly what was going on, which I think is something like this:

Imagine we knew God flipped a coin. If it came up heads, He made 10 billion alien civilizations. If it came up tails, He made none besides Earth. Using our one-parameter Drake Equation, we determine that on average there should be 5 billion alien civilizations. Since we see zero, that’s quite the paradox, isn’t it?

No. In this case the mean is meaningless. It’s not at all surprising that we see zero alien civilizations, it just means the coin must have landed tails.

SDO say that relying on the Drake Equation is the same kind of error. We’re not interested in the average number of alien civilizations, we’re interested in the distribution of probability over number of alien civilizations. In particular, what is the probability of few-to-none?

SDO solve this with a “synthetic point estimate” model, where they choose random points from the distribution of possible estimates suggested by the research community, run the simulation a bunch of times, and see how often it returns different values.
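Here is a minimal sketch of that idea in Python. The log-uniform ranges below are illustrative placeholders, not the literature-derived distributions SDO actually use, but they show how a healthy-looking mean can coexist with a sizable probability of an empty galaxy:

import random
import math

def sample_drake():
    """One draw of a toy Drake-style product. The ranges are invented for illustration,
    NOT the distributions from Sandberg, Drexler and Ord's paper."""
    def log_uniform(lo, hi):
        return 10 ** random.uniform(math.log10(lo), math.log10(hi))
    n_stars = 1e11                          # stars in the galaxy (rough order of magnitude)
    f_habitable = log_uniform(1e-3, 1e-1)   # fraction of stars with a habitable planet
    f_life = log_uniform(1e-30, 1.0)        # chance life arises: hugely uncertain
    f_civ = log_uniform(1e-3, 1.0)          # chance life becomes a detectable civilization
    return n_stars * f_habitable * f_life * f_civ

draws = [sample_drake() for _ in range(100_000)]
mean_N = sum(draws) / len(draws)
p_alone = sum(d < 1 for d in draws) / len(draws)
print(f"mean number of civilizations: {mean_N:.3g}")          # typically in the millions
print(f"probability the galaxy holds fewer than one: {p_alone:.2f}")   # yet often large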

According to their calculations, a standard Drake Equation multiplying our best estimates for every parameter together yields a probability of less than one in a million billion billion billion that we’re alone in our galaxy – making such an observation pretty paradoxical. SDO’s own method, taking parameter uncertainty into account, yields a probability of one in three.

They try their hand at doing a Drake calculation of their own, using their preferred values, and find:

[Figure from the paper omitted. Its caption: N is the average number of civilizations per galaxy.]

If this is right – and we can debate exact parameter values forever, but it’s hard to argue with their point-estimate-versus-distribution logic – then there’s no Fermi Paradox. It’s done, solved, kaput. Their title, “Dissolving The Fermi Paradox”, is a strong claim, but as far as I can tell they totally deserve it.

“Why didn’t anyone think of this before?” is the question I am only slightly embarrassed to ask given that I didn’t think of it before. I don’t know. Maybe people thought of it before, but didn’t publish it, or published it somewhere I don’t know about? Maybe people intuitively figured out what was up (one of the parameters of the Drake Equation must be much lower than our estimate) but stopped there and didn’t bother explaining the formal probability argument. Maybe nobody took the Drake Equation seriously anyway, and it’s just used as a starting point to discuss the probability of life forming?

But any explanation of the “oh, everyone knew this in some sense already” sort has to deal with the fact that a lot of very smart and well-credentialed experts treated the Fermi Paradox very seriously and came up with all sorts of weird explanations. There’s no need for sci-fi theories any more (though you should still read the Dark Forest trilogy). It’s just that there aren’t very many aliens. I think my past speculations on this, though very incomplete and much inferior to the recent paper, come out pretty well here.

(some more discussion here on Less Wrong)

One other highlight hidden in the supplement: in the midst of a long discussion on the various ways intelligent life can fail to form, starting on page 6 the authors speculate on “alternative genetic systems”. If a planet gets life with a slightly different way of encoding genes than our own, it might be too unstable to allow complex life, or too stable to allow a reasonable rate of mutation by natural selection. It may be that abiogenesis can only create very weak genetic codes, and life needs to go through several “genetic-genetic transitions” before it can reach anything capable of complex evolution. If this is path-dependent – ie there are branches that are local improvements but close off access to other better genetic systems – this could permanently arrest the development of life, or freeze it at an evolutionary rate so low that the history of the universe so far is too short a time to see complex organisms.

I don’t claim to understand all of this, but the parts I do understand are fascinating and could easily be their own paper.

29 May 15:10

To Build Truly Intelligent Machines, Teach Them Cause and Effect

by Kevin Hartnett

Artificial intelligence owes a lot of its smarts to Judea Pearl. In the 1980s he led efforts that allowed machines to reason probabilistically. Now he’s one of the field’s sharpest critics. In his latest book, “The Book of Why: The New Science of Cause and Effect,” he argues that artificial intelligence has been handicapped by an incomplete understanding of what intelligence really is.

Three decades ago, a prime challenge in artificial intelligence research was to program machines to associate a potential cause to a set of observable conditions. Pearl figured out how to do that using a scheme called Bayesian networks. Bayesian networks made it practical for machines to say that, given a patient who returned from Africa with a fever and body aches, the most likely explanation was malaria. In 2011 Pearl won the Turing Award, computer science’s highest honor, in large part for this work.
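To give a flavor of that kind of reasoning, here is a toy Bayes-rule calculation in Python with invented numbers; a real Bayesian network factors a joint distribution over many interrelated variables, but the update step has this shape:

# Toy posterior calculation: which disease best explains "fever and body aches"?
# All numbers are invented for illustration, not taken from Pearl or Quanta.
priors = {"malaria": 0.30, "flu": 0.10, "neither": 0.60}   # P(disease) for a traveler back from an endemic region
likelihood = {                                             # P(fever & aches | disease)
    "malaria": 0.90,
    "flu": 0.60,
    "neither": 0.05,
}

unnormalized = {d: priors[d] * likelihood[d] for d in priors}
total = sum(unnormalized.values())
posterior = {d: p / total for d, p in unnormalized.items()}

for disease, prob in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"P({disease} | symptoms) = {prob:.2f}")   # malaria comes out as the most likely explanation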

But as Pearl sees it, the field of AI got mired in probabilistic associations. These days, headlines tout the latest breakthroughs in machine learning and neural networks. We read about computers that can master ancient games and drive cars. Pearl is underwhelmed. As he sees it, the state of the art in artificial intelligence today is merely a souped-up version of what machines could already do a generation ago: find hidden regularities in a large set of data. “All the impressive achievements of deep learning amount to just curve fitting,” he said recently.

In his new book, Pearl, now 81, elaborates a vision for how truly intelligent machines would think. The key, he argues, is to replace reasoning by association with causal reasoning. Instead of the mere ability to correlate fever and malaria, machines need the capacity to reason that malaria causes fever. Once this kind of causal framework is in place, it becomes possible for machines to ask counterfactual questions — to inquire how the causal relationships would change given some kind of intervention — which Pearl views as the cornerstone of scientific thought. Pearl also proposes a formal language in which to make this kind of thinking possible — a 21st-century version of the Bayesian framework that allowed machines to think probabilistically.

Pearl expects that causal reasoning could provide machines with human-level intelligence. They’d be able to communicate with humans more effectively and even, he explains, achieve status as moral entities with a capacity for free will — and for evil. Quanta Magazine sat down with Pearl at a recent conference in San Diego and later held a follow-up interview with him by phone. An edited and condensed version of those conversations follows.

Why is your new book called “The Book of Why”?

It means to be a summary of the work I’ve been doing the past 25 years about cause and effect, what it means in one’s life, its applications, and how we go about coming up with answers to questions that are inherently causal. Oddly, those questions have been abandoned by science. So I’m here to make up for the neglect of science.

That’s a dramatic thing to say, that science has abandoned cause and effect. Isn’t that exactly what all of science is about?

Of course, but you cannot see this noble aspiration in scientific equations. The language of algebra is symmetric: If X tells us about Y, then Y tells us about X. I’m talking about deterministic relationships. There’s no way to write in mathematics a simple fact — for example, that the upcoming storm causes the barometer to go down, and not the other way around.

Mathematics has not developed the asymmetric language required to capture our understanding that if X causes Y that does not mean that Y causes X. It sounds like a terrible thing to say against science, I know. If I were to say it to my mother, she’d slap me.

But science is more forgiving: Seeing that we lack a calculus for asymmetrical relations, science encourages us to create one. And this is where mathematics comes in. It turned out to be a great thrill for me to see that a simple calculus of causation solves problems that the greatest statisticians of our time deemed to be ill-defined or unsolvable. And all this with the ease and fun of finding a proof in high-school geometry.
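A toy simulation, mine rather than Pearl's formalism, makes the barometer asymmetry concrete: observing a falling barometer raises the probability of a storm, but intervening to push the needle down does not.

import random

random.seed(0)

def simulate(intervene_barometer=None, n=200_000):
    """Storm causes the barometer to fall. If intervene_barometer is set,
    we force the reading by hand, i.e. do(barometer), instead of letting the storm cause it."""
    storms = falls = storms_and_falls = 0
    for _ in range(n):
        storm = random.random() < 0.2                        # 20% of days a storm approaches
        if intervene_barometer is None:
            barometer_falls = storm or random.random() < 0.05   # mostly caused by the storm
        else:
            barometer_falls = intervene_barometer
        storms += storm
        falls += barometer_falls
        storms_and_falls += storm and barometer_falls
    p_storm = storms / n
    p_storm_given_fall = storms_and_falls / falls if falls else 0.0
    return p_storm, p_storm_given_fall

base_rate, p_storm_given_fall = simulate()
print(f"P(storm) = {base_rate:.2f}")
print(f"P(storm | barometer falls) = {p_storm_given_fall:.2f}   # seeing the reading is informative")

forced_rate, _ = simulate(intervene_barometer=True)
print(f"P(storm | do(barometer falls)) = {forced_rate:.2f}  # forcing the reading changes nothing")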

You made your name in AI a few decades ago by teaching machines how to reason probabilistically. Explain what was going on in AI at the time.

The problems that emerged in the early 1980s were of a predictive or diagnostic nature. A doctor looks at a bunch of symptoms from a patient and wants to come up with the probability that the patient has malaria or some other disease. We wanted automatic systems, expert systems, to be able to replace the professional — whether a doctor, or an explorer for minerals, or some other kind of paid expert. So at that point I came up with the idea of doing it probabilistically.

Unfortunately, standard probability calculations required exponential space and exponential time. I came up with a scheme called Bayesian networks that required polynomial time and was also quite transparent.

Yet in your new book you describe yourself as an apostate in the AI community today. In what sense?

In the sense that as soon as we developed tools that enabled machines to reason with uncertainty, I left the arena to pursue a more challenging task: reasoning with cause and effect. Many of my AI colleagues are still occupied with uncertainty. There are circles of research that continue to work on diagnosis without worrying about the causal aspects of the problem. All they want is to predict well and to diagnose well.

I can give you an example. All the machine-learning work that we see today is conducted in diagnostic mode — say, labeling objects as “cat” or “tiger.” They don’t care about intervention; they just want to recognize an object and to predict how it’s going to evolve in time.

I felt an apostate when I developed powerful tools for prediction and diagnosis knowing already that this is merely the tip of human intelligence. If we want machines to reason about interventions (“What if we ban cigarettes?”) and introspection (“What if I had finished high school?”), we must invoke causal models. Associations are not enough — and this is a mathematical fact, not opinion.

People are excited about the possibilities for AI. You’re not?

As much as I look into what’s being done with deep learning, I see they’re all stuck there on the level of associations. Curve fitting. That sounds like sacrilege, to say that all the impressive achievements of deep learning amount to just fitting a curve to data. From the point of view of the mathematical hierarchy, no matter how skillfully you manipulate the data and what you read into the data when you manipulate it, it’s still a curve-fitting exercise, albeit complex and nontrivial.

The way you talk about curve fitting, it sounds like you’re not very impressed with machine learning.

No, I’m very impressed, because we did not expect that so many problems could be solved by pure curve fitting. It turns out they can. But I’m asking about the future — what next? Can you have a robot scientist that would plan an experiment and find new answers to pending scientific questions? That’s the next step. We also want to conduct some communication with a machine that is meaningful, and meaningful means matching our intuition. If you deprive the robot of your intuition about cause and effect, you’re never going to communicate meaningfully. Robots could not say “I should have done better,” as you and I do. And we thus lose an important channel of communication.

What are the prospects for having machines that share our intuition about cause and effect?

We have to equip machines with a model of the environment. If a machine does not have a model of reality, you cannot expect the machine to behave intelligently in that reality. The first step, one that will take place in maybe 10 years, is that conceptual models of reality will be programmed by humans.

The next step will be that machines will postulate such models on their own and will verify and refine them based on empirical evidence. That is what happened to science; we started with a geocentric model, with circles and epicycles, and ended up with a heliocentric model with its ellipses.

Robots, too, will communicate with each other and will translate this hypothetical world, this wild world, of metaphorical models. 

When you share these ideas with people working in AI today, how do they react?

AI is currently split. First, there are those who are intoxicated by the success of machine learning and deep learning and neural nets. They don’t understand what I’m talking about. They want to continue to fit curves. But when you talk to people who have done any work in AI outside statistical learning, they get it immediately. I have read several papers written in the past two months about the limitations of machine learning.

Are you suggesting there’s a trend developing away from machine learning?

Not a trend, but a serious soul-searching effort that involves asking: Where are we going? What’s the next step?

That was the last thing I wanted to ask you.

I’m glad you didn’t ask me about free will.

In that case, what do you think about free will?

We’re going to have robots with free will, absolutely. We have to understand how to program them and what we gain out of it. For some reason, evolution has found this sensation of free will to be computationally desirable.

In what way?

You have the sensation of free will; evolution has equipped us with this sensation. Evidently, it serves some computational function.

Will it be obvious when robots have free will?

I think the first evidence will be if robots start communicating with each other counterfactually, like “You should have done better.” If a team of robots playing soccer starts to communicate in this language, then we’ll know that they have a sensation of free will. “You should have passed me the ball — I was waiting for you and you didn’t!” “You should have” means you could have controlled whatever urges made you do what you did, and you didn’t. So the first sign will be communication; the next will be better soccer.

Now that you’ve brought up free will, I guess I should ask you about the capacity for evil, which we generally think of as being contingent upon an ability to make choices. What is evil?

It’s the belief that your greed or grievance supersedes all standard norms of society. For example, a person has something akin to a software module that says “You are hungry, therefore you have permission to act to satisfy your greed or grievance.” But you have other software modules that instruct you to follow the standard laws of society. One of them is called compassion. When you elevate your grievance above those universal norms of society, that’s evil.

So how will we know when AI is capable of committing evil?

When it is obvious for us that there are software components that the robot ignores, consistently ignores. When it appears that the robot follows the advice of some software components and not others, when the robot ignores the advice of other components that are maintaining norms of behavior that have been programmed into them or are expected to be there on the basis of past learning. And the robot stops following them.

11 May 17:03

Vaccines Are Pushing Pathogens to Evolve

by Melinda Wenner Moyer
Dgfitch

Evo-bio and viruses, sitting in a tree

Andrew Read became a scientist so he could spend more time in nature, but he never imagined that would mean a commercial chicken farm. Read, a disease ecologist who directs the Pennsylvania State University Center for Infectious Disease Dynamics, and his research assistant Chris Cairns meandered their way through a hot, humid, pungent-smelling barn teeming with 30,000 young broiler chickens deep in the Pennsylvania countryside. Covered head to toe in white coveralls, the two men periodically stopped and crouched, collecting dust from the ground with gloved hands. Birds squawked and scuttered away. The men transferred the dust into small plastic tubes, which they capped and placed in plastic bags to bring back to the laboratory. “Funny where science leads you,” Read said.

Read and his colleagues are studying how the herpesvirus that causes Marek’s disease — a highly contagious, paralyzing and ultimately deadly ailment that costs the chicken industry more than $2 billion a year — might be evolving in response to its vaccine. Its latest vaccine, that is. Marek’s disease has been sickening chickens globally for over a century; birds catch it by inhaling dust laden with viral particles shed in other birds’ feathers. The first vaccine was introduced in 1970, when the disease was killing entire flocks. It worked well, but within a decade, the vaccine mysteriously began to fail; outbreaks of Marek’s began erupting in flocks of inoculated chickens. A second vaccine was licensed in 1983 in the hopes of solving the problem, yet it, too, gradually stopped working. Today, the poultry industry is on its third vaccine. It still works, but Read and others are concerned it might one day fail, too — and no fourth-line vaccine is waiting. Worse, in recent decades, the virus has become more deadly.

Read and others, including researchers at the U.S. Department of Agriculture, posit that the virus that causes Marek’s has been changing over time in ways that helped it evade its previous vaccines. The big question is whether the vaccines directly incited these changes or the evolution happened, coincidentally, for other reasons, but Read is pretty sure the vaccines have played a role. In a 2015 paper in PLOS Biology, Read and his colleagues vaccinated 100 chickens, leaving 100 others unvaccinated. They then infected all the birds with strains of Marek’s that varied in how virulent — as in how dangerous and infectious — they were. The team found that, over the course of their lives, the unvaccinated birds shed far more of the least virulent strains into the environment, whereas the vaccinated birds shed far more of the most virulent strains. The findings suggest that the Marek’s vaccine encourages more dangerous viruses to proliferate. This increased virulence might then give the viruses the means to overcome birds’ vaccine-primed immune responses and sicken vaccinated flocks.

Most people have heard of antibiotic resistance. Vaccine resistance, not so much. That’s because drug resistance is a huge global problem that annually kills nearly 25,000 people in the United States and in Europe, and more than twice that many in India. Microbes resistant to vaccines, on the other hand, aren’t a major menace. Perhaps they never will be: Vaccine programs around the globe have been and continue to be immensely successful at preventing infections and saving lives.

Recent research suggests, however, that some pathogen populations are adapting in ways that help them survive in a vaccinated world, and that these changes come about in a variety of ways. Just as the mammal population exploded after dinosaurs went extinct because a big niche opened up for them, some microbes have swept in to take the place of competitors eliminated by vaccines.

Immunization is also making once-rare or nonexistent genetic variants of pathogens more prevalent, presumably because vaccine-primed antibodies can’t as easily recognize and attack shape-shifters that look different from vaccine strains. And vaccines being developed against some of the world’s wilier pathogens — malaria, HIV, anthrax — are based on strategies that could, according to evolutionary models and lab experiments, encourage pathogens to become even more dangerous.

Evolutionary biologists aren’t surprised that this is happening. A vaccine is a novel selection pressure placed on a pathogen, and if the vaccine does not eradicate its target completely, then the remaining pathogens with the greatest fitness — those able to survive, somehow, in an immunized world — will become more common. “If you don’t have these pathogens evolving in response to vaccines,” said Paul Ewald, an evolutionary biologist at the University of Louisville, “then we really don’t understand natural selection.”

Yet don’t mistake these findings as evidence that vaccines are dangerous or that they are bound to fail — because undesirable outcomes can be thwarted by using our knowledge of natural selection, too. Evolution might be inevitable, but it can be coaxed in the right direction.

Quick-Change Artists

Vaccine science is brow-furrowingly complicated, but the underlying mechanism is simple. A vaccine exposes your body to either live but weakened or killed pathogens, or even just to certain bits of them. This exposure incites your immune system to create armies of immune cells, some of which secrete antibody proteins to recognize and fight off the pathogens if they ever invade again.

That said, many vaccines don’t provide lifelong immunity, for a variety of reasons. A new flu vaccine is developed every year because influenza viruses naturally mutate quickly. Vaccine-induced immunity can also wane over time. After being inoculated with the shot for typhoid, for instance, a person’s levels of protective antibodies drop over several years, which is why public health agencies recommend regular boosters for those living in or visiting regions where typhoid is endemic. Research suggests a similar drop in protection over time occurs with the mumps vaccine, too.

Vaccine failures caused by vaccine-induced evolution are different. These drops in vaccine effectiveness are incited by changes in pathogen populations that the vaccines themselves directly cause. Scientists have recently started studying the phenomenon in part because they finally can: Advances in genetic sequencing have made it easier to see how microbes change over time. And many such findings have reinforced just how quickly pathogens mutate and evolve in response to environmental cues.

Viruses and bacteria change quickly in part because they replicate like mad. Three days after a bird is bitten by a mosquito carrying West Nile virus, one milliliter of its blood contains 100 billion viral particles, roughly the number of stars in the Milky Way. And with each replication comes the opportunity for genetic change. When an RNA virus replicates, the copying process generates one new error, or mutation, per 10,000 nucleotides, a mutation rate as much as 100,000 times greater than that found in human DNA. Viruses and bacteria also recombine, or share genetic material, with similar strains, giving them another way to change their genomes rapidly. Just as people — with the exception of identical twins — all have distinctive genomes, pathogen populations tend to be composed of myriad genetic variants, some of which fare better than others during battles with vaccine-trained antibodies. The victors seed the pathogen population of the future.

The bacteria that cause pertussis, better known as whooping cough, illustrate how this can happen. In 1992, recommendations from the U.S. Centers for Disease Control and Prevention (CDC) began promoting a new vaccine to prevent the infection, which is caused by bacteria called Bordetella pertussis. The old vaccine was made using whole killed bacteria, which incited an effective immune response but also caused rare side effects, such as seizures. The new version, known as the “acellular” vaccine, contained just two to five outer membrane proteins isolated from the pathogen.

The unwanted side effects disappeared but were replaced by new, unexpected problems. First, for unclear reasons, protection conferred by the acellular vaccine waned over time. Epidemics began to erupt around the world. In 2001, scientists in the Netherlands proposed an additional reason for the resurgence: Perhaps vaccination was inciting evolution, causing strains of the bacteria that lacked the targeted proteins, or had different versions of them, to survive preferentially.

Studies have since backed up this idea. In a 2014 paper published in Emerging Infectious Diseases, researchers in Australia, led by the medical microbiologist Ruiting Lan at the University of New South Wales, collected and sequenced B. pertussis samples from 320 patients between 2008 and 2012. The percentage of bacteria that did not express pertactin, a protein targeted by the acellular vaccine, leapt from 5 percent in 2008 to 78 percent in 2012, which suggests that selection pressure from the vaccine was enabling pertactin-free strains to become more common. In the U.S., nearly all circulating strains lack pertactin, according to a 2017 CDC paper. “I think pretty much everyone agrees pertussis strain variation is shaped by vaccination,” Lan said.

Hepatitis B, a virus that causes liver damage, tells a similar story. The current vaccine, which principally targets a portion of the virus known as the hepatitis B surface antigen, was introduced in the U.S. in 1989. A year later, in a paper published in the Lancet, researchers described odd results from a vaccine trial in Italy. They had detected circulating hepatitis B viruses in 44 vaccinated subjects, but in some of them, the virus was missing part of that targeted antigen. Then, in a series of studies conducted in Taiwan, researchers sequenced the viruses that infected children who had tested positive for hepatitis B. They reported that the prevalence of these viral “escape mutants,” as they called them, that lacked the surface antigen had increased from 7.8 percent in 1984 to 23.1 percent in 1999.

Some research suggests, however, that these mutant strains aren’t stable and that they may not pose much of a risk. Indeed, fewer and fewer people catch hepatitis B every year worldwide. As physicians at the Icahn School of Medicine at Mount Sinai in New York summarized in a 2016 paper, “the clinical significance of hepatitis B surface antigen escape mutations remains controversial.”

Empty Niche

Scientists usually have to design their own experiments. But in 2000 or so, it dawned on Bill Hanage that society was designing one for him. Hanage, who had just completed his Ph.D. in pathology, had always been fascinated by bacteria and evolutionary biology. And something evolutionarily profound was about to happen to bacteria in America.

A new vaccine called Prevnar 7 was soon to be recommended for all U.S. children to prevent infections caused by Streptococcus pneumoniae, bacteria responsible for many cases of pneumonia, ear infections, meningitis and other illnesses among the elderly and young children. To date, scientists have discovered more than 90 distinct S. pneumoniae serotypes — groups that share distinctive immunological features on their cell surface — and Prevnar 7 targeted the seven serotypes that caused the brunt of serious infections. But Hanage, along with other researchers, wondered what was going to happen to the more than 80 others. “It struck me, with my almost complete lack of formal training in evolutionary biology, that this was an extraordinary evolutionary experiment,” he said.

Hanage teamed up with Marc Lipsitch, an epidemiologist and microbiologist who had recently left Emory University for Harvard, and together the scientists — now both at Harvard — have been watching the pneumococcal population adapt to this new selection pressure. They and others have reported that while Prevnar 7 almost completely eliminated infections with the seven targeted serotypes, the other, rarer serotypes quickly swept in to take their place, including a serotype called 19A, which began causing a large proportion of serious pneumococcal infections. In response, in 2010, the U.S. introduced a new vaccine, Prevnar 13, which targets 19A and five additional serotypes. Previously unseen serotypes have again flourished in response. A 2017 paper in Pediatrics compared the situation to a high-stakes game of whack-a-mole. In essence, vaccination has completely restructured the pathogen population, twice.

Overall, the incidence of invasive pneumococcal infections in the U.S. has dropped dramatically among children and adults as a result of Prevnar 13. It is saving many American lives, presumably because it targets the subset of serotypes most likely to cause infections. But data from England and Wales are not so rosy. Although infections in kids there have dropped, invasive pneumococcal infections have been steadily increasing in older adults and are much higher now than they were before Prevnar 7 was introduced. As for why this is happening, “I don’t think we know,” Hanage said. “But I do think that we might somewhat reasonably suggest that the serotypes that are now being carried by children are inadvertently better able to cause disease in adults, which is something we would not have known before, because they were comparatively rare.”

One can think about vaccination as a kind of sieve, argues Troy Day, a mathematical evolutionary biologist at Queen’s University in Ontario, Canada. This sieve prevents many pathogens from passing through and surviving, but if a few squeeze by, those in that nonrandom sample will preferentially survive, replicate and ultimately shift the composition of the pathogen population. The ones squeezing through might be escape mutants with genetic differences that allow them to shrug off or hide from vaccine-primed antibodies, or they may simply be serotypes that weren’t targeted by the vaccine in the first place, like lucky criminals whose drug dens were overlooked during a night of citywide police raids. Either way, the vaccine quietly alters the genetic profile of the pathogen population.

Tipping the Scales

Just as pathogens have different ways of infecting and affecting us, the vaccines that scientists develop employ different immunological strategies. Most of the vaccines we get in childhood prevent pathogens from replicating inside us and thereby also prevent us from transmitting the infections to others. But scientists have so far been unable to make these kinds of sterilizing vaccines for complicated pathogens like HIV, anthrax and malaria. To conquer these diseases, some researchers have been developing immunizations that prevent disease without actually preventing infections — what are called “leaky” vaccines. And these new vaccines may incite a different, and potentially scarier, kind of microbial evolution.

Virulence, as a trait, is directly related to replication: The more pathogens that a person’s body houses, the sicker that person generally becomes. A high replication rate has evolutionary advantages — more microbes in the body lead to more microbes in snot or blood or stool, which gives the microbes more chances to infect others — but it also has costs, as it can kill hosts before they have the chance to pass on their infection. The problem with leaky vaccines, Read says, is that they enable pathogens to replicate unchecked while also protecting hosts from illness and death, thereby removing the costs associated with increased virulence. Over time, then, in a world of leaky vaccinations, a pathogen might evolve to become deadlier to unvaccinated hosts because it can reap the benefits of virulence without the costs — much as Marek’s disease has slowly become more lethal to unvaccinated chickens. This virulence can also cause the vaccine to start failing by causing illness in vaccinated hosts.
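A deliberately crude two-strain model, my own sketch rather than anything from Read's papers, shows the verbal argument in miniature: give the more virulent strain a transmission advantage, charge it a survival cost only in unvaccinated hosts, and watch what a leaky vaccine does to that cost.

# Crude toy model of the trade-off described above. The "hot" strain sheds twice as much
# as the "mild" one, but in an unvaccinated host it kills the host before roughly half of
# its transmission happens. A leaky vaccine keeps hosts alive without blocking transmission,
# removing that penalty. All numbers are invented for illustration.

def virulent_frequency(leaky_vaccine, generations=30, start=0.01):
    freq = start                            # frequency of the hot strain in the pathogen population
    for _ in range(generations):
        hot_fitness = 2.0 if leaky_vaccine else 2.0 * 0.5   # transmission rate x host survival
        mild_fitness = 1.0
        hot = freq * hot_fitness
        mild = (1 - freq) * mild_fitness
        freq = hot / (hot + mild)           # standard replicator update
    return freq

print(f"without vaccination: hot strain frequency = {virulent_frequency(False):.2f}")  # stays rare
print(f"with a leaky vaccine: hot strain frequency = {virulent_frequency(True):.2f}")  # takes over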

In addition to Marek’s disease, Read has been studying malaria, which is the target of several leaky vaccines currently in development. In a 2012 paper published in PLOS Biology, Read and Vicki Barclay, his postdoc at the time, inoculated mice with a component of several leaky malaria vaccines currently being tested in clinical trials. They then used these infected-but-not-sick mice to infect other vaccinated mice. After the parasites circulated through 21 rounds of vaccinated mice, Barclay and Read studied them and compared them to malaria parasites that had circulated through 21 rounds of unvaccinated mice. The strains from the vaccinated mice, they found, had grown far more virulent, in that they replicated faster and killed more red blood cells. At the end of 21 rounds of infection, these more quickly growing, deadly parasites were the only ones left.

Evolutionary Engineering

If this all sounds terribly scary, keep a few things in mind. Many pathogens, including measles, do not seem to be evolving as a population in response to their vaccines. Second, experimental data from a lab, such as the malaria study described above, don’t necessarily predict what will happen in the much more complex landscape of the real world. And third, researchers concerned with vaccine-driven evolution stress that the phenomenon is not in any way an argument against vaccination or its value; it’s just a consequence that needs to be considered, and one that can potentially be avoided. By thinking through how a pathogen population might respond to a vaccine, scientists can potentially make tweaks before it happens. They might even be able to design vaccines that encourage pathogens to become less dangerous over time.

In March 2017, Read and his Penn State colleague David Kennedy published a paper in the Proceedings of the Royal Society B in which they outlined several strategies that vaccine developers could use to ensure that future vaccines don’t get punked by evolutionary forces. One overarching recommendation is that vaccines should induce immune responses against multiple targets. A number of successful, seemingly evolution-proof vaccines already work this way: After people get inoculated with a tetanus shot, for example, their blood contains 100 types of unique antibodies, all of which fight the bacteria in different ways. In such a situation, it becomes much harder for a pathogen to accumulate all the changes needed to survive. It also helps if vaccines target all the known subpopulations of a particular pathogen, not just the most common or dangerous ones. Richard Malley and other researchers at Boston Children’s Hospital are, for instance, trying to develop a universal pneumococcal vaccine that is not serotype-specific.

Vaccines should also bar pathogens from replicating and transmitting inside inoculated hosts. One of the reasons that vaccine resistance is less of a problem than antibiotic resistance, Read and Kennedy posit, is that antibiotics tend to be given after an infection has already taken hold — when the pathogen population inside the host is already large and genetically diverse and might include mutants that can resist the drug’s effects. Most vaccines, on the other hand, are administered before infection and limit replication, which minimizes evolutionary opportunities.

But the most crucial need right now is for vaccine scientists to recognize the relevance of evolutionary biology to their field. Last month, when more than 1,000 vaccine scientists gathered in Washington, D.C., at the World Vaccine Congress, the issue of vaccine-induced evolution was not the focus of any scientific sessions. Part of the problem, Read says, is that researchers are afraid: They’re nervous to talk about and call attention to potential evolutionary effects because they fear that doing so might fuel more fear and distrust of vaccines by the public — even though the goal is, of course, to ensure long-term vaccine success. Still, he and Kennedy feel researchers are starting to recognize the need to include evolution in the conversation. “I think the scientific community is becoming increasingly aware that vaccine resistance is a real risk,” Kennedy said.

“I think so too,” Read agreed, “but there is a long way to go.”


08 May 15:23

Why Can’t Prisoners Vote?

by Nathan J. Robinson
Dgfitch

duh

In the United States, giving prisoners the right to vote is not an especially popular idea. Less than one-third of Americans support it, and only Vermont and Maine allow voting by the incarcerated. The Democratic Party does not advocate it; in their 2016 platform, which was widely considered very progressive, the party promised to “restore voting rights for those who have served their sentences” but did not comment on those who are still serving their sentences.

But why should letting prisoners vote be such a fringe position? In a new report from the People’s Policy Project, entitled “Full Human Beings,” Emmett Sanders argues that universal enfranchisement should mean exactly that: If prisoners are still full human beings, then they cannot rightfully be excluded from the democratic process. No matter how much people might instinctually feel that someone has “sacrificed their rights through their conduct,” we shouldn’t strip the basic elements of citizenship from someone merely because they have committed a wrong.

Many other countries recognize this. The Universal Declaration of Human Rights states that “everyone has a right to take part in the government of his [sic] country” and the International Covenant on Civil and Political Rights provides for “universal and equal suffrage.” In many European countries prisoners can vote, and the European Court of Human Rights has repeatedly condemned the United Kingdom’s ban on prisoner voting. (The U.K. government ignored the ruling for years before settling on a weak compromise.)

The argument against prisoner voting is simple (one might even say simple-minded). Sanders quotes an opponent: “If you won’t follow the law yourself, then you can’t make the law for everyone else, which is what you do – directly or indirectly – when you vote.” But as Sanders points out, in practice this means that stealing a TV remote means you’re not allowed to express your preference on war or tax policy or abortion. The argument rests on collapsing “the law” into a single entity: Because I violated the law, I am no longer allowed to affect any law, even if the law I violated was relatively trivial and the law I’d like to oppose is, say, a repeal of the First Amendment. And, tedious as it may be to pull the Martin Luther King card, if we’re going to argue that “not following the law” is the criterion for disenfranchisement, well, civil disobedience would mean getting stripped of the basic rights of citizenship.

Sanders points out that the impact of America’s prisoner disenfranchisement policies is immense. This is, in part, because of our obscene number of prisoners:

That means that election outcomes could very easily be swayed by the enfranchisement of prisoners. (Though obviously this is a good reason for the Republican Party to staunchly defend existing policy.) But it also affects the distribution of representation in other ways: Because the census counts prisoners as being resident in the location of the prison, rather than at their home address, population numbers are artificially inflated in rural areas and deflated in urban ones. In extreme enough quantities, this can affect how state voting districts are drawn, even though the populations being counted can’t actually vote.

The racial dimensions of this become extremely troubling. First, we know that because the 13th Amendment contains a loophole allowing people to be enslaved if they have been convicted of a crime, white supremacists gradually reasserted their power by aggressively policing black people in order to deprive them of their rights. There is a disturbing historical precedent for stripping rights away from those convicted of crimes, and Sanders cites an old Mississippi court ruling that even though the state was “Restrained by the federal constitution from discriminating against the negro race” it could still “discriminate against its characteristics, and the offenses to which its criminal members are prone.” You can’t take the vote away from black people, but you can take the vote away from criminals, and if the prisons just so happen to end up filled with black people, it is a purely innocuous coincidence.

The racial impacts go beyond the specific deprivation of those currently incarcerated. The political representation of black communities as a whole is reduced by the disenfranchisement of a large swath of their populations, and people who have never been given the chance to cultivate the habits of voting and civic participation are less likely to pass those on to their children. Everyone should be troubled by the specifically racial effects of this policy, given the historical context and the extreme numbers of people the U.S. has incarcerated and thereby deprived of a democratic voice.

It is disappointing that the Democratic Party has not stood up for universal suffrage. Obviously felons should have their rights restored upon completing their sentence. (One opponent says that since we restrict felons from having guns we can restrict them from having votes, but a gun is generally far more obviously dangerous than a vote, unless the vote happens to be for Donald Trump.) But the broader point is that there shouldn’t be moral character tests for voting, period. We live in a country where people do things wrong. When they do things wrong, they are punished. But they do not thereby become “non-people” who lose all of their basic rights and obligations. (Voting can be thought of as an obligation as well as a right, which makes it even stranger that prisoners are kept from doing it. Do people vote for pleasure?) The left should take a very simple position on suffrage: Universal means universal.


01 May 18:41

Troubled Times for Alternatives to Einstein’s Theory of Gravity

by Katia Moskvitch
Dgfitch

physics is cool

Miguel Zumalacárregui knows what it feels like when theories die. In September 2017, he was at the Institute for Theoretical Physics in Saclay, near Paris, to speak at a meeting about dark energy and modified gravity. The official news had not yet broken about an epochal astronomical measurement — the detection, by gravitational wave detectors as well as many other telescopes, of a collision between two neutron stars — but a controversial tweet had lit a firestorm of rumor in the astronomical community, and excited researchers were discussing the discovery in hushed tones.

Zumalacárregui, a theoretical physicist at the Berkeley Center for Cosmological Physics, had been studying how the discovery of a neutron-star collision would affect so-called “alternative” theories of gravity. These theories attempt to overcome what many researchers consider to be two enormous problems with our understanding of the universe. Observations going back decades have shown that the universe appears to be filled with unseen particles — dark matter — as well as an anti-gravitational force called dark energy. Alternative theories of gravity attempt to eliminate the need for these phantasms by modifying the force of gravity in such a way that it properly describes all known observations — no dark stuff required.

At the meeting, Zumalacárregui joked to his audience about the perils of combining science and Twitter, and then explained what the consequences would be if the rumors were true. Many researchers knew that the merger would be a big deal, but a lot of them simply “hadn’t understood their theories were on the brink of demise,” he later wrote in an email. In Saclay, he read them the last rites. “That conference was like a funeral where we were breaking the news to some attendees.”

The neutron-star collision was just the beginning. New data in the months since that discovery have made life increasingly difficult for the proponents of many of the modified-gravity theories that remain. Astronomers have analyzed extreme astronomical systems that contain spinning neutron stars, or pulsars, to look for discrepancies between their motion and the predictions of general relativity — discrepancies that some theories of alternative gravity anticipate. These pulsar systems let astronomers probe gravity on a new scale and with new precision. And with each new observation, these alternative theories of gravity are having an increasingly hard time solving the problems they were invented for. Researchers “have to sweat some more trying to get new physics,” said Anne Archibald, an astrophysicist at the University of Amsterdam.

Searching for Vulcan

Confounding observations have a way of leading astronomers to desperate explanations. On the afternoon of March 26, 1859, Edmond Lescarbault, a young doctor and amateur astronomer in Orgères-en-Beauce, a small village south of Paris, had a break between patients. He rushed to a tiny homemade observatory on the roof of his stone barn. With the help of his telescope, he spotted an unknown round object moving across the face of the sun.

He quickly sent news of this discovery to Urbain Le Verrier, the world’s leading astronomer at the time. Le Verrier had been trying to account for an oddity in the movement of the planet Mercury. All other planets orbit the sun in perfect accord with Isaac Newton’s laws of motion and gravitation, but Mercury appeared to advance a tiny amount with each orbit, a phenomenon known as perihelion precession. Le Verrier was certain that there had to be an invisible “dark” planet tugging on Mercury. Lescarbault’s observation of a dark spot transiting the sun appeared to show that the planet, which Le Verrier named Vulcan, was real.

It was not. Lescarbault’s sightings were never confirmed, and the perihelion precession of Mercury remained a puzzle for nearly six more decades. Then Einstein developed his theory of general relativity, which straightforwardly predicted that Mercury should behave the way it does.

In Le Verrier’s impulse to explain puzzling observations by introducing a heretofore hidden object, some modern-day researchers see parallels to the story of dark matter and dark energy. For decades, astronomers have noticed that the behavior of galaxies and galaxy clusters doesn’t seem to fit the predictions of general relativity. Dark matter is one way to explain that behavior. Likewise, the accelerating expansion of the universe can be thought of as being powered by a dark energy.

All attempts to directly detect dark matter and dark energy have failed, however. That fact “kind of leaves a bad taste in some people’s mouths, almost like the fictional planet Vulcan,” said Leo Stein, a theoretical physicist at the California Institute of Technology. “Maybe we’re going about it all wrong?”

For any alternative theory of gravity to work, it has to not only do away with dark matter and dark energy, but also reproduce the predictions of general relativity in all the standard contexts. “The business of alternative gravity theories is a messy one,” Archibald said. Some would-be replacements for general relativity, like string theory and loop quantum gravity, don’t offer testable predictions. Others “make predictions that are spectacularly wrong, so the theorists have to devise some kind of a screening mechanism to hide the wrong prediction on scales we can actually test,” she said.

The best-known class of alternative gravity theories is modified Newtonian dynamics, commonly abbreviated as MOND. MOND-type theories attempt to do away with dark matter by tweaking the definition of gravity itself. Astronomers have long observed that the gravitational pull of ordinary matter alone doesn’t appear to be sufficient to keep rapidly moving stars bound inside their galaxies; the gravitational pull of dark matter is assumed to make up the difference. According to MOND, however, there are simply two regimes of gravity. In regions where gravity is strong, bodies obey Newton’s law of gravity, which states that the gravitational force between two objects decreases in proportion to the square of the distance that separates them. But in environments of extremely weak gravity, such as the outer parts of a galaxy, MOND posits that a different behavior takes over: gravity weakens more slowly with distance than the inverse-square law predicts. “The idea is to make gravity stronger when it should be weaker, like at the outskirts of a galaxy,” Zumalacárregui said.
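To make that concrete, here is a minimal sketch of the standard MOND relation (a gloss, not a formula quoted in the article, and it skips over how the two regimes are stitched together). In the deep-MOND regime, the acceleration felt by a star becomes

$$ a = \sqrt{a_N \, a_0} = \frac{\sqrt{G M a_0}}{r}, $$

where $a_N = GM/r^2$ is the ordinary Newtonian acceleration and $a_0 \approx 1.2 \times 10^{-10}\ \mathrm{m/s^2}$ is a new acceleration scale below which the modified behavior takes over. Because this acceleration falls off as $1/r$ rather than $1/r^2$, the orbital speed of stars in a galaxy’s outskirts tends toward a constant, $v^4 = G M a_0$, which is how MOND reproduces flat galactic rotation curves without invoking dark matter.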

Then there is TeVeS (tensor-vector-scalar), MOND’s relativistic cousin. While MOND is a modification of Newtonian gravity, TeVeS is an attempt to take the general idea of MOND and make it into a full mathematical theory that can be applied to the universe as a whole — not just to relatively small objects like solar systems and galaxies. It also explains the rotation curves of galaxies by making gravity stronger on their outskirts. But TeVeS does so by augmenting gravity with “scalar” and “vector” fields that “essentially amplify gravity,” said Fabian Schmidt, a cosmologist at the Max Planck Institute for Astrophysics in Garching, Germany. A scalar field is like the temperature throughout the atmosphere: At every point it has a numerical value but no direction. A vector field, by contrast, is like the wind: It has both a value (the wind speed) and a direction.

There are also so-called Galileon theories — part of a class of theories called Horndeski and beyond-Horndeski — which attempt to get rid of dark energy. These modifications of general relativity also introduce a scalar field. There are many of these theories (Brans-Dicke theory, dilaton theories, chameleon theories and quintessence are just some of them), and their predictions vary wildly among models. But they all change the expansion of the universe and tweak the force of gravity. Horndeski theory was first put forward by Gregory Horndeski in 1974, but the wider physics community took note of it only around 2010. By then, Zumalacárregui said, “Gregory Horndeski quit science and became a painter in New Mexico.”

There are also stand-alone theories, like that of physicist Erik Verlinde. According to his theory, the laws of gravity arise naturally from the laws of thermodynamics just like “the way waves emerge from the molecules of water in the ocean,” Zumalacárregui said. Verlinde wrote in an email that his ideas are not an “alternative theory” of gravity, but “the next theory of gravity that contains and transcends Einstein’s general relativity.” But he is still developing his ideas. “My impression is that the theory is still not sufficiently worked out to permit the kind of precision tests we carry out,” Archibald said. It’s built on “fancy words,” Zumalacárregui said, “but no mathematical framework to compute predictions and do solid tests.”

The predictions made by these other theories all differ in some way from those of general relativity. Yet the differences can be subtle, which makes them incredibly difficult to detect.

Consider the neutron-star merger. At the same time that the Laser Interferometer Gravitational-Wave Observatory (LIGO) spotted the gravitational waves emanating from the event, the space-based Fermi satellite spotted a gamma-ray burst from the same location. The two signals had traveled across the universe for 130 million years before arriving at Earth just 1.7 seconds apart.
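A rough back-of-the-envelope calculation (not one performed in the article) shows why that timing is so restrictive. Since 130 million years is about $4 \times 10^{15}$ seconds, a 1.7-second gap caps any fractional difference between the speed of gravitational waves and the speed of light at roughly

$$ \frac{|v_{\mathrm{gw}} - c|}{c} \lesssim \frac{1.7\ \mathrm{s}}{4 \times 10^{15}\ \mathrm{s}} \approx 4 \times 10^{-16}, $$

give or take the unknown delay between the emission of the gravitational waves and the gamma rays at the source. Any theory in which gravity propagates even slightly slower than light overshoots this window by many orders of magnitude.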

These nearly simultaneous observations “brutally and pitilessly murdered” TeVeS theories, said Paulo Freire, an astrophysicist at the Max Planck Institute for Radio Astronomy in Bonn, Germany. “Gravity and gravitational waves propagate at the speed of light, with extremely high precision — which is not at all what was predicted by those theories.”

The same fate overtook some Galileon theories that add an extra scalar field to explain the universe’s accelerated expansion. These also predict that gravitational waves propagate more slowly than light. The neutron-star merger killed those off too, Schmidt said.

Further limits come from new pulsar systems. In 2013, Archibald and her colleagues found an unusual triple system: a pulsar and a white dwarf that orbit one another, with a second white dwarf orbiting the pair. These three objects exist in a space smaller than Earth’s orbit around the sun. The tight setting, Archibald said, offers ideal conditions for testing a crucial aspect of general relativity called the strong equivalence principle, which holds that all objects fall the same way in an external gravitational field, even very dense objects with strong gravity of their own, such as neutron stars or black holes. (On Earth, the more familiar weak equivalence principle states that, if we ignore air resistance, a feather and a brick will fall at the same rate.)

The triple system makes it possible to check whether the pulsar and the inner white dwarf fall in exactly the same way in the gravity of the outer white dwarf. Many alternative-gravity theories predict that the scalar field generated by the pulsar’s intense gravity would bend space-time far more strongly than the white dwarf’s field does. If so, the two objects would not fall in the same manner, violating the strong equivalence principle and, with it, general relativity.

Over the past five years, Archibald and her team have recorded 27,000 measurements of the pulsar’s position as it orbits the other two stars. While the project is still a work in progress, it looks as though the results will be in total agreement with Einstein, Archibald said. “We can say that the degree to which the pulsar behaves abnormally is at most a few parts in a million. For an object with such strong gravity to still follow Einstein’s predictions so well, if there is one of these scalar fields, it has to have a really tiny effect.”
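Put in the form such tests usually take (a paraphrase of Archibald’s description, not a figure from the team’s paper), the measurement bounds the fractional difference between the accelerations of the pulsar and the inner white dwarf as both fall toward the outer white dwarf:

$$ \Delta \equiv \frac{a_{\mathrm{pulsar}} - a_{\mathrm{WD}}}{a_{\mathrm{WD}}}, \qquad |\Delta| \lesssim \text{a few} \times 10^{-6}. $$

General relativity predicts $\Delta = 0$ exactly; a scalar field that couples to the pulsar’s strong self-gravity would push $\Delta$ away from zero, which is why even a parts-per-million bound cuts deeply into such theories.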

The test, which should be published soon, will put the best constraints yet on a whole group of alternative gravity theories, she added. If a theory only works with some additional scalar field, then the field should change the behavior of the pulsar. “We have such sensitive tests of general relativity that they need to somehow hide the theory’s new behavior in the solar system and in pulsar systems like ours,” Archibald said.

Data from another pulsar system, dubbed the double pulsar, were meanwhile expected to eliminate the TeVeS theories. Detected in 2003, the double pulsar was until recently the only known binary neutron-star system in which both neutron stars are visible as pulsars. Freire and his colleagues have already confirmed that the double pulsar’s behavior is perfectly in line with general relativity. Right before LIGO’s October announcement of a neutron-star merger, the researchers were preparing to publish a paper that would kill off TeVeS. But LIGO did the job for them, Freire said. “We need not go through that anymore.”

Slippery Survivors

A few theories have survived the LIGO blow — and will probably survive the upcoming pulsar data, Zumalacárregui said. There are some Horndeski and beyond-Horndeski theories that do not change the speed of gravitational waves. Then there are so-called massive gravity theories. Ordinarily, physicists assume that the particle associated with the force of gravity — the graviton — has no mass. In these theories, the graviton has a very small but nonzero mass. The neutron-star merger puts tough limits on these theories, Zumalacárregui said, since a massive graviton would travel more slowly than light. But in some theories the mass is assumed to be extremely small, at least 20 orders of magnitude lower than the neutrino’s, which means that the graviton would still move at nearly the speed of light.
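The reason the merger constrains a massive graviton at all comes down to a standard relativistic dispersion relation (background physics, not something spelled out in the article): a graviton of mass $m$ carried in a wave of energy $E$ travels below the speed of light by approximately

$$ 1 - \frac{v_g}{c} \approx \frac{1}{2}\left(\frac{m c^2}{E}\right)^2. $$

For the extraordinarily small masses these theories invoke, the slowdown is far too small to have produced a noticeable lag behind the gamma rays, which is why such models survive the 1.7-second test.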

There are a few other less well-known survivors, some of which are important to keep exploring, Archibald said, as long as dark matter and dark energy remain elusive. “Dark energy might be our only observational clue pointing to a new and better theory of gravity — or it might be a mysterious fluid with strange properties, and nothing to do with gravity at all,” she said.

Still, killing off theories is simply how science is supposed to work, argue researchers who have been exploring alternative gravity theories. “This is what we do all the time, put forward a working hypothesis and test it,” said Enrico Barausse of the Astrophysics Institute of Paris, who has worked on MOND-like theories. “99.9 percent of the time you rule out the hypothesis; the remaining 0.1 percent of the time you win the Nobel Prize.”

Zumalacárregui, who has also worked on these theories, was “sad at first” when he realized that the neutron-star merger detection had proven Galileon theories wrong, but ultimately “very relieved it happened sooner rather than later,” he said. LIGO had been just about to close down for 18 months to upgrade the detector. “If the event had been a bit later, I would still be working on a wrong theory.”

So what’s next for general relativity and modified-gravity theories? “That question keeps me up at night more than I’d like,” Zumalacárregui said. “The good news is that we have narrowed our scope by a lot, and we can try to understand the few survivors much better.”


Schmidt thinks it’s necessary to measure the laws of gravity on large scales as directly as possible, using ongoing and future large galaxy surveys. “For example, we can compare the effect of gravity on light bending as well as galaxy velocities, typically predicted to be different in modified-gravity theories,” he said. Researchers also hope that future telescopes such as the Square Kilometre Array will discover more pulsar systems and provide better accuracy in pulsar timing to further improve gravity tests. And a space-based replacement for LIGO called LISA will study gravitational waves with exquisite accuracy — if indeed it launches as planned in the mid-2030s. “If that does not see any deviations from general relativity, I don’t know what will,” said Barausse.
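The comparison Schmidt describes is often phrased in terms of “gravitational slip” (a gloss on his remark, using one common convention rather than anything stated in the article): slow-moving galaxies respond to one metric potential, $\Psi$, while light bending responds to the combination $\Phi + \Psi$. In general relativity, with negligible anisotropic stress, the two potentials are equal,

$$ \eta \equiv \frac{\Phi}{\Psi} = 1, $$

whereas many modified-gravity theories predict $\eta \neq 1$, so measuring lensing and galaxy velocities for the same structures provides a direct large-scale test.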

But many physicists agree that it will take a long time to get rid of most alternative gravity models. Theorists have dozens of alternative gravity theories that could potentially explain dark matter and dark energy, Freire said. Some of these theories can’t make testable predictions, Archibald said, and many “have a parameter, a ‘knob’ you can turn to make them pass any test you like.” But at some point, said Nicolas Yunes, a physicist at Montana State University, “this gets silly and Occam’s razor wins.”

Still, “fundamentally we know that general relativity is wrong,” Stein said. “At the very core there must be some breakdown” at the quantum level. “Maybe we won’t see it from astronomical observations … but we owe it to ourselves, as empirical scientists, to check whether or not our mathematical models are working at these scales.”

This article was reprinted on Wired.com.

26 Mar 18:00

Learning hurts your brain

by John Timmer
Dgfitch

How do notes work in Old Reader? I hope my brain doesn't die finding out. :(

After publishing an especially challenging quantum mechanics article, it's not uncommon to hear some of our readers complain that their head hurts. Presumably, they mean that the article gave them a (metaphorical) headache. But it is actually possible that challenging your brain does a bit of physical damage to its nerve cells. Researchers are reporting that, after periods of heightened brain activity, signs of DNA damage appear in the cells there. The damage is normally repaired quickly, but the researchers hypothesize that an inability to repair it quickly enough may underlie some neurological diseases.

This research clearly started out as an attempt to understand Alzheimer's disease. The authors were working with mice that were genetically modified to mimic some of the mutations associated with early-onset forms of the disease in humans. As part of their testing, the team (based at UCSF) looked for signs of DNA damage in the brains of these animals. They generally found that the indications of damage went up when the brains of mice were active—specifically, after they were given a new environment to explore.

That might seem interesting on its own, but the surprise came when they looked at their control mice, which weren't at elevated risk of brain disorders. These mice also showed signs of DNA damage (although at slightly lower levels than the Alzheimer's-prone mice).
