The Wall Street Journal has an article today entitled “How To Argue With A Young Socialist.” As a young socialist, I was naturally curious to discover how I can be argued with. The writer, Crispin Sartwell, is not so much interested in giving socialism a fair hearing as in providing Wall Street Journal readers with “clinchers” that can be used if you “find yourself in debate with an energetic new democratic socialist, perhaps even around the family dinner table.” Sartwell’s article produces exactly one argument that you, the thoughtful capitalist reader, can use to embarrass your Sanders-voting, Medicare-for-All supporting niece or nephew. Presumably, since it is the only “clincher” Sartwell presents, he feels it is a knock-down argument. When we hear it, we’ll know we’ve been clinched.
Here is the argument: If socialists think Republicans are bad and shouldn’t have power, why do socialists want to give the government more power, when that power could fall into the hands of Republicans? Let me quote directly from Sartwell’s op-ed so that I don’t accidentally summarize it sarcastically and ungenerously:
If the U.S. were to follow the advocates of democratic socialism, it would involve increasing state control of the economy in many dimensions… Perhaps the government would guarantee things like universal employment at a living wage. In other words, socialism would dramatically increase the government’s power and resources while making Americans more dependent on it for goods and necessities… These vast powers and resources would be overseen by elected officials… [W]hat would it mean, you can ask your young interlocutor, if the U.S. were a democratic socialist country and all that power fell into [the hands of someone like Donald Trump]? … A government that feeds its citizens tells them what and whether to eat. It is possible that the U.S. might end up someday with a leader that the socialists find even more abhorrent than Mr. Trump. So why, you can ask your young friend, is he so eager to give people he may hate so much more power over his own life? … Why would people with this view be so eager to create the powers they believe likely to oppress them?… If the people who wield state power are no better or more trustworthy than anyone else, then the arc of history is liable to bend toward reaction or fascism or oppression. The more powers placed in the hands of government, the deeper this bend is likely to be.
Sartwell evidently thinks the question he asks will so flummox the poor young socialist that there will be nothing left to discuss. You’ll have shown the impossible contradiction in their worldview. Humiliated, they will be forced to rethink their childish position that “everyone should have the right to healthcare and a living wage.”
But I do not think the question gives as devastating a blow to democratic socialists as Sartwell thinks it does. In fact, I think it can be readily answered.
The challenge Sartwell raises is this: We want to expand government power, but in doing so we expand the powers that government has over us. Then when someone like Donald Trump is in charge of the state, we will have handed our enemies the stick with which to beat us.
This is not a trivial challenge. Anyone who thinks the government ought to have certain powers needs to contemplate the risk that those powers will fall into the wrong hands. But deciding whether that risk is worth the benefit depends on what kind of government power we are talking about. Many conservatives do what Sartwell does, and talk about “expanding the power of government” as if there are few important distinctions between different powers. This allows them to combine “the power to pay for universal free college” with “the power to indefinitely detain and torture people” under the same umbrella. Both, after all, constitute expansion of the power of government. But we democratic socialists are in favor of certain expansions and opposed to others, and we are intelligent enough to draw the distinction.
Realize that Sartwell’s argument could have been made in 1935 before the passage of the Social Security Act, or in 1965 before the introduction of Medicare. “You say you want the government to guarantee people an old-age pension,” our 1930s Sartwell would say to a young New Deal liberal. “But have you considered that by expanding government power you are creating a new institution that could fall into the hands of Republicans?” The answer then would be that “the power to guarantee old people an income” is a good power that should be expanded.
Ah, but what if that power falls into the wrong hands? Well, it has fallen into the wrong hands; there have been seven Republican presidents since the introduction of Social Security. And the main threat has actually been that they would take away the benefits, not that they would somehow use the Social Security Administration to oppress people. Likewise the food stamp program. Sartwell says that any government that “feeds its citizens” is telling them “what and whether to eat.” Leaving aside the fact that most citizens probably don’t need to be told “whether” to eat, it’s true that the government can put restrictions on how food vouchers can be used. (“Ban them from buying lobster!”) But the fact that Republicans are constantly trying to limit food stamps doesn’t mean people would be better off if SNAP benefits were taken away. Yes, theoretically, if the far-right controlled all three branches of government, they could use their power to restrict food stamps explicitly to white people. That risk does not mean that we should eliminate the subsidies that currently allow 42 million people to afford meals.
Sartwell does not actually give specific examples of the ways in which he thinks Republicans will use democratic socialist institutions for nefarious ends. When you start to get more specific than “handing powers to the state,” the fears seem overstated. We have had Medicare for 50 years and even under some truly awful presidents it has not posed a threat to the freedom of the citizenry. (Unless you’re one of those types who thinks taxation is literally slavery.) The main risk coming from the right is, once again, that this important guarantee will be taken away.
I do worry a lot about expanded government power. But not the power to create and manage a welfare state. I worry about powers of surveillance, policing, and imprisonment. Now, you might say that I am a fool, because the expanded power of the welfare state does give increased powers of surveillance. When the state knows people’s medical information, it has their most personal secrets. True! But the state’s powers of surveillance are so vast that I honestly don’t see expanded Medicare, or free college tuition, as the domain from which the main threats of abuse will come. I think we have much, much more to worry about from our gigantic military, the FBI, the CIA, and the NSA. Sartwell is right that the left should worry about what Republicans can do with whatever institutions we endorse. From the historical record, there is much more to fear on this front from our totally unaccountable national security state than from the Medicare bureaucracy. My kind of left—the ACLU kind—fights to put strict limits on the government powers that are easily susceptible to horrifying abuses, and roll back the expansion of the executive’s ability to, for instance, detain, torture, and kill people without trial.
A recognition of the risks of government power is also why democratic socialists so strongly emphasize the “democratic” part of their politics. We believe in having institutions that are accountable and that are controlled by the public rather than by autocrats. We’re not actually trying to have “the president” in charge of American healthcare, precisely because we recognize the potential for abuse when power is concentrated in the hands of a single person. More accountable representation is a big part of the democratic socialist agenda! This is actually a core reason why we have more confidence in government than in private corporations: at least with the government, in theory, you get to vote if you don’t like what it does! On the other hand, we cannot vote out Mark Zuckerberg if Facebook abuses its power. I agree with Sartwell that unaccountable institutions in which single individuals have huge amounts of power are very dangerous. That’s precisely why I don’t believe in such institutions: I am a democrat who wants to make sure that every program is responsive to popular needs and balances representation of the public will with protection of minority rights. Sartwell says we should fear a world in which the president “controls” all of the country’s hospitals and universities. Agreed! That’s why I think decision-making powers in socialized institutions should be decentralized as much as possible.
I have to say, Sartwell’s argument is extremely patronizing. Instead of dealing seriously with the agenda we put forth, Sartwell treats us like idiots who must have never given a moment’s thought to an extremely obvious question. Actually, for libertarian socialists, the question of what power should be held by whom is very important, and we think about it constantly. If this really is the “clincher,” a.k.a. the best argument they’ve got, then I am more confident than ever that the democratic socialists will win. I am grateful to the Wall Street Journal for giving me a renewed sense of hope.
In the 1920s, decades before counterculture guru Timothy Leary made waves self-experimenting with LSD and other psychedelic drugs at Harvard University, a young perceptual psychologist named Heinrich Klüver used himself as a guinea pig in an ongoing study into visual hallucinations. One day in his laboratory at the University of Minnesota, he ingested a peyote button, the dried top of the cactus Lophophora williamsii, and carefully documented how his visual field changed under its influence. He noted recurring patterns that bore a striking resemblance to shapes commonly found in ancient cave drawings and in the paintings of Joan Miró, and he speculated that perhaps they were innate to human vision. He classified the patterns into four distinct types that he dubbed “form constants”: lattices (including checkerboards, honeycombs and triangles), tunnels, spirals and cobwebs.
Some 50 years later, Jack Cowan of the University of Chicago set out to reproduce those hallucinatory form constants mathematically, in the belief that they could provide clues to the brain’s circuitry. In a seminal 1979 paper, Cowan and his graduate student Bard Ermentrout reported that the electrical activity of neurons in the first layer of the visual cortex could be directly translated into the geometric shapes people typically see when under the influence of psychedelics. “The math of the way the cortex is wired, it produces only these kinds of patterns,” Cowan explained recently. In that sense, what we see when we hallucinate reflects the architecture of the brain’s neural network.
But no one could figure out precisely how the intrinsic circuitry of the brain’s visual cortex generates the patterns of activity that underlie the hallucinations.
An emerging hypothesis points to a variation of the mechanism that produces so-called “Turing patterns.” In a 1952 paper, the British mathematician and code-breaker Alan Turing proposed a mathematical mechanism for generating many of the repeating patterns commonly seen in biology — the stripes of tigers or zebra fish, for example, or a leopard’s spots. Scientists have known for some time that the classic Turing mechanism probably can’t occur in a system as noisy and complicated as the brain. But a collaborator of Cowan’s, the physicist Nigel Goldenfeld of the University of Illinois, Urbana-Champaign, has proposed a twist on the original idea that factors in noise. Experimental evidence reported in two recent papers has bolstered the theory that this “stochastic Turing mechanism” is behind the geometric form constants people see when they hallucinate.
Images we “see” are essentially the patterns of excited neurons in the visual cortex. Light reflecting off the objects in our field of view enters the eye and comes to a focus on the retina, which is lined with photoreceptor cells that convert that light into electrochemical signals. These signals travel to the brain and stimulate neurons in the visual cortex in patterns that, under normal circumstances, mimic the patterns of light reflecting off objects in your field of view. But sometimes patterns can arise spontaneously from the random firing of neurons in the cortex — internal background noise, as opposed to external stimuli — or when a psychoactive drug or other influencing factor disrupts normal brain function and boosts the random firing of neurons. This is believed to be what happens when we hallucinate.
But why do we see the particular shapes that Klüver so meticulously classified? The widely accepted explanation proposed by Cowan, Ermentrout and their collaborators is that these patterns result from how the visual field is represented in the first visual area of the visual cortex. “If you opened up someone’s head and looked at the activity of the nerve cells, you would not see an image of the world as through a lens,” said Peter Thomas, a collaborator of Cowan’s who is now at Case Western Reserve University. Instead, Thomas explained, the image undergoes a transformation of coordinates as it is mapped onto the cortex. If neuronal activity takes the form of alternating stripes of firing and non-firing neurons, you perceive different things depending on the stripes’ orientation. You see concentric rings if the stripes are oriented one way. You see rays or funnel shapes emanating from a central point — the proverbial light at the end of the tunnel common in near-death experiences — if the stripes are perpendicular to that. And you see spiral patterns if the stripes have a diagonal orientation.
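The coordinate transformation Thomas describes is often approximated by a complex logarithm: a visual-field point at radius r and angle θ lands on the cortex at coordinates (log r, θ). Under that approximation, the correspondence between stripe orientation and perceived shape can be checked numerically. This is a toy sketch of that idea (the function names and the stripe parameterization are mine, not from the researchers' models):

```python
import numpy as np

def cortical_coords(x, y):
    """Map a visual-field point to cortical coordinates using the
    approximate retinocortical (complex-log) transform w = log z."""
    z = x + 1j * y
    return np.log(np.abs(z)), np.angle(z)   # (u, v) = (log r, theta)

def stripe_phase(u, v, a, b, k=8.0):
    """A cortical stripe pattern cos(k*(a*u + b*v)).
    The pair (a, b) sets the stripes' orientation on the cortex."""
    return np.cos(k * (a * u + b * v))

# Stripes of constant u (a=1, b=0) depend only on log r in the visual
# field, so the pattern is constant around any circle centered on the
# fovea -> the subject perceives concentric rings:
for r in [0.5, 1.0, 2.0]:
    phases = [stripe_phase(*cortical_coords(r * np.cos(t), r * np.sin(t)), 1, 0)
              for t in np.linspace(0.1, 2 * np.pi, 20)]
    assert np.allclose(phases, phases[0])   # constant around each circle

# By the same logic, stripes of constant v (a=0, b=1) depend only on
# the angle theta -> rays/funnels; a mixed orientation -> spirals.
```

The asserts pass because the stripe value depends only on log r, which is fixed along each circle; swapping the roles of u and v gives the funnel case, and a diagonal (a, b) gives the spiral.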
But if geometric visual hallucinations like Klüver’s form constants are a direct consequence of neural activity in the visual cortex, the question is why this activity spontaneously occurs — and why, in that case, it doesn’t cause us to hallucinate all the time. The stochastic Turing mechanism potentially addresses both questions.
Alan Turing’s original paper suggested that patterns like spots result from the interactions between two chemicals spreading through a system. Instead of diffusing evenly like a gas in a room until the density is uniform throughout, the two chemicals diffuse at different rates, which causes them to form distinct patches with differing chemical compositions. One of the chemicals serves as an activator that expresses a unique characteristic, such as the pigmentation of a spot or stripe, while the other acts as an inhibitor, disrupting the activator’s expression. Imagine, for example, a field of dry grass dotted with grasshoppers. If you start a fire at several random points, with no moisture present, the entire field will burn. But if the heat from the flames causes the fleeing grasshoppers to sweat, and that sweat dampens the grass around them, you’ll be left with periodic spots of unburned grass throughout the otherwise charred field. This fanciful analogy, invented by the mathematical biologist James Murray, illustrates the classic Turing mechanism.
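The role of the two different diffusion rates can be made concrete with a standard linear stability sketch: a uniform activator-inhibitor steady state that is stable without diffusion becomes unstable to a band of nonzero wavelengths once the inhibitor spreads sufficiently faster than the activator. The Jacobian entries below are illustrative numbers chosen to satisfy the Turing conditions, not values from any particular biological system:

```python
import numpy as np

# Jacobian of the reaction kinetics at the uniform steady state:
# the activator excites itself and the inhibitor; the inhibitor
# suppresses both. (Illustrative numbers only.)
J = np.array([[1.0, -1.0],
              [2.0, -1.5]])
D = np.diag([1.0, 20.0])   # the inhibitor diffuses much faster

def growth_rate(q):
    """Largest real part of the eigenvalues of J - D*q^2: the growth
    rate of a spatial perturbation with wavenumber q."""
    return np.max(np.linalg.eigvals(J - D * q**2).real)

qs = np.linspace(0, 1.5, 301)
rates = np.array([growth_rate(q) for q in qs])

assert growth_rate(0.0) < 0   # the well-mixed state is stable...
assert rates.max() > 0        # ...but a band of spatial modes grows
q_star = qs[rates.argmax()]   # fastest-growing wavenumber
assert q_star > 0             # pattern wavelength ~ 2*pi / q_star
```

The fastest-growing mode sets the spacing of the resulting spots or stripes, which is why Turing patterns have a characteristic wavelength rather than arbitrary patch sizes.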
Turing acknowledged that this was a greatly simplified toy model for how actual patterns arise, and he never applied it to a real biological problem. But it offers a framework to build on. In the case of the brain, Cowan and Ermentrout pointed out in their 1979 paper that neurons can be described as activators or inhibitors. Activator neurons encourage nearby cells to also fire, amplifying electrical signals, while inhibitory neurons shut down their nearest neighbors, dampening signals. The researchers noticed that activator neurons in the visual cortex were mostly connected to nearby activator neurons, while inhibitory neurons tended to connect to inhibitory neurons farther away, forming a wider network. This is reminiscent of the two different chemical diffusion rates required in the classic Turing mechanism, and in theory, it could spontaneously give rise to stripes or spots of active neurons scattered throughout a sea of low neuronal activity. These stripes or spots, depending on their orientation, could be what generates perceptions of lattices, tunnels, spirals and cobwebs.
While Cowan recognized that there could be some kind of Turing mechanism at work in the visual cortex, his model didn’t account for noise — the random, bursty firing of neurons — which seemed likely to interfere with the formation of Turing patterns. Meanwhile, Goldenfeld and other researchers had been applying Turing’s ideas in ecology, as a model for predator-prey dynamics. In that scenario, the prey serve as activators, seeking to reproduce and increase their numbers, while predators serve as inhibitors, keeping the prey population in check with their kills. Thus, together they form Turing-like spatial patterns. Goldenfeld was studying how random fluctuations in predator and prey populations affect these patterns. He knew about Cowan’s work in neuroscience and soon realized his insights could apply there as well.
Houses With Eyes and Jaws
A condensed matter physicist by training, Goldenfeld gravitates toward interdisciplinary research, applying concepts and techniques from physics and math to biology and evolutionary ecology. Roughly 10 years ago, he and his then graduate student Tom Butler were pondering how the spatial distribution of predators and prey changes in response to random local fluctuations in their populations, for instance if a herd of sheep is attacked by wolves. Goldenfeld and Butler found that when a herd’s population is relatively low, random fluctuations can have big effects, even leading to extinction. It became clear that ecological models need to take random fluctuations into account rather than just describe the average behavior of populations. “Once I knew how to do the fluctuation calculation for pattern formation,” Goldenfeld said, “it was an obvious next step to apply this to the hallucination problem.”
In the brain, it’s the number of neurons that are on or off that randomly fluctuates rather than sheep and wolf populations. If an activator neuron randomly switches on, it can cause other nearby neurons to also switch on. Conversely, when an inhibitory neuron randomly switches on, it causes nearby neurons to switch off. Because the connections between inhibitory neurons are long-range, any inhibitory signals that randomly arise spread faster than random excitatory signals — exactly what’s needed for a Turing-like mechanism. Goldenfeld’s models suggested that stripes of active and inactive neurons will form in a Turing-like pattern. He dubbed these stochastic Turing patterns.
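The distinction between a classic and a stochastic Turing pattern can be sketched with a linear noise calculation: pick parameters just below the deterministic Turing threshold, so every spatial mode decays on average, then drive each species with white noise and compute the stationary variance of each mode via a Lyapunov equation. A band of nonzero wavenumbers ends up far noisier than the uniform mode, which is the noise-sustained pattern. The numbers here are illustrative, and this is only the general linear-fluctuation idea, not the specific calculation in Goldenfeld's papers:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Linearized activator-inhibitor kinetics (illustrative numbers).
J = np.array([[1.0, -1.0],
              [2.0, -1.5]])
Da, Dh = 1.0, 4.4   # inhibitor spreads faster, but just below the
                    # deterministic Turing threshold (Dh = 4.5 here)

def A(q):
    """Deterministic dynamics of the spatial mode with wavenumber q."""
    return J - np.diag([Da, Dh]) * q**2

def noise_amplitude(q):
    """Stationary variance of mode q when each species feels unit
    white noise: solve the Lyapunov equation A P + P A^T = -I."""
    P = solve_continuous_lyapunov(A(q), -np.eye(2))
    return np.trace(P)

qs = np.linspace(0, 1.5, 301)
S = np.array([noise_amplitude(q) for q in qs])

# Every mode decays deterministically: no classic Turing instability...
assert all(np.linalg.eigvals(A(q)).real.max() < 0 for q in qs)
# ...yet noise pumps a band of nonzero wavenumbers far above the
# uniform (q = 0) mode: a stochastic Turing pattern.
q_star = qs[S.argmax()]
assert q_star > 0 and S.max() > 5 * S[0]
```

The closer the parameters sit to the deterministic threshold, the sharper and taller the noisy peak, which is why stochastic patterns can appear well before the classic instability condition is met.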
However, to function properly, the visual cortex must be primarily driven by external stimuli, not by its own internal noisy fluctuations. What keeps stochastic Turing patterns from constantly forming and causing us to constantly hallucinate? Goldenfeld and colleagues argue that even though the firing of neurons can be random, their connections are not. Whereas short-range connections between excitatory neurons are common, long-range connections between inhibitory neurons are sparse, and Goldenfeld thinks this helps suppress the spread of random signals. He and his cohorts tested this hypothesis by creating two separate neural network models. One was based on the actual wiring of the visual cortex, and the other was a generic network with random connections. In the generic model, normal visual function was substantially degraded because the random firing of neurons served to amplify the Turing effect. “A generically wired visual cortex would be contaminated by hallucinations,” Goldenfeld said. In the realistic model of the cortex, however, internal noise was effectively dampened.
Goldenfeld suggests that evolution has selected for a particular network structure that inhibits hallucinatory patterns: The sparseness of connections between inhibitory neurons prevents inhibitory signals from traveling long distances, disrupting the stochastic Turing mechanism and the perception of funnels, cobwebs, spirals and so forth. The dominant patterns that spread through the network will be based on external stimuli — a very good thing for survival, since you want to be able to spot a snake and not be distracted by a pretty spiral shape.
“If the cortex had been built with these long-range inhibitory connections all over the place, then the tendency to form these patterns would be stronger than the tendency to process the visual input coming in. It would be a disaster and we would never have survived,” Thomas said. Because long-range inhibitory connections are sparse, “the models don’t produce spontaneous patterns unless you force them to, by simulating the effects of hallucinogenic drugs.”
Experiments have shown that hallucinogens like LSD appear to disrupt the normal filtering mechanisms the brain employs, perhaps boosting long-range inhibitory connections and therefore permitting random signals to amplify in a stochastic Turing effect.
Goldenfeld and collaborators have not yet tested their theory of visual hallucinations experimentally, but hard evidence that stochastic Turing patterns do arise in biological systems has emerged in the last few years. Around 2010, Goldenfeld heard about work done by Ronald Weiss, a synthetic biologist at the Massachusetts Institute of Technology who had been struggling for years to find the appropriate theoretical framework to explain some intriguing experimental results.
Years earlier, Weiss and his team had grown bacterial biofilms that were genetically engineered to express one of two different signaling molecules. In an effort to demonstrate the growth of a classic Turing pattern, they tagged the signaling molecules with fluorescent markers so that the activators glowed red and the inhibitors glowed green. Although the experiment started out with a homogenous biofilm, over time a Turing-like pattern emerged, with red polka dots scattered throughout a swath of green. However, the red dots were much more haphazardly located than, say, leopards’ spots. Additional experiments also failed to yield the desired results.
When Goldenfeld heard about these experiments, he suspected that Weiss’s data could be viewed from a stochastic point of view. “Rather than trying to make the patterns more regular and less noisy,” Weiss said, “we realized through our collaboration with Nigel that these are really stochastic Turing patterns.” Weiss, Goldenfeld and collaborators finally published their paper in the Proceedings of the National Academy of Sciences last month, 17 years after the research began.
The biofilms formed stochastic Turing patterns because gene expression is a noisy process. According to Joel Stavans of the Weizmann Institute of Science in Israel, that noise is responsible for disparities among cells, which can have the same genetic information yet behave differently. In recently published work, Stavans and his colleagues investigated how noise in gene expression can lead to stochastic Turing patterns in cyanobacteria, ancient organisms that produce a large proportion of the oxygen on Earth. The researchers studied anabaena, a type of cyanobacteria with a simple structure of cells attached to one another in a long train. An anabaena’s cells can specialize to perform one of two activities: photosynthesis, or converting nitrogen in the atmosphere into proteins. An anabaena might have, for instance, one nitrogen-fixing cell, then 10 or 15 photosynthesis cells, then another nitrogen-fixing cell, and so on, in what appears to be a stochastic Turing pattern. The activator, in this case, is a protein that creates a positive feedback loop to produce more such proteins. At the same time, the protein may also produce other proteins that diffuse to neighboring cells and inhibit the first protein’s production. This is the primary feature of a Turing mechanism: an activator and an inhibitor fighting against each other. In anabaena, noise drives the competition.
Researchers say the fact that stochastic Turing processes appear to be at work in these two biological contexts adds plausibility to the theory that the same mechanism occurs in the visual cortex. The findings also demonstrate how noise plays a pivotal role in biological organisms. “There is not a direct correlation between how we program computers” and how biological systems work, Weiss said. “Biology requires different frameworks and design principles. Noise is one of them.”
There is still much more to understand about hallucinations. Jean-Paul Sartre experimented with mescaline in Paris in 1935 and found it distorted his visual perception for weeks. Houses appeared to have “leering faces, all eyes and jaws,” clock faces looked like owls, and he saw crabs following him around all the time. These are much higher-level hallucinations than Klüver’s simple form constants. “The early stages of visual hallucination are very simple — these geometric patterns,” Ermentrout said. But when higher cognitive functions kick in, such as memory, he said, “you start to see more complex hallucinations and you try and make sense of them. I believe that all you’re seeing is the spontaneous emergence of activity as the higher brain areas become more excited.”
Back in the ’20s, Klüver also worked with subjects who reported tactile hallucinations, such as cobwebs crawling across their skin. Ermentrout thinks this is consistent with a cobweb-like form constant mapped onto the somatosensory cortex. Similar processes might play out in the auditory cortex, which could account not only for auditory hallucinations but for phenomena like tinnitus. Cowan agrees, noting that the brain has similar wiring throughout, so if a theory of hallucinations “works for vision, it’s going to work for all the other senses.”
The theoretical physicist John Wheeler once used the phrase “great smoky dragon” to describe a particle of light going from a source to a photon counter. “The mouth of the dragon is sharp, where it bites the counter. The tail of the dragon is sharp, where the photon starts,” Wheeler wrote. The photon, in other words, has definite reality at the beginning and end. But its state in the middle — the dragon’s body — is nebulous. “What the dragon does or looks like in between we have no right to speak.”
Wheeler was espousing the view that elementary quantum phenomena are not real until observed, a philosophical position called anti-realism. He even designed an experiment to show that if you hold on to realism — in which quantum objects such as photons always have definite, intrinsic properties, a position that encapsulates a more classical view of reality — then one is forced to concede that the future can influence the past. Given the absurdity of backward time-travel, Wheeler’s experiment became an argument for anti-realism at the level of the quantum.
But in May, Rafael Chaves and colleagues at the International Institute of Physics in Natal, Brazil, found a loophole. They showed that Wheeler’s experiment, given certain assumptions, can be explained using a classical model that attributes to a photon an intrinsic nature. They gave the dragon a well-defined body, but one that is hidden from the mathematical formalism of standard quantum mechanics.
Chaves’s team then proposed a twist to Wheeler’s experiment to test the loophole. With unusual alacrity, three teams raced to do the modified experiment. Their results, reported in early June, have shown that a class of classical models that advocate realism cannot make sense of the results. Quantum mechanics may be weird, but it’s still, oddly, the simplest explanation around.
Wheeler devised his experiment in 1983 to highlight one of the dominant conceptual conundrums in quantum mechanics: wave-particle duality. Quantum objects seem to act either like particles or waves, but never both at the same time. This feature of quantum mechanics seems to imply that objects have no inherent reality until observed. “Physicists have had to grapple with wave-particle duality as an essential, strange feature of quantum theory for a century,” said David Kaiser, a physicist and historian of science at the Massachusetts Institute of Technology. “The idea pre-dates other quintessentially strange features of quantum theory, such as Heisenberg’s uncertainty principle and Schrödinger’s cat.”
The phenomenon is underscored by a special case of the famous double-slit experiment called the Mach-Zehnder interferometer.
In the experiment, a single photon is fired at a half-silvered mirror, or beam splitter. The photon is either reflected or transmitted with equal probability — and thus can take one of two paths. In this case, the photon will take either path 1 or path 2, and then go on to hit either detector D1 or D2 with equal probability. The photon acts like an indivisible whole, showing us its particle-like nature.
But there’s a twist. At the point where path 1 and path 2 cross, one can add a second beam splitter, which changes things. In this setup, quantum mechanics says that the photon seems to take both paths at once, as a wave would. The two waves come back together at the second beam splitter. The experiment can be set up so that the waves combine constructively — peak to peak, trough to trough — only when they move toward D1. The path toward D2, by contrast, represents destructive interference. In such a setup, the photon will always be found at D1 and never at D2. Here, the photon displays its wavelike nature.
Wheeler’s genius lay in asking: what if we delay the choice of whether to add the second beam splitter? Let’s assume the photon enters the interferometer without the second beam splitter in place. It should act like a particle. One can, however, add the second beam splitter at the very last nanosecond. Both theory and experiment show that the photon, which until then was presumably acting like a particle and would have gone to either D1 or D2, now acts like a wave and goes only to D1. To do so, it had to seemingly be in both paths simultaneously, not one path or the other. In the classical way of thinking, it’s as if the photon went back in time and changed its character from particle to wave.
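Both configurations can be checked with a two-amplitude calculation: a 50/50 beam splitter acts as a 2×2 unitary on the pair of path amplitudes, and the detection probabilities are the squared magnitudes. This is a minimal sketch assuming equal path lengths; which output port is labeled D1 is a convention:

```python
import numpy as np

# A 50/50 beam splitter acting on the (path 1, path 2) amplitudes.
BS = np.array([[1, 1j],
               [1j, 1]]) / np.sqrt(2)

photon_in = np.array([1, 0])   # photon enters along path 1

# Without the second beam splitter, the detectors simply record which
# path the photon took after BS1 -> 50/50, particle-like statistics.
after_bs1 = BS @ photon_in
p_no_bs2 = np.abs(after_bs1) ** 2
assert np.allclose(p_no_bs2, [0.5, 0.5])

# With the second beam splitter, the two path amplitudes recombine and
# interfere; for equal path lengths one port cancels completely.
after_bs2 = BS @ after_bs1
p_with_bs2 = np.abs(after_bs2) ** 2
assert np.allclose(p_with_bs2, [0.0, 1.0])   # always the same detector
```

The interference in the second case uses both path amplitudes, which is exactly why inserting BS2 at the last nanosecond seems, classically, to rewrite what the photon had been doing all along.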
One way to avoid such retro-causality is to deny the photon any intrinsic reality and argue that the photon becomes real only upon measurement. That way, there is nothing to undo.
Such anti-realism, which is often associated with the Copenhagen interpretation of quantum mechanics, took a theoretical knock with Chaves’s work, at least in the context of this experiment. His team wanted to explain counterintuitive aspects of quantum mechanics using a new set of ideas called causal modeling, which has grown in popularity in the past decade, advocated by computer scientist Judea Pearl and others. Causal modeling involves establishing cause-and-effect relationships between various elements of an experiment. Often when studying correlated events — call them A and B — if one cannot conclusively say that A causes B, or that B causes A, there exists a possibility that a previously unsuspected or “hidden” third event, C, causes both. In such cases, causal modeling can help uncover C.
Chaves and his colleagues Gabriela Lemos and Jacques Pienaar focused on Wheeler’s delayed choice experiment, fully expecting to fail at finding a model with a hidden process that both grants a photon intrinsic reality and also explains its behavior without having to invoke retro-causality. They thought they would prove that the delayed-choice experiment is “super counterintuitive, in the sense that there is no causal model that is able to explain it,” Chaves said.
But they were in for a surprise. The task proved relatively easy. They began by assuming that the photon, immediately after it has crossed the first beam splitter, has an intrinsic state denoted by a “hidden variable.” A hidden variable, in this context, is something that’s absent from standard quantum mechanics but that influences the photon’s behavior in some way. The experimenter then chooses to add or remove the second beam splitter. Causal modeling, which prohibits backward time travel, ensures that the experimenter’s choice cannot influence the past intrinsic state of the photon.
Given the hidden variable, which implies realism, the team then showed that it’s possible to write down rules that use the variable’s value and the presence or absence of the second beam splitter to guide the photon to D1 or D2 in a manner that mimics the predictions of quantum mechanics. Here was a classical, causal, realistic explanation. They had found a new loophole.
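The flavor of such a rule is easy to convey in code. Below is a minimal toy sketch (not the actual model from Chaves's paper; the detector rules are invented for illustration) in which a single hidden bit, fixed before the experimenter's choice, reproduces the classic delayed-choice statistics without any retro-causality:

```python
import random

def classical_photon(second_bs_present: bool) -> str:
    """Toy hidden-variable account of Wheeler's delayed-choice setup.

    lam is the photon's hidden intrinsic state, fixed at the first beam
    splitter, before the experimenter decides whether to insert the
    second beam splitter. Nothing here reaches backward in time.
    """
    lam = random.randint(0, 1)           # hidden variable, set in the past
    if second_bs_present:
        return "D1"                      # mimic wavelike interference
    return "D1" if lam == 0 else "D2"    # mimic particle-like 50/50 clicks

trials = 10_000
with_bs = [classical_photon(True) for _ in range(trials)]
without_bs = [classical_photon(False) for _ in range(trials)]

print(all(d == "D1" for d in with_bs))              # always detector D1
print(sum(d == "D1" for d in without_bs) / trials)  # roughly half at D1
```

With the second beam splitter present the photon always lands at D1, and without it the clicks split about evenly, exactly the statistics the quantum description predicts for this experiment.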
This surprised some physicists, said Tim Byrnes, a theoretical quantum physicist at New York University, Shanghai. “What people didn’t really appreciate is that this kind of experiment is susceptible to a classical version that perfectly mimics the experimental results,” Byrnes said. “You could construct a hidden variable theory that didn’t involve quantum mechanics.”
“This was the step zero,” Chaves said. The next step was to figure out how to modify Wheeler’s experiment in such a way that it could distinguish between this classical hidden variable theory and quantum mechanics.
In their modified thought experiment, the full Mach-Zehnder interferometer is intact; the second beam splitter is always present. Instead, two “phase shifts” — one near the beginning of the experiment, one toward the end — serve the role of experimental dials that the researcher can adjust at will.
The net effect of the two phase shifts is to change the relative lengths of the paths. This changes the interference pattern, and with it, the presumed “wavelike” or “particle-like” behavior of the photon. For example, the value of the first phase shift could be such that the photon acts like a particle inside the interferometer, but the second phase shift could force it to act like a wave. The researchers require that the second phase shift is set after the first.
With this setup in place, Chaves’s team came up with a way to distinguish between a classical causal model and quantum mechanics. Say the first phase shift can take one of three values, and the second one of two values. That makes six possible experimental settings in total. They calculated what they expected to see for each of these six settings. Here, the predictions of a classical hidden variable model and standard quantum mechanics differ. They then constructed a formula. The formula takes as its input probabilities calculated from the number of times that photons land on particular detectors (based on the setting of the two phase shifts). If the formula equals zero, the classical causal model can explain the statistics. But if the equation spits out a number greater than zero, then, subject to some constraints on the hidden variable, there’s no classical explanation for the experiment’s outcome.
Chaves teamed up with Fabio Sciarrino, a quantum physicist at the University of Rome La Sapienza, and his colleagues to test the inequality. Simultaneously, two teams in China — one led by Jian-Wei Pan, an experimental physicist at the University of Science and Technology of China (USTC) in Hefei, and another led by Guang-Can Guo, also at USTC — carried out the experiment.
Each team implemented the scheme slightly differently. Guo’s group stuck to the basics, using an actual Mach-Zehnder interferometer. “It is the one that I would say is actually the closest to Wheeler’s original proposal,” said Howard Wiseman, a theoretical physicist at Griffith University in Brisbane, Australia, who was not part of any team.
But all three showed that the formula is greater than zero with irrefutable statistical significance. They ruled out the classical causal models of the kind that can explain Wheeler’s delayed-choice experiment. The loophole has been closed. “Our experiment has salvaged Wheeler’s famous thought experiment,” Pan said.
Hidden Variables That Remain
Kaiser is impressed by Chaves’s “elegant” theoretical work and the experiments that ensued. “The fact that each of the recent experiments has found clear violations of the new inequality … provides compelling evidence that ‘classical’ models of such systems really do not capture how the world works, even as quantum-mechanical predictions match the latest results beautifully,” he said.
The formula comes with certain assumptions. The biggest one is that the classical hidden variable used in the causal model can take one of two values, encoded in one bit of information. Chaves thinks this is reasonable, since the quantum system — the photon — can also only encode one bit of information. (It either goes in one arm of the interferometer or the other.) “It’s very natural to say that the hidden variable model should also have dimension two,” Chaves said.
But a hidden variable with additional information-carrying capacity can restore the classical causal model’s ability to explain the statistics observed in the modified delayed-choice experiment.
In addition, the most popular hidden variable theory remains unaffected by these experiments. The de Broglie-Bohm theory, a deterministic and realistic alternative to standard quantum mechanics, is perfectly capable of explaining the delayed-choice experiment. In this theory, particles always have positions (which are the hidden variables), and hence have objective reality, but they are guided by a wave. So reality is both wave and particle. The wave goes through both paths, the particle through one or the other. The presence or absence of the second beam splitter affects the wave, which then guides the particle to the detectors — with exactly the same results as standard quantum mechanics.
For Wiseman, the debate over Copenhagen versus de Broglie-Bohm in the context of the delayed-choice experiment is far from settled. “So in Copenhagen, there is no strange inversion of time precisely because we have no right to say anything about the photon’s past,” he wrote in an email. “In de Broglie-Bohm there is a reality independent of our knowledge, but there is no problem as there is no inversion — there is a unique causal (forward in time) description of everything.”
Kaiser, even as he lauds the efforts so far, wants to take things further. In current experiments, the choice of whether or not to add the second phase shift or the second beam splitter in the classic delayed-choice experiment was being made by a quantum random-number generator. But what’s being tested in these experiments is quantum mechanics itself, so there’s a whiff of circularity. “It would be helpful to check whether the experimental results remain consistent, even under complementary experimental designs that relied on entirely different sources of randomness,” Kaiser said.
To this end, Kaiser and his colleagues have built such a source of randomness using photons coming from distant quasars, some from more than halfway across the universe. The photons were collected with a one-meter telescope at the Table Mountain Observatory in California. If a photon had a wavelength less than a certain threshold value, the random number generator spit out a 0, otherwise a 1. In principle, this bit can be used to randomly choose the experimental settings. If the results continue to support Wheeler’s original argument, then “it gives us yet another reason to say that wave-particle duality is not going to be explained away by some classical physics explanation,” Kaiser said. “The range of conceptual alternatives to quantum mechanics has again been shrunk, been pushed back into a corner. That’s really what we are after.”
For now, the dragon’s body, which for a brief few weeks had come into focus, has gone back to being smoky and indistinct.
In 1963, Maria Goeppert Mayer won the Nobel Prize in physics for describing the layered, shell-like structures of atomic nuclei. No woman has won since.
One of the many women who, in a different world, might have won the physics prize in the intervening 55 years is Sau Lan Wu. Wu is the Enrico Fermi Distinguished Professor of Physics at the University of Wisconsin, Madison, and an experimentalist at CERN, the laboratory near Geneva that houses the Large Hadron Collider. Wu’s name appears on more than 1,000 papers in high-energy physics, and she has contributed to a half-dozen of the most important experiments in her field over the past 50 years. She has even realized the improbable goal she set for herself as a young researcher: to make at least three major discoveries.
Wu was an integral member of one of the two groups that observed the J/psi particle, which heralded the existence of a fourth kind of quark, now called the charm. The discovery, in 1974, was known as the November Revolution, a coup that led to the establishment of the Standard Model of particle physics. Later in the 1970s, Wu did much of the math and analysis to discern the three “jets” of energy flying away from particle collisions that signaled the existence of gluons — particles that mediate the strong force holding protons and neutrons together. This was the first observation of particles that communicate a force since scientists recognized photons of light as the carriers of electromagnetism. Wu later became one of the group leaders for the ATLAS experiment, one of the two collaborations at the Large Hadron Collider that discovered the Higgs boson in 2012, filling in the final piece of the Standard Model. She continues to search for new particles that would transcend the Standard Model and push physics forward.
Sau Lan Wu was born in occupied Hong Kong during World War II. Her mother was the sixth concubine to a wealthy businessman who abandoned them and her younger brother when Wu was a child. She grew up in abject poverty, sleeping alone in a space behind a rice shop. Her mother was illiterate, but she urged her daughter to pursue an education and become independent of volatile men.
Wu graduated from a government school in Hong Kong and applied to 50 universities in the United States. She received a scholarship to attend Vassar College and arrived with $40 to her name.
Although she originally intended to become an artist, she was inspired to study physics after reading a biography of Marie Curie. She worked on experiments during consecutive summers at Brookhaven National Laboratory on Long Island, and she attended graduate school at Harvard University. She was the only woman in her cohort and was barred from entering the male dormitories to join the study groups that met there. She has labored since then to make a space for everyone in physics, mentoring more than 60 men and women through their doctorates.
Quanta Magazine joined Sau Lan Wu on a gray couch in sunny Cleveland in early June. She had just delivered an invited lecture about the discovery of gluons at a symposium to honor the 50th birthday of the Standard Model. The interview has been condensed and edited for clarity.
You work on the largest experiments in the world, mentor dozens of students, and travel back and forth between Madison and Geneva. What is a normal day like for you?
Very tiring! In principle, I am full-time at CERN, but I do go to Madison fairly often. So I do travel a lot.
How do you manage it all?
Well, I think the key is that I am totally devoted. My husband, Tai Tsun Wu, is also a professor, in theoretical physics at Harvard. Right now, he’s working even harder than me, which is hard to imagine. He’s doing a calculation about the Higgs boson decay that is very difficult. But I encourage him to work hard, because it’s good for your mental state when you are older. That’s why I work so hard, too.
Of all the discoveries you were involved in, do you have a favorite?
Discovering the gluon was a fantastic time. I was just a second- or third-year assistant professor. And I was so happy. That’s because I was the baby, the youngest of all the key members of the collaboration.
The gluon was the first force-carrying particle discovered since the photon. The W and Z bosons, which carry the weak force, were discovered a few years later, and the researchers who found them won a Nobel Prize. Why was no prize awarded for the discovery of the gluon?
Well, you are going to have to ask the Nobel committee that. I can tell you what I think, though. Only three people can win a Nobel Prize. And there were three other physicists on the experiment with me who were more senior than I was. They treated me very well. But I pushed the idea of searching for the gluon right away, and I did the calculations. I didn’t even talk to theorists. Although I married a theorist, I never really paid attention to what the theorists told me to do.
How did you wind up being the one to do those calculations?
If you want to be successful, you have to be fast. But you also have to be first. So I did the calculations to make sure that as soon as a new collider at DESY turned on in Hamburg, we could see the gluon and recognize its signal of three jets of particles. We were not so sure in those days that the signal for the gluon would be clear-cut, because the concept of jets had only been introduced a couple of years earlier, but this seemed to be the only way to discover gluons.
You were also involved in discovering the Higgs boson, the particle in the Standard Model that gives many other particles their masses. How was that experiment different from the others that you were part of?
I worked a lot more and a lot longer to discover the Higgs than I have on anything else. I worked for over 30 years, doing one experiment after another. I think I contributed a lot to that discovery. But the ATLAS collaboration at CERN is so large that you can’t even talk about your individual contribution. There are 3,000 people who built and worked on our experiment. How can anyone claim anything? In the old days, life was easier.
Has it gotten any easier to be a woman in physics than when you started?
Not for me. But for younger women, yes. There is a trend among funding agencies and institutions to encourage younger women, which I think is great. But for someone like me it is harder. I went through a very difficult time. And now that I am established others say: Why should we treat you any differently?
Who were some of your mentors when you were a young researcher?
Bjørn Wiik really helped me when I was looking for the gluon at DESY.
Well, when I started at the University of Wisconsin, I was looking for a new project. I was interested in doing electron-positron collisions, which could give the clearest indication of a gluon. So I went to talk to another professor at Wisconsin who did these kinds of experiments at SLAC, the lab at Stanford. But he was not interested in working with me.
So I tried to join a project at the new electron-positron collider at DESY. I wanted to join the JADE experiment. I had some friends working there, so I went to Germany and I was all set to join them. But then I heard that no one had told a big professor in the group about me, so I called him up. He said, “I am not sure if I can take you, and I am going on vacation for a month. I’ll phone you when I get back.” I was really sad because I was already in Germany at DESY.
But then I ran into Bjørn Wiik, who led a different experiment called TASSO, and he said, “What are you doing here?” I said, “I tried to join JADE, but they turned me down.” He said, “Come and talk to me.” He accepted me the very next day. And the thing is, JADE later broke their chamber, and they could not have observed the three-jet signal for gluons when we observed it first at TASSO. So I have learned that if something does not work out for you in life, something else will.
You certainly turned that negative into a positive.
Yes. The same thing happened when I left Hong Kong to attend college in the U.S. I applied to 50 universities after I went through a catalog at the American consulate. I wrote in every application, “I need a full scholarship and room and board,” because I had no money. Four universities replied. Three of them turned me down. Vassar was the only American college that accepted me. And it turns out, it was the best college of all the ones I applied to.
If you persist, something good is bound to happen. My philosophy is that you have to work hard and have good judgment. But you also have to have luck.
I know this is an unfair question, because no one ever asks men, even though we should, but how can society inspire more women to study physics or consider it as a career?
Well, I can only say something about my field, experimental high-energy physics. I think my field is very hard for women. I think partially it’s the problem of family.
My husband and I did not live together for 10 years, except during the summers. And I gave up having children. When I was considering having children, it was around the time when I was up for tenure and a grant. I feared I would lose both if I got pregnant. I was less worried about actually having children than I was about walking into my department or a meeting while pregnant. So it’s very, very hard for families.
I think it still can be.
Yeah, but for the younger generation it’s different. Nowadays, a department looks good if it supports women. I don’t mean that departments are deliberately doing that only to look better, but they no longer actively fight against women. It’s still hard, though. Especially in experimental high-energy physics. I think there is so much traveling that it makes having a family or a life difficult. Theory is much easier.
You have done so much to help establish the Standard Model of particle physics. What do you like about it? What do you not like?
It’s just amazing that the Standard Model works as well as it does. I like that every time we try to search for something that is not accounted for in the Standard Model, we do not find it, because the Standard Model says we shouldn’t.
But back in my day, there was so much that we had yet to discover and establish. The problem now is that everything fits together so beautifully and the Model is so well confirmed. That’s why I miss the time of the J/psi discovery. Nobody expected that, and nobody really had a clue what it was.
But maybe those days of surprise aren’t over.
We know that the Standard Model is an incomplete description of nature. It doesn’t account for gravity, the masses of neutrinos, or dark matter — the invisible substance that seems to make up roughly six-sevenths of the universe’s matter. Do you have a favorite idea for what lies beyond the Standard Model?
Well, right now I am searching for the particles that make up dark matter. The only thing is, I am committed to working at the Large Hadron Collider at CERN. But a collider may or may not be the best place to look for dark matter. It’s out there in the galaxies, but we don’t see it here on Earth.
Still, I am going to try. If dark matter has any interactions with the known particles, it can be produced via collisions at the LHC. But weakly interacting dark matter would not leave a visible signature in our detector at ATLAS, so we have to intuit its existence from what we actually see. Right now, I am concentrating on finding hints of dark matter in the form of missing energy and momentum in a collision that produces a single Higgs boson.
What else have you been working on?
Our most important task is to understand the properties of the Higgs boson, which is a completely new kind of particle. The Higgs is more symmetric than any other particle we know about; it’s the first particle that we have discovered without any spin. My group and I were major contributors to the very recent measurement of Higgs bosons interacting with top quarks. That observation was extremely challenging. We examined five years of collision data, and my team worked intensively on advanced machine-learning techniques and statistics.
In addition to studying the Higgs and searching for dark matter, my group and I also contributed to the silicon pixel detector, to the trigger system, and to the computing system in the ATLAS detector. We are now improving these during the shutdown and upgrade of the LHC. We are also very excited about the near future, because we plan to start using quantum computing to do our data analysis.
Do you have any advice for young physicists just starting their careers?
Some of the young experimentalists today are a bit too conservative. In other words, they are afraid to do something that is not in the mainstream. They fear doing something risky and not getting a result. I don’t blame them. It’s the way the culture is. My advice to them is to figure out what the most important experiments are and then be persistent. Good experiments always take time.
But not everyone gets to take that time.
Right. Young students don’t always have the freedom to be very innovative, unless they can do it in a very short amount of time and be successful. They don’t always get to be patient and just explore. They need to be recognized by their collaborators. They need people to write them letters of recommendation.
The only thing that you can do is work hard. But I also tell my students, “Communicate. Don’t close yourselves off. Try to come up with good ideas on your own but also in groups. Try to innovate. Nothing will be easy. But it is all worth it to discover something new.”
Correction July 18, 2018: Due to miscommunication, this article was slightly revised to more accurately reflect Wu’s view of the current state of particle physics.
get 2 kno ur complexity classes
How fundamentally difficult is a problem? That’s the basic task of computer scientists who hope to sort problems into what are called complexity classes. These are groups that contain all the computational problems that require less than some fixed amount of a computational resource — something like time or memory. Take a toy example featuring a large number such as 123,456,789,001. One might ask: Is this number prime, divisible only by 1 and itself? Computer scientists can answer this using fast algorithms — algorithms that don’t bog down as the number gets arbitrarily large. In our case, 123,456,789,001 is not a prime number. Then we might ask: What are its prime factors? Here no such fast algorithm is known — not unless you use a quantum computer. Therefore computer scientists believe that the two problems are in different complexity classes.
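The asymmetry is easy to see in a quick sketch using plain trial division (purely illustrative: real fast primality tests such as Miller-Rabin don't work by trial division, and serious factoring algorithms are far more sophisticated):

```python
def smallest_factor(n: int) -> int:
    """Find n's smallest nontrivial factor by trial division.

    The loop can take on the order of sqrt(n) steps, which blows up as n
    grows; no known classical algorithm factors in polynomial time.
    """
    i = 2
    while i * i <= n:
        if n % i == 0:
            return i
        i += 1
    return n  # no factor below sqrt(n), so n is prime

n = 123_456_789_001
f = smallest_factor(n)
print("prime" if f == n else f"not prime: divisible by {f}")
```

Here the search happens to end quickly (the number has a small factor), but for a number whose smallest factor has hundreds of digits, this kind of search is hopeless, while fast polynomial-time primality tests would still answer the prime-or-not question easily.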
Many different complexity classes exist, though in most cases researchers haven’t been able to prove one class is categorically distinct from the others. Proving those types of categorical distinctions is among the hardest and most important open problems in the field. That’s why the new result I wrote about last month in Quanta was considered such a big deal: In a paper published at the end of May, two computer scientists proved (with a caveat) that the two complexity classes that represent quantum and classical computers really are different.
The differences between complexity classes can be subtle or stark, and keeping the classes straight is a challenge. For that reason, Quanta has put together this primer on seven of the most fundamental complexity classes. May you never confuse BPP and BQP again.
P
Stands for: Polynomial time
Short version: All the problems that are easy for a classical (meaning nonquantum) computer to solve.
Precise version: Algorithms in P must stop and give the right answer in at most n^c time, where n is the length of the input and c is some constant.
• Is a number prime?
• What’s the shortest path between two points?
What researchers want to know: Is P the same thing as NP? If so, it would upend computer science and render most cryptography ineffective overnight. (Almost no one thinks this is the case.)
NP
Stands for: Nondeterministic Polynomial time
Short version: All problems that can be quickly verified by a classical computer once a solution is given.
Precise version: A problem is in NP if, given a “yes” answer, there is a short proof that establishes the answer is correct. If the input is a string, X, and you need to decide if the answer is “yes,” then a short proof would be another string, Y, that can be used to verify in polynomial time that the answer is indeed “yes.” (Y is sometimes referred to as a “short witness” — all problems in NP have “short witnesses” that allow them to be verified quickly.)
• The clique problem. Imagine a graph with edges and nodes — for example, a graph where nodes are individuals on Facebook and two nodes are connected by an edge if they’re “friends.” A clique is a subset of this graph where all the people are friends with all the others. One might ask of such a graph: Is there a clique of 20 people? 50 people? 100? Finding such a clique is an “NP-complete” problem, meaning that it has the highest complexity of any problem in NP. But if given a potential answer — a subset of 50 nodes that may or may not form a clique — it’s easy to check.
• The traveling salesman problem. Given a list of cities with distances between each pair of cities, is there a way to travel through all the cities in less than a certain number of miles? For example, can a traveling salesman pass through every U.S. state capital in less than 11,000 miles?
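The "easy to check" half of the clique problem takes only a few lines. In this sketch (the graph data is invented), verifying a proposed clique requires one lookup per pair of nodes, which is polynomial in the size of the subset, even though finding a large clique in the first place is believed to require exponential search:

```python
from itertools import combinations

def is_clique(graph: dict, nodes: list) -> bool:
    """Verify a short witness: check every pair in the proposed subset."""
    return all(v in graph[u] for u, v in combinations(nodes, 2))

# A tiny friendship graph as adjacency sets (illustrative data)
g = {1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2}, 4: {1}}
print(is_clique(g, [1, 2, 3]))  # True: all three pairs are friends
print(is_clique(g, [1, 2, 4]))  # False: 2 and 4 are not friends
```

The subset of nodes plays the role of the "short witness" from the definition above: given it, verification is fast; without it, the only known general approach is to search among exponentially many subsets.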
What researchers want to know: Does P = NP? Computer scientists are nowhere near a solution to this problem.
PH
Stands for: Polynomial Hierarchy
Short version: PH is a generalization of NP — it contains all the problems you get if you start with a problem in NP and add additional layers of complexity.
Precise version: PH contains problems with some number of alternating “quantifiers” that make the problems more complex. Here’s an example of a problem with alternating quantifiers: Given X, does there exist Y such that for every Z there exists W such that R happens? The more quantifiers a problem contains, the more complex it is and the higher up it is in the polynomial hierarchy.
• Determine if there exists a clique of size 50 but no clique of size 51.
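The quantifier structure can be made concrete with a brute-force sketch (exponential time, shown purely to illustrate the alternation; the graph is invented). The question "does there exist a clique of size k, while for every subset of size k+1 some pair is unconnected?" stacks an existential check on top of a universal one:

```python
from itertools import combinations

def has_clique(graph: dict, k: int) -> bool:
    """Does there EXIST a k-subset in which ALL pairs are connected?"""
    return any(all(v in graph[u] for u, v in combinations(s, 2))
               for s in combinations(graph, k))

g = {1: {2}, 2: {1, 3}, 3: {2}}
# "There exists a clique of size 2, and there is no clique of size 3"
print(has_clique(g, 2) and not has_clique(g, 3))  # True
```

Each extra layer of "there exists ... such that for all ..." pushes a problem higher in the hierarchy, and the naive search above grows correspondingly worse.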
What researchers want to know: Computer scientists have not been able to prove that PH is different from P. This problem is equivalent to the P versus NP problem because if (unexpectedly) P = NP, then all of PH collapses to P (that is, P = PH).
PSPACE
Stands for: Polynomial Space
Short version: PSPACE contains all the problems that can be solved with a reasonable amount of memory.
Precise version: In PSPACE you don’t care about time; you care only about the amount of memory required to run an algorithm. Computer scientists have proven that PSPACE contains PH, which contains NP, which contains P.
• Every problem in P, NP and PH is in PSPACE.
What researchers want to know: Is PSPACE different from P?
BQP
Stands for: Bounded-error Quantum Polynomial time
Short version: All problems that are easy for a quantum computer to solve.
Precise version: All problems that can be solved in polynomial time by a quantum computer.
• Identify the prime factors of an integer.
What researchers want to know: Computer scientists have proven that BQP is contained in PSPACE and that BQP contains P. They don’t know whether BQP is in NP, but they believe the two classes are incomparable: There are problems that are in NP and not BQP and vice versa.
EXP
Stands for: Exponential Time
Short version: All the problems that can be solved in an exponential amount of time by a classical computer.
Precise version: EXP contains all the previous classes — P, NP, PH, PSPACE and BQP. Researchers have proven that it’s different from P — they have found problems in EXP that are not in P.
• Generalizations of games like chess and checkers are in EXP. If a chess board can be any size, it becomes a problem in EXP to determine which player has the advantage in a given board position.
What researchers want to know: Computer scientists would like to be able to prove that PSPACE does not contain EXP. They believe there are problems that are in EXP that are not in PSPACE, because sometimes in EXP you need a lot of memory to solve the problems. Computer scientists know how to separate EXP and P.
BPP
Stands for: Bounded-error Probabilistic Polynomial time
Short version: Problems that can be quickly solved by algorithms that include an element of randomness.
Precise version: BPP is the same as P, except that the algorithm is allowed to include steps whose decision-making is randomized. Algorithms in BPP are required only to give the right answer with a probability close to 1.
• You’re handed two different formulas that each produce a polynomial that has many variables. Do the formulas compute the exact same polynomial? This is called the polynomial identity testing problem.
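A randomized test for this problem fits in a few lines. This is a sketch of the standard Schwartz-Zippel idea (the specific formulas below are invented examples): evaluate both formulas at random points, relying on the fact that two distinct low-degree polynomials can agree at only a tiny fraction of points.

```python
import random

PRIME = 2**61 - 1  # work modulo a large prime

def same_polynomial(p, q, num_vars: int, trials: int = 20) -> bool:
    """Schwartz-Zippel style test: distinct low-degree polynomials agree
    at a uniformly random point with probability at most degree/PRIME."""
    for _ in range(trials):
        xs = [random.randrange(PRIME) for _ in range(num_vars)]
        if p(*xs) % PRIME != q(*xs) % PRIME:
            return False   # a single disagreement proves they differ
    return True            # equal, except with vanishing probability

p = lambda x, y: (x + y) ** 2                 # one formula
q = lambda x, y: x * x + 2 * x * y + y * y    # the same polynomial, expanded
print(same_polynomial(p, q, 2))                      # True
print(same_polynomial(p, lambda x, y: x * y, 2))     # False
```

No efficient deterministic algorithm for polynomial identity testing is known, which is exactly why the problem is a poster child for BPP and for the BPP-versus-P question below.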
What researchers want to know: Computer scientists would like to know whether BPP = P. If that is true, it would mean that every randomized algorithm can be de-randomized. They believe this is the case — that there is an efficient deterministic algorithm for every problem for which there exists an efficient randomized algorithm — but they have not been able to prove it.
WHY AM I SO ALONE??? Oh wait, science can explain.
I’m late to posting this, but it’s important enough to be worth sharing anyway: Sandberg, Drexler, and Ord on Dissolving the Fermi Paradox.
(You may recognize these names: Toby Ord founded the effective altruism movement; Eric Drexler kindled interest in nanotechnology; Anders Sandberg helped pioneer the academic study of x-risk, and wrote what might be my favorite Unsong fanfic.)
The Fermi Paradox asks: given the immense number of stars in our galaxy, shouldn’t even a very tiny chance of aliens per star add up to thousands of nearby alien civilizations? But any alien civilization that arose millions of years ago would have had ample time to colonize the galaxy or do something equally dramatic that would leave no doubt as to its existence. So where are they?
This is sometimes formalized as the Drake Equation: think up all the parameters you would need for an alien civilization to contact us, multiply our best estimates for all of them together, and see how many alien civilizations we predict. So for example if we think there’s a 10% chance of each star having planets, a 10% chance of each planet being habitable to life, and a 10% chance of a life-habitable planet spawning an alien civilization by now, one in a thousand stars should have civilization. The actual Drake Equation is much more complicated, but most people agree that our best-guess values for most parameters suggest a vanishingly small chance of the empty galaxy we observe.
SDO’s contribution is to point out this is the wrong way to think about it. Sniffnoy’s comment on the subreddit helped me understand exactly what was going on, which I think is something like this:
Imagine we knew God flipped a coin. If it came up heads, He made 10 billion alien civilizations. If it came up tails, He made none besides Earth. Using our one-parameter Drake Equation, we determine that on average there should be 5 billion alien civilizations. Since we see zero, that’s quite the paradox, isn’t it?
No. In this case the mean is meaningless. It’s not at all surprising that we see zero alien civilizations, it just means the coin must have landed tails.
SDO say that relying on the Drake Equation is the same kind of error. We’re not interested in the average number of alien civilizations, we’re interested in the distribution of probability over number of alien civilizations. In particular, what is the probability of few-to-none?
SDO solve this with a “synthetic point estimate” model, where they choose random points from the distribution of possible estimates suggested by the research community, run the simulation a bunch of times, and see how often it returns different values.
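A stripped-down version of that idea looks something like the sketch below; the log-uniform ranges are made-up stand-ins for the spread of estimates in the literature, not SDO's actual distributions. The mean number of civilizations comes out huge, while the probability of an empty galaxy stays large:

```python
import random

random.seed(0)
N_STARS = 1e11  # stars in the galaxy, order of magnitude

def sample_n_civilizations():
    # Draw each uncertain parameter log-uniformly over an (illustrative)
    # range spanning the disagreement in the literature.
    p_planets = 10 ** random.uniform(-3, 0)    # fraction of stars with planets
    p_habitable = 10 ** random.uniform(-3, 0)  # fraction of those habitable
    p_life = 10 ** random.uniform(-30, 0)      # chance life, then civilization, arises
    return N_STARS * p_planets * p_habitable * p_life

runs = [sample_n_civilizations() for _ in range(100_000)]
mean_n = sum(runs) / len(runs)
p_alone = sum(n < 1 for n in runs) / len(runs)

print(f"mean N = {mean_n:.3g}, P(we're alone) = {p_alone:.2f}")
```

With these made-up ranges the mean lands in the tens of millions of civilizations, while the chance of fewer than one is around 70% — the coin-flip situation again, just with more parameters.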
According to their calculations, a standard Drake Equation multiplying our best estimates for every parameter together yields a probability of less than one in a million billion billion billion that we’re alone in our galaxy – making such an observation pretty paradoxical. SDO’s own method, taking parameter uncertainty into account, yields a probability of one in three.
They try their hand at doing a Drake calculation of their own, using their preferred values, and find:
[Figure from the paper: their resulting probability distribution over N, the average number of civilizations per galaxy.]
If this is right – and we can debate exact parameter values forever, but it’s hard to argue with their point-estimate-vs-distribution logic – then there’s no Fermi Paradox. It’s done, solved, kaput. Their title, “Dissolving The Fermi Paradox”, is a strong claim, but as far as I can tell they totally deserve it.
“Why didn’t anyone think of this before?” is the question I am only slightly embarrassed to ask given that I didn’t think of it before. I don’t know. Maybe people thought of it before, but didn’t publish it, or published it somewhere I don’t know about? Maybe people intuitively figured out what was up (one of the parameters of the Drake Equation must be much lower than our estimate) but stopped there and didn’t bother explaining the formal probability argument. Maybe nobody took the Drake Equation seriously anyway, and it’s just used as a starting point to discuss the probability of life forming?
But any explanation of the “oh, everyone knew this in some sense already” sort has to deal with the fact that a lot of very smart and well-credentialled experts treated the Fermi Paradox very seriously and came up with all sorts of weird explanations. There’s no need for sci-fi theories any more (though you should still read the Dark Forest trilogy). It’s just that there aren’t very many aliens. I think my past speculations on this, though very incomplete and much inferior to the recent paper, come out pretty well here.
(some more discussion here on Less Wrong)
One other highlight hidden in the supplement: in the midst of a long discussion on the various ways intelligent life can fail to form, starting on page 6 the authors speculate on “alternative genetic systems”. If a planet gets life with a slightly different way of encoding genes than our own, it might be too unstable to allow complex life, or too stable to allow a reasonable rate of mutation for natural selection to act on. It may be that abiogenesis can only create very weak genetic codes, and life needs to go through several “genetic-genetic transitions” before it can reach anything capable of complex evolution. If this is path-dependent – i.e. there are branches that are local improvements but close off access to other better genetic systems – this could permanently arrest the development of life, or freeze it at an evolutionary rate so low that the history of the universe so far is too short a time to see complex organisms.
I don’t claim to understand all of this, but the parts I do understand are fascinating and could easily be their own paper.
Artificial intelligence owes a lot of its smarts to Judea Pearl. In the 1980s he led efforts that allowed machines to reason probabilistically. Now he’s one of the field’s sharpest critics. In his latest book, “The Book of Why: The New Science of Cause and Effect,” he argues that artificial intelligence has been handicapped by an incomplete understanding of what intelligence really is.
Three decades ago, a prime challenge in artificial intelligence research was to program machines to associate a potential cause to a set of observable conditions. Pearl figured out how to do that using a scheme called Bayesian networks. Bayesian networks made it practical for machines to say that, given a patient who returned from Africa with a fever and body aches, the most likely explanation was malaria. In 2011 Pearl won the Turing Award, computer science’s highest honor, in large part for this work.
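The flavor of that kind of inference is just Bayes' rule applied across a network of variables; here is a single-variable sketch, with invented numbers rather than real medical estimates:

```python
# P(malaria | fever) via Bayes' rule -- the kind of inference a Bayesian
# network automates over many interacting variables at once.
# All probabilities below are invented for illustration.
p_malaria = 0.01             # prior: traveler returning from an endemic region
p_fever_given_malaria = 0.9
p_fever_given_no_malaria = 0.05

p_fever = (p_fever_given_malaria * p_malaria
           + p_fever_given_no_malaria * (1 - p_malaria))
posterior = p_fever_given_malaria * p_malaria / p_fever
print(f"P(malaria | fever) = {posterior:.3f}")
```

Observing the fever shifts a 1% prior to roughly a 15% posterior; a full Bayesian network chains many such updates efficiently instead of enumerating every joint possibility.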
But as Pearl sees it, the field of AI got mired in probabilistic associations. These days, headlines tout the latest breakthroughs in machine learning and neural networks. We read about computers that can master ancient games and drive cars. Pearl is underwhelmed. As he sees it, the state of the art in artificial intelligence today is merely a souped-up version of what machines could already do a generation ago: find hidden regularities in a large set of data. “All the impressive achievements of deep learning amount to just curve fitting,” he said recently.
In his new book, Pearl, now 81, elaborates a vision for how truly intelligent machines would think. The key, he argues, is to replace reasoning by association with causal reasoning. Instead of the mere ability to correlate fever and malaria, machines need the capacity to reason that malaria causes fever. Once this kind of causal framework is in place, it becomes possible for machines to ask counterfactual questions — to inquire how the causal relationships would change given some kind of intervention — which Pearl views as the cornerstone of scientific thought. Pearl also proposes a formal language in which to make this kind of thinking possible — a 21st-century version of the Bayesian framework that allowed machines to think probabilistically.
Pearl expects that causal reasoning could provide machines with human-level intelligence. They’d be able to communicate with humans more effectively and even, he explains, achieve status as moral entities with a capacity for free will — and for evil. Quanta Magazine sat down with Pearl at a recent conference in San Diego and later held a follow-up interview with him by phone. An edited and condensed version of those conversations follows.
Why is your new book called “The Book of Why”?
It means to be a summary of the work I’ve been doing the past 25 years about cause and effect, what it means in one’s life, its applications, and how we go about coming up with answers to questions that are inherently causal. Oddly, those questions have been abandoned by science. So I’m here to make up for the neglect of science.
That’s a dramatic thing to say, that science has abandoned cause and effect. Isn’t that exactly what all of science is about?
Of course, but you cannot see this noble aspiration in scientific equations. The language of algebra is symmetric: If X tells us about Y, then Y tells us about X. I’m talking about deterministic relationships. There’s no way to write in mathematics a simple fact — for example, that the upcoming storm causes the barometer to go down, and not the other way around.
Mathematics has not developed the asymmetric language required to capture our understanding that if X causes Y that does not mean that Y causes X. It sounds like a terrible thing to say against science, I know. If I were to say it to my mother, she’d slap me.
But science is more forgiving: Seeing that we lack a calculus for asymmetrical relations, science encourages us to create one. And this is where mathematics comes in. It turned out to be a great thrill for me to see that a simple calculus of causation solves problems that the greatest statisticians of our time deemed to be ill-defined or unsolvable. And all this with the ease and fun of finding a proof in high-school geometry.
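Pearl's storm-and-barometer asymmetry can be made concrete with a toy structural model: conditioning on a low barometer raises the probability of a storm, while forcing the barometer low (an intervention) leaves it unchanged. This is a minimal sketch of the idea, not Pearl's actual calculus:

```python
import random

random.seed(0)

def storm_rate_given_low_barometer(do_barometer=None, n=100_000):
    """Causal structure: storm -> barometer. Optionally force the barometer."""
    storms_when_low = []
    for _ in range(n):
        storm = random.random() < 0.3                   # storm approaching?
        if do_barometer is None:
            # Observation: the barometer tracks the storm 90% of the time.
            barometer_low = storm if random.random() < 0.9 else not storm
        else:
            # Intervention: setting the barometer severs the incoming arrow.
            barometer_low = do_barometer
        if barometer_low:
            storms_when_low.append(storm)
    return sum(storms_when_low) / len(storms_when_low)

print(storm_rate_given_low_barometer())                    # ~0.79: seeing is informative
print(storm_rate_given_low_barometer(do_barometer=True))   # ~0.30: doing is not
```

The asymmetry shows up only because the simulation encodes which variable causes which; the joint probability distribution alone cannot distinguish the two directions.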
You made your name in AI a few decades ago by teaching machines how to reason probabilistically. Explain what was going on in AI at the time.
The problems that emerged in the early 1980s were of a predictive or diagnostic nature. A doctor looks at a bunch of symptoms from a patient and wants to come up with the probability that the patient has malaria or some other disease. We wanted automatic systems, expert systems, to be able to replace the professional — whether a doctor, or an explorer for minerals, or some other kind of paid expert. So at that point I came up with the idea of doing it probabilistically.
Unfortunately, standard probability calculations required exponential space and exponential time. I came up with a scheme called Bayesian networks that required polynomial time and was also quite transparent.
Yet in your new book you describe yourself as an apostate in the AI community today. In what sense?
In the sense that as soon as we developed tools that enabled machines to reason with uncertainty, I left the arena to pursue a more challenging task: reasoning with cause and effect. Many of my AI colleagues are still occupied with uncertainty. There are circles of research that continue to work on diagnosis without worrying about the causal aspects of the problem. All they want is to predict well and to diagnose well.
I can give you an example. All the machine-learning work that we see today is conducted in diagnostic mode — say, labeling objects as “cat” or “tiger.” They don’t care about intervention; they just want to recognize an object and to predict how it’s going to evolve in time.
I felt an apostate when I developed powerful tools for prediction and diagnosis knowing already that this is merely the tip of human intelligence. If we want machines to reason about interventions (“What if we ban cigarettes?”) and introspection (“What if I had finished high school?”), we must invoke causal models. Associations are not enough — and this is a mathematical fact, not opinion.
People are excited about the possibilities for AI. You’re not?
As much as I look into what’s being done with deep learning, I see they’re all stuck there on the level of associations. Curve fitting. That sounds like sacrilege, to say that all the impressive achievements of deep learning amount to just fitting a curve to data. From the point of view of the mathematical hierarchy, no matter how skillfully you manipulate the data and what you read into the data when you manipulate it, it’s still a curve-fitting exercise, albeit complex and nontrivial.
The way you talk about curve fitting, it sounds like you’re not very impressed with machine learning.
No, I’m very impressed, because we did not expect that so many problems could be solved by pure curve fitting. It turns out they can. But I’m asking about the future — what next? Can you have a robot scientist that would plan an experiment and find new answers to pending scientific questions? That’s the next step. We also want to conduct some communication with a machine that is meaningful, and meaningful means matching our intuition. If you deprive the robot of your intuition about cause and effect, you’re never going to communicate meaningfully. Robots could not say “I should have done better,” as you and I do. And we thus lose an important channel of communication.
What are the prospects for having machines that share our intuition about cause and effect?
We have to equip machines with a model of the environment. If a machine does not have a model of reality, you cannot expect the machine to behave intelligently in that reality. The first step, one that will take place in maybe 10 years, is that conceptual models of reality will be programmed by humans.
The next step will be that machines will postulate such models on their own and will verify and refine them based on empirical evidence. That is what happened to science; we started with a geocentric model, with circles and epicycles, and ended up with a heliocentric model with its ellipses.
Robots, too, will communicate with each other and will translate this hypothetical world, this wild world, of metaphorical models.
When you share these ideas with people working in AI today, how do they react?
AI is currently split. First, there are those who are intoxicated by the success of machine learning and deep learning and neural nets. They don’t understand what I’m talking about. They want to continue to fit curves. But when you talk to people who have done any work in AI outside statistical learning, they get it immediately. I have read several papers written in the past two months about the limitations of machine learning.
Are you suggesting there’s a trend developing away from machine learning?
Not a trend, but a serious soul-searching effort that involves asking: Where are we going? What’s the next step?
That was the last thing I wanted to ask you.
I’m glad you didn’t ask me about free will.
In that case, what do you think about free will?
We’re going to have robots with free will, absolutely. We have to understand how to program them and what we gain out of it. For some reason, evolution has found this sensation of free will to be computationally desirable.
In what way?
You have the sensation of free will; evolution has equipped us with this sensation. Evidently, it serves some computational function.
Will it be obvious when robots have free will?
I think the first evidence will be if robots start communicating with each other counterfactually, like “You should have done better.” If a team of robots playing soccer starts to communicate in this language, then we’ll know that they have a sensation of free will. “You should have passed me the ball — I was waiting for you and you didn’t!” “You should have” means you could have controlled whatever urges made you do what you did, and you didn’t. So the first sign will be communication; the next will be better soccer.
Now that you’ve brought up free will, I guess I should ask you about the capacity for evil, which we generally think of as being contingent upon an ability to make choices. What is evil?
It’s the belief that your greed or grievance supersedes all standard norms of society. For example, a person has something akin to a software module that says “You are hungry, therefore you have permission to act to satisfy your greed or grievance.” But you have other software modules that instruct you to follow the standard laws of society. One of them is called compassion. When you elevate your grievance above those universal norms of society, that’s evil.
So how will we know when AI is capable of committing evil?
When it is obvious for us that there are software components that the robot ignores, consistently ignores. When it appears that the robot follows the advice of some software components and not others, when the robot ignores the advice of other components that are maintaining norms of behavior that have been programmed into them or are expected to be there on the basis of past learning. And the robot stops following them.
Evo-bio and viruses, sitting in a tree
Andrew Read became a scientist so he could spend more time in nature, but he never imagined that would mean a commercial chicken farm. Read, a disease ecologist who directs the Pennsylvania State University Center for Infectious Disease Dynamics, and his research assistant Chris Cairns meandered their way through a hot, humid, pungent-smelling barn teeming with 30,000 young broiler chickens deep in the Pennsylvania countryside. Covered head to toe in white coveralls, the two men periodically stopped and crouched, collecting dust from the ground with gloved hands. Birds squawked and scuttered away. The men transferred the dust into small plastic tubes, which they capped and placed in plastic bags to bring back to the laboratory. “Funny where science leads you,” Read said.
Read and his colleagues are studying how the herpesvirus that causes Marek’s disease — a highly contagious, paralyzing and ultimately deadly ailment that costs the chicken industry more than $2 billion a year — might be evolving in response to its vaccine. Its latest vaccine, that is. Marek’s disease has been sickening chickens globally for over a century; birds catch it by inhaling dust laden with viral particles shed in other birds’ feathers. The first vaccine was introduced in 1970, when the disease was killing entire flocks. It worked well, but within a decade, the vaccine mysteriously began to fail; outbreaks of Marek’s began erupting in flocks of inoculated chickens. A second vaccine was licensed in 1983 in the hopes of solving the problem, yet it, too, gradually stopped working. Today, the poultry industry is on its third vaccine. It still works, but Read and others are concerned it might one day fail, too — and no fourth-line vaccine is waiting. Worse, in recent decades, the virus has become more deadly.
Read and others, including researchers at the U.S. Department of Agriculture, posit that the virus that causes Marek’s has been changing over time in ways that helped it evade its previous vaccines. The big question is whether the vaccines directly incited these changes or the evolution happened, coincidentally, for other reasons, but Read is pretty sure the vaccines have played a role. In a 2015 paper in PLOS Biology, Read and his colleagues vaccinated 100 chickens, leaving 100 others unvaccinated. They then infected all the birds with strains of Marek’s that varied in how virulent — as in how dangerous and infectious — they were. The team found that, over the course of their lives, the unvaccinated birds shed far more of the least virulent strains into the environment, whereas the vaccinated birds shed far more of the most virulent strains. The findings suggest that the Marek’s vaccine encourages more dangerous viruses to proliferate. This increased virulence might then give the viruses the means to overcome birds’ vaccine-primed immune responses and sicken vaccinated flocks.
Most people have heard of antibiotic resistance. Vaccine resistance, not so much. That’s because drug resistance is a huge global problem that annually kills nearly 25,000 people in the United States and in Europe, and more than twice that many in India. Microbes resistant to vaccines, on the other hand, aren’t a major menace. Perhaps they never will be: Vaccine programs around the globe have been and continue to be immensely successful at preventing infections and saving lives.
Recent research suggests, however, that some pathogen populations are adapting in ways that help them survive in a vaccinated world, and that these changes come about in a variety of ways. Just as the mammal population exploded after dinosaurs went extinct because a big niche opened up for them, some microbes have swept in to take the place of competitors eliminated by vaccines.
Immunization is also making once-rare or nonexistent genetic variants of pathogens more prevalent, presumably because vaccine-primed antibodies can’t as easily recognize and attack shape-shifters that look different from vaccine strains. And vaccines being developed against some of the world’s wilier pathogens — malaria, HIV, anthrax — are based on strategies that could, according to evolutionary models and lab experiments, encourage pathogens to become even more dangerous.
Evolutionary biologists aren’t surprised that this is happening. A vaccine is a novel selection pressure placed on a pathogen, and if the vaccine does not eradicate its target completely, then the remaining pathogens with the greatest fitness — those able to survive, somehow, in an immunized world — will become more common. “If you don’t have these pathogens evolving in response to vaccines,” said Paul Ewald, an evolutionary biologist at the University of Louisville, “then we really don’t understand natural selection.”
Yet don’t mistake these findings as evidence that vaccines are dangerous or that they are bound to fail — because undesirable outcomes can be thwarted by using our knowledge of natural selection, too. Evolution might be inevitable, but it can be coaxed in the right direction.
Vaccine science is brow-furrowingly complicated, but the underlying mechanism is simple. A vaccine exposes your body to either live but weakened or killed pathogens, or even just to certain bits of them. This exposure incites your immune system to create armies of immune cells, some of which secrete antibody proteins to recognize and fight off the pathogens if they ever invade again.
That said, many vaccines don’t provide lifelong immunity, for a variety of reasons. A new flu vaccine is developed every year because influenza viruses naturally mutate quickly. Vaccine-induced immunity can also wane over time. After being inoculated with the shot for typhoid, for instance, a person’s levels of protective antibodies drop over several years, which is why public health agencies recommend regular boosters for those living in or visiting regions where typhoid is endemic. Research suggests a similar drop in protection over time occurs with the mumps vaccine, too.
Vaccine failures caused by vaccine-induced evolution are different. These drops in vaccine effectiveness are incited by changes in pathogen populations that the vaccines themselves directly cause. Scientists have recently started studying the phenomenon in part because they finally can: Advances in genetic sequencing have made it easier to see how microbes change over time. And many such findings have reinforced just how quickly pathogens mutate and evolve in response to environmental cues.
Viruses and bacteria change quickly in part because they replicate like mad. Three days after a bird is bitten by a mosquito carrying West Nile virus, one milliliter of its blood contains 100 billion viral particles, roughly the number of stars in the Milky Way. And with each replication comes the opportunity for genetic change. When an RNA virus replicates, the copying process generates one new error, or mutation, per 10,000 nucleotides, a mutation rate as much as 100,000 times greater than that found in human DNA. Viruses and bacteria also recombine, or share genetic material, with similar strains, giving them another way to change their genomes rapidly. Just as people — with the exception of identical twins — all have distinctive genomes, pathogen populations tend to be composed of myriad genetic variants, some of which fare better than others during battles with vaccine-trained antibodies. The victors seed the pathogen population of the future.
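The arithmetic here is worth spelling out; the genome length below is a rough round number for an RNA virus, chosen to match the one-error-per-10,000-nucleotides rate:

```python
# Expected new mutations generated as a viral population replicates.
genome_length = 10_000        # nucleotides, rough size of many RNA virus genomes
error_rate = 1 / 10_000       # ~1 copying error per 10,000 nucleotides
population = 100e9            # viral particles in 1 mL of blood after 3 days

mutations_per_copy = genome_length * error_rate     # ~1 new mutation per genome copied
total_new_mutations = mutations_per_copy * population
print(f"{total_new_mutations:.0e} new mutations in one milliliter of blood")
```

At roughly one mutation per copy and a hundred billion copies, essentially every possible single-point variant of the genome is generated many times over in a single host.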
The bacteria that cause pertussis, better known as whooping cough, illustrate how this can happen. In 1992, recommendations from the U.S. Centers for Disease Control and Prevention (CDC) began promoting a new vaccine to prevent the infection, which is caused by bacteria called Bordetella pertussis. The old vaccine was made using whole killed bacteria, which incited an effective immune response but also caused rare side effects, such as seizures. The new version, known as the “acellular” vaccine, contained just two to five outer membrane proteins isolated from the pathogen.
The unwanted side effects disappeared but were replaced by new, unexpected problems. First, for unclear reasons, protection conferred by the acellular vaccine waned over time. Epidemics began to erupt around the world. In 2001, scientists in the Netherlands proposed an additional reason for the resurgence: Perhaps vaccination was inciting evolution, causing strains of the bacteria that lacked the targeted proteins, or had different versions of them, to survive preferentially.
Studies have since backed up this idea. In a 2014 paper published in Emerging Infectious Diseases, researchers in Australia, led by the medical microbiologist Ruiting Lan at the University of New South Wales, collected and sequenced B. pertussis samples from 320 patients between 2008 and 2012. The percentage of bacteria that did not express pertactin, a protein targeted by the acellular vaccine, leapt from 5 percent in 2008 to 78 percent in 2012, which suggests that selection pressure from the vaccine was enabling pertactin-free strains to become more common. In the U.S., nearly all circulating strains lack pertactin, according to a 2017 CDC paper. “I think pretty much everyone agrees pertussis strain variation is shaped by vaccination,” Lan said.
Hepatitis B, a virus that causes liver damage, tells a similar story. The current vaccine, which principally targets a portion of the virus known as the hepatitis B surface antigen, was introduced in the U.S. in 1989. A year later, in a paper published in the Lancet, researchers described odd results from a vaccine trial in Italy. They had detected circulating hepatitis B viruses in 44 vaccinated subjects, but in some of them, the virus was missing part of that targeted antigen. Then, in a series of studies conducted in Taiwan, researchers sequenced the viruses that infected children who had tested positive for hepatitis B. They reported that the prevalence of these viral “escape mutants,” as they called them, that lacked the surface antigen had increased from 7.8 percent in 1984 to 23.1 percent in 1999.
Some research suggests, however, that these mutant strains aren’t stable and that they may not pose much of a risk. Indeed, fewer and fewer people catch hepatitis B every year worldwide. As physicians at the Icahn School of Medicine at Mount Sinai in New York summarized in a 2016 paper, “the clinical significance of hepatitis B surface antigen escape mutations remains controversial.”
Scientists usually have to design their own experiments. But in 2000 or so, it dawned on Bill Hanage that society was designing one for him. Hanage, who had just completed his Ph.D. in pathology, had always been fascinated by bacteria and evolutionary biology. And something evolutionarily profound was about to happen to bacteria in America.
A new vaccine called Prevnar 7 was soon to be recommended for all U.S. children to prevent infections caused by Streptococcus pneumoniae, bacteria responsible for many cases of pneumonia, ear infections, meningitis and other illnesses among the elderly and young children. To date, scientists have discovered more than 90 distinct S. pneumoniae serotypes — groups that share distinctive immunological features on their cell surface — and Prevnar 7 targeted the seven serotypes that caused the brunt of serious infections. But Hanage, along with other researchers, wondered what was going to happen to the more than 80 others. “It struck me, with my almost complete lack of formal training in evolutionary biology, that this was an extraordinary evolutionary experiment,” he said.
Hanage teamed up with Marc Lipsitch, an epidemiologist and microbiologist who had recently left Emory University for Harvard, and together the scientists — now both at Harvard — have been watching the pneumococcal population adapt to this new selection pressure. They and others have reported that while Prevnar 7 almost completely eliminated infections with the seven targeted serotypes, the other, rarer serotypes quickly swept in to take their place, including a serotype called 19A, which began causing a large proportion of serious pneumococcal infections. In response, in 2010, the U.S. introduced a new vaccine, Prevnar 13, which targets 19A and five additional serotypes. Previously unseen serotypes have again flourished in response. A 2017 paper in Pediatrics compared the situation to a high-stakes game of whack-a-mole. In essence, vaccination has completely restructured the pathogen population, twice.
Overall, the incidence of invasive pneumococcal infections in the U.S. has dropped dramatically among children and adults as a result of Prevnar 13. It is saving many American lives, presumably because it targets the subset of serotypes most likely to cause infections. But data from England and Wales are not so rosy. Although infections in kids there have dropped, invasive pneumococcal infections have been steadily increasing in older adults and are much higher now than they were before Prevnar 7 was introduced. As for why this is happening, “I don’t think we know,” Hanage said. “But I do think that we might somewhat reasonably suggest that the serotypes that are now being carried by children are inadvertently better able to cause disease in adults, which is something we would not have known before, because they were comparatively rare.”
One can think about vaccination as a kind of sieve, argues Troy Day, a mathematical evolutionary biologist at Queen’s University in Ontario, Canada. This sieve prevents many pathogens from passing through and surviving, but if a few squeeze by, those in that nonrandom sample will preferentially survive, replicate and ultimately shift the composition of the pathogen population. The ones squeezing through might be escape mutants with genetic differences that allow them to shrug off or hide from vaccine-primed antibodies, or they may simply be serotypes that weren’t targeted by the vaccine in the first place, like lucky criminals whose drug dens were overlooked during a night of citywide police raids. Either way, the vaccine quietly alters the genetic profile of the pathogen population.
Tipping the Scales
Just as pathogens have different ways of infecting and affecting us, the vaccines that scientists develop employ different immunological strategies. Most of the vaccines we get in childhood prevent pathogens from replicating inside us and thereby also prevent us from transmitting the infections to others. But scientists have so far been unable to make these kinds of sterilizing vaccines for complicated pathogens like HIV, anthrax and malaria. To conquer these diseases, some researchers have been developing immunizations that prevent disease without actually preventing infections — what are called “leaky” vaccines. And these new vaccines may incite a different, and potentially scarier, kind of microbial evolution.
Virulence, as a trait, is directly related to replication: The more pathogens that a person’s body houses, the sicker that person generally becomes. A high replication rate has evolutionary advantages — more microbes in the body lead to more microbes in snot or blood or stool, which gives the microbes more chances to infect others — but it also has costs, as it can kill hosts before they have the chance to pass on their infection. The problem with leaky vaccines, Read says, is that they enable pathogens to replicate unchecked while also protecting hosts from illness and death, thereby removing the costs associated with increased virulence. Over time, then, in a world of leaky vaccinations, a pathogen might evolve to become deadlier to unvaccinated hosts because it can reap the benefits of virulence without the costs — much as Marek’s disease has slowly become more lethal to unvaccinated chickens. And this heightened virulence may eventually allow the pathogen to sicken even vaccinated hosts, making the vaccine itself start to fail.
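The trade-off logic above can be caricatured in a few lines: transmission rises with virulence, but so does the chance of killing an unprotected host before it transmits, and a leaky vaccine deletes that cost. Every number and functional form here is invented for illustration:

```python
def fitness(virulence, host_vaccinated):
    """Toy trade-off: transmission grows with virulence, but an unprotected
    host may die before transmitting. A leaky vaccine removes that cost
    without blocking replication. All parameters are illustrative."""
    transmission = virulence                 # more replication -> more spread
    p_host_dies_early = 0.0 if host_vaccinated else virulence ** 2
    return transmission * (1 - p_host_dies_early)

candidates = [0.1 * i for i in range(1, 10)]   # virulence levels 0.1 .. 0.9
best_unvaccinated = max(candidates, key=lambda v: fitness(v, False))
best_vaccinated = max(candidates, key=lambda v: fitness(v, True))
print(best_unvaccinated, best_vaccinated)
```

In unvaccinated hosts, selection in this toy model favors intermediate virulence (around 0.6), because very aggressive strains kill their hosts too fast; in vaccinated hosts the death penalty vanishes and the most virulent strain wins outright.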
In addition to Marek’s disease, Read has been studying malaria, which is the target of several leaky vaccines currently in development. In a 2012 paper published in PLOS Biology, Read and Vicki Barclay, his postdoc at the time, inoculated mice with a component of several leaky malaria vaccines currently being tested in clinical trials. They then used these infected-but-not-sick mice to infect other vaccinated mice. After the parasites circulated through 21 rounds of vaccinated mice, Barclay and Read studied them and compared them to malaria parasites that had circulated through 21 rounds of unvaccinated mice. The strains from the vaccinated mice, they found, had grown far more virulent, in that they replicated faster and killed more red blood cells. At the end of 21 rounds of infection, these more quickly growing, deadly parasites were the only ones left.
If this all sounds terribly scary, keep a few things in mind. First, many pathogens, including measles, do not seem to be evolving as a population in response to their vaccines. Second, experimental data from a lab, such as the malaria study described above, don’t necessarily predict what will happen in the much more complex landscape of the real world. And third, researchers concerned with vaccine-driven evolution stress that the phenomenon is not in any way an argument against vaccination or its value; it’s just a consequence that needs to be considered, and one that can potentially be avoided. By thinking through how a pathogen population might respond to a vaccine, scientists can potentially make tweaks before it happens. They might even be able to design vaccines that encourage pathogens to become less dangerous over time.
In March 2017, Read and his Penn State colleague David Kennedy published a paper in the Proceedings of the Royal Society B in which they outlined several strategies that vaccine developers could use to ensure that future vaccines don’t get punked by evolutionary forces. One overarching recommendation is that vaccines should induce immune responses against multiple targets. A number of successful, seemingly evolution-proof vaccines already work this way: After people get inoculated with a tetanus shot, for example, their blood contains 100 types of unique antibodies, all of which fight the bacteria in different ways. In such a situation, it becomes much harder for a pathogen to accumulate all the changes needed to survive. It also helps if vaccines target all the known subpopulations of a particular pathogen, not just the most common or dangerous ones. Richard Malley and other researchers at Boston Children’s Hospital are, for instance, trying to develop a universal pneumococcal vaccine that is not serotype-specific.
Vaccines should also bar pathogens from replicating and transmitting inside inoculated hosts. One of the reasons that vaccine resistance is less of a problem than antibiotic resistance, Read and Kennedy posit, is that antibiotics tend to be given after an infection has already taken hold — when the pathogen population inside the host is already large and genetically diverse and might include mutants that can resist the drug’s effects. Most vaccines, on the other hand, are administered before infection and limit replication, which minimizes evolutionary opportunities.
But the most crucial need right now is for vaccine scientists to recognize the relevance of evolutionary biology to their field. Last month, when more than 1,000 vaccine scientists gathered in Washington, D.C., at the World Vaccine Congress, the issue of vaccine-induced evolution was not the focus of any scientific sessions. Part of the problem, Read says, is that researchers are wary: They hesitate to call attention to potential evolutionary effects because they worry that doing so might fuel public fear and distrust of vaccines, even though the goal is, of course, to ensure long-term vaccine success. Still, he and Kennedy feel researchers are starting to recognize the need to include evolution in the conversation. “I think the scientific community is becoming increasingly aware that vaccine resistance is a real risk,” Kennedy said.
“I think so too,” Read agreed, “but there is a long way to go.”
Correction: On May 10, the caption for the second photograph was updated: It originally misnamed Chris Cairns as “Chris Gaines.”
In the United States, giving prisoners the right to vote is not an especially popular idea. Less than one-third of Americans support it, and only Vermont and Maine allow voting by the incarcerated. The Democratic Party does not advocate it; in their 2016 platform, which was widely considered very progressive, the party promised to “restore voting rights for those who have served their sentences” but did not comment on those who are still serving their sentences.
But why should letting prisoners vote be such a fringe position? In a new report from the People’s Policy Project, entitled “Full Human Beings,” Emmett Sanders argues that universal enfranchisement should mean exactly that: If prisoners are still full human beings, then they cannot rightfully be excluded from the democratic process. No matter how much people might instinctually feel that someone has “sacrificed their rights through their conduct,” we shouldn’t strip the basic elements of citizenship from someone merely because they have committed a wrong.
Many other countries recognize this. The Universal Declaration of Human Rights states that “everyone has a right to take part in the government of his [sic] country” and the International Covenant on Civil and Political Rights provides for “universal and equal suffrage.” In many European countries prisoners can vote, and the European Court of Human Rights repeatedly condemned the United Kingdom’s ban on prisoner voting. (The U.K. government ignored the ruling for years before settling on a weak compromise.)
The argument against prisoner voting is simple (one might even say simple-minded). Sanders quotes an opponent: “If you won’t follow the law yourself, then you can’t make the law for everyone else, which is what you do – directly or indirectly – when you vote.” But as Sanders points out, in practice this means that stealing a TV remote means you’re not allowed to express your preference on war or tax policy or abortion. The argument rests on collapsing “the law” into a single entity: Because I violated a law, I am no longer allowed to affect any law, even if the law I violated was relatively trivial and the law I’d like to oppose is, say, a repeal of the First Amendment. And, tedious as it may be to pull the Martin Luther King card, if we’re going to argue that “not following the law” is the criterion for disenfranchisement, well, civil disobedience would mean getting stripped of the basic rights of citizenship.
Sanders points out that the impact of America’s prisoner disenfranchisement policies is immense. This is, in part, because of our obscene number of prisoners:
That means that election outcomes could very easily be swayed by the enfranchisement of prisoners. (Though obviously this is a good reason for the Republican Party to staunchly defend existing policy.) But it also affects the distribution of representation in other ways: Because the census counts prisoners as being resident in the location of the prison, rather than at their home address, population numbers are artificially inflated in rural areas and deflated in urban ones. In extreme enough quantities, this can affect how state voting districts are drawn, even though the populations being counted can’t actually vote.
The racial dimensions of this become extremely troubling. First, we know that because the 13th Amendment contains a loophole allowing people to be enslaved if they have been convicted of a crime, white supremacists gradually reasserted their power by aggressively policing black people in order to deprive them of their rights. There is a disturbing historical precedent for stripping rights away from those convicted of crimes, and Sanders cites an old Mississippi court ruling that even though the state was “Restrained by the federal constitution from discriminating against the negro race” it could still “discriminate against its characteristics, and the offenses to which its criminal members are prone.” You can’t take the vote away from black people, but you can take the vote away from criminals, and if the prisons just so happen to end up filled with black people, it is a purely innocuous coincidence.
The racial impacts go beyond the specific deprivation of those currently incarcerated. The political representation of black communities as a whole is reduced by the disenfranchisement of a large swath of their populations, and people who have never been given the chance to cultivate the habits of voting and civic participation are less likely to pass those onto their children. Everyone should be troubled by the specifically racial effects of this policy, given the historical context and the extreme numbers of people the U.S. has incarcerated and thereby deprived of a democratic voice.
It is disappointing that the Democratic Party has not stood up for universal suffrage. Obviously felons should have their rights restored upon completing their sentence. (One opponent says that since we restrict felons from having guns we can restrict them from having votes, but a gun is generally far more obviously dangerous than a vote, unless the vote happens to be for Donald Trump.) But the broader point is that there shouldn’t be moral character tests for voting, period. We live in a country where people do things wrong. When they do things wrong, they are punished. But they do not thereby become “non-people” who lose all of their basic rights and obligations. (Voting can be thought of as an obligation as well as a right, which makes it even stranger that prisoners are kept from doing it. Do people vote for pleasure?) The left should take a very simple position on suffrage: Universal means universal.
physics is cool
Miguel Zumalacárregui knows what it feels like when theories die. In September 2017, he was at the Institute for Theoretical Physics in Saclay, near Paris, to speak at a meeting about dark energy and modified gravity. The official news had not yet broken about an epochal astronomical measurement — the detection, by gravitational wave detectors as well as many other telescopes, of a collision between two neutron stars — but a controversial tweet had lit a firestorm of rumor in the astronomical community, and excited researchers were discussing the discovery in hushed tones.
Zumalacárregui, a theoretical physicist at the Berkeley Center for Cosmological Physics, had been studying how the discovery of a neutron-star collision would affect so-called “alternative” theories of gravity. These theories attempt to overcome what many researchers consider to be two enormous problems with our understanding of the universe. Observations going back decades have shown that the universe appears to be filled with unseen particles — dark matter — as well as an anti-gravitational force called dark energy. Alternative theories of gravity attempt to eliminate the need for these phantasms by modifying the force of gravity in such a way that it properly describes all known observations — no dark stuff required.
At the meeting, Zumalacárregui joked to his audience about the perils of combining science and Twitter, and then explained what the consequences would be if the rumors were true. Many researchers knew that the merger would be a big deal, but a lot of them simply “hadn’t understood their theories were on the brink of demise,” he later wrote in an email. In Saclay, he read them the last rites. “That conference was like a funeral where we were breaking the news to some attendees.”
The neutron-star collision was just the beginning. New data in the months since that discovery have made life increasingly difficult for the proponents of many of the modified-gravity theories that remain. Astronomers have analyzed extreme astronomical systems that contain spinning neutron stars, or pulsars, to look for discrepancies between their motion and the predictions of general relativity — discrepancies that some theories of alternative gravity anticipate. These pulsar systems let astronomers probe gravity on a new scale and with new precision. And with each new observation, these alternative theories of gravity are having an increasingly hard time solving the problems they were invented for. Researchers “have to sweat some more trying to get new physics,” said Anne Archibald, an astrophysicist at the University of Amsterdam.
Searching for Vulcan
Confounding observations have a way of leading astronomers to desperate explanations. On the afternoon of March 26, 1859, Edmond Lescarbault, a young doctor and amateur astronomer in Orgères-en-Beauce, a small village south of Paris, had a break between patients. He rushed to a tiny homemade observatory on the roof of his stone barn. With the help of his telescope, he spotted an unknown round object moving across the face of the sun.
He quickly sent news of this discovery to Urbain Le Verrier, the world’s leading astronomer at the time. Le Verrier had been trying to account for an oddity in the movement of the planet Mercury. All other planets orbit the sun in perfect accord with Isaac Newton’s laws of motion and gravitation, but Mercury’s point of closest approach to the sun appeared to advance a tiny amount with each orbit, a phenomenon known as perihelion precession. Le Verrier was certain that there had to be an invisible “dark” planet tugging on Mercury. Lescarbault’s observation of a dark spot transiting the sun appeared to show that the planet, which Le Verrier named Vulcan, was real.
It was not. Lescarbault’s sightings were never confirmed, and the perihelion precession of Mercury remained a puzzle for nearly six more decades. Then Einstein developed his theory of general relativity, which straightforwardly predicted that Mercury should behave the way it does.
In Le Verrier’s impulse to explain puzzling observations by introducing a heretofore hidden object, some modern-day researchers see parallels to the story of dark matter and dark energy. For decades, astronomers have noticed that the behavior of galaxies and galaxy clusters doesn’t seem to fit the predictions of general relativity. Dark matter is one way to explain that behavior. Likewise, the accelerating expansion of the universe can be thought of as being powered by a dark energy.
All attempts to directly detect dark matter and dark energy have failed, however. That fact “kind of leaves a bad taste in some people’s mouths, almost like the fictional planet Vulcan,” said Leo Stein, a theoretical physicist at the California Institute of Technology. “Maybe we’re going about it all wrong?”
For any alternative theory of gravity to work, it has to not only do away with dark matter and dark energy, but also reproduce the predictions of general relativity in all the standard contexts. “The business of alternative gravity theories is a messy one,” Archibald said. Some would-be replacements for general relativity, like string theory and loop quantum gravity, don’t offer testable predictions. Others “make predictions that are spectacularly wrong, so the theorists have to devise some kind of a screening mechanism to hide the wrong prediction on scales we can actually test,” she said.
The best-known alternative gravity theories are known as modified Newtonian dynamics, commonly abbreviated to MOND. MOND-type theories attempt to do away with dark matter by tweaking our definition of gravity. Astronomers have long observed that the gravitational force due to ordinary matter doesn’t appear to be sufficient to keep rapidly moving stars inside their galaxies. The gravitational pull of dark matter is assumed to make up the difference. But according to MOND, there are simply two kinds of gravity. In regions where the force of gravity is strong, bodies obey Newton’s law of gravity, which states that the gravitational force between two objects decreases in proportion to the square of the distance that separates them. But in environments of extremely weak gravity — like the outer parts of a galaxy — MOND suggests that another type of gravity is in play. This gravity decreases more slowly with distance, which means that it doesn’t weaken as much. “The idea is to make gravity stronger when it should be weaker, like at the outskirts of a galaxy,” Zumalacárregui said.
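The difference between inverse-square gravity and MOND's slower falloff can be made concrete with a quick calculation. In the deep-MOND regime, the effective acceleration is the geometric mean of the Newtonian acceleration and a new constant a0, which makes the predicted orbital speed independent of radius, i.e., a flat rotation curve. The galaxy mass and radii below are rough illustrative values chosen for this sketch, not figures from the article:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 1.0e41      # kg, rough stellar mass of a large galaxy (illustrative)
a0 = 1.2e-10    # m/s^2, the MOND acceleration scale
kpc = 3.086e19  # meters per kiloparsec

for r_kpc in (10, 20, 40):
    r = r_kpc * kpc
    a_newton = G * M / r**2            # inverse-square gravity
    v_newton = math.sqrt(a_newton * r)  # circular speed, falls off as 1/sqrt(r)

    # Deep-MOND regime: effective acceleration is the geometric mean
    # of the Newtonian acceleration and a0, so v no longer depends on r.
    a_mond = math.sqrt(a_newton * a0)
    v_mond = math.sqrt(a_mond * r)      # equals (G*M*a0)**0.25, flat

    print(r_kpc, round(v_newton / 1000), round(v_mond / 1000))  # radius, km/s
```

With these assumed numbers, the Newtonian speed halves between 10 and 40 kiloparsecs while the MOND speed stays fixed near (G·M·a0)^(1/4), which is the sense in which MOND gravity "doesn't weaken as much" in a galaxy's outskirts.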
Then there is TeVeS (tensor-vector-scalar), MOND’s relativistic cousin. While MOND is a modification of Newtonian gravity, TeVeS is an attempt to take the general idea of MOND and make it into a full mathematical theory that can be applied to the universe as a whole — not just to relatively small objects like solar systems and galaxies. It also explains the rotation curves of galaxies by making gravity stronger on their outskirts. But TeVeS does so by augmenting gravity with “scalar” and “vector” fields that “essentially amplify gravity,” said Fabian Schmidt, a cosmologist at the Max Planck Institute for Astrophysics in Garching, Germany. A scalar field is like the temperature throughout the atmosphere: At every point it has a numerical value but no direction. A vector field, by contrast, is like the wind: It has both a value (the wind speed) and a direction.
There are also so-called Galileon theories — part of a class of theories called Horndeski and beyond-Horndeski — which attempt to get rid of dark energy. These modifications of general relativity also introduce a scalar field. There are many of these theories (Brans-Dicke theory, dilaton theories, chameleon theories and quintessence are just some of them), and their predictions vary wildly among models. But they all change the expansion of the universe and tweak the force of gravity. Horndeski theory was first put forward by Gregory Horndeski in 1974, but the wider physics community took note of it only around 2010. By then, Zumalacárregui said, “Gregory Horndeski quit science and [is now] a painter in New Mexico.”
There are also stand-alone theories, like that of physicist Erik Verlinde. According to his theory, the laws of gravity arise naturally from the laws of thermodynamics just like “the way waves emerge from the molecules of water in the ocean,” Zumalacárregui said. Verlinde wrote in an email that his ideas are not an “alternative theory” of gravity, but “the next theory of gravity that contains and transcends Einstein’s general relativity.” But he is still developing his ideas. “My impression is that the theory is still not sufficiently worked out to permit the kind of precision tests we carry out,” Archibald said. It’s built on “fancy words,” Zumalacárregui said, “but no mathematical framework to compute predictions and do solid tests.”
The predictions made by other theories differ in some way from those of general relativity. Yet these differences can be subtle, which makes them incredibly difficult to find.
Consider the neutron-star merger. At the same time that the Laser Interferometer Gravitational-Wave Observatory (LIGO) spotted the gravitational waves emanating from the event, the space-based Fermi satellite spotted a gamma ray burst from the same location. The two signals had traveled across the universe for 130 million years before arriving at Earth just 1.7 seconds apart.
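Those two numbers translate into a startlingly tight constraint. A back-of-the-envelope calculation (treating the quoted travel time as exact, purely for illustration) bounds the fractional difference between the speed of gravitational waves and the speed of light at a few parts in ten quadrillion:

```python
SECONDS_PER_YEAR = 3.156e7               # approximate length of a year, in seconds

travel_time = 130e6 * SECONDS_PER_YEAR   # ~130 million years of travel, in seconds
arrival_gap = 1.7                        # seconds between the GW and gamma-ray signals

# Upper bound on the fractional speed difference |v_gw - c| / c:
# the gap could have accumulated over the entire journey.
frac_diff = arrival_gap / travel_time
print(f"{frac_diff:.1e}")  # ~4.1e-16
```

A theory that predicts gravitational waves traveling even slightly slower than light has essentially no room to hide against a bound this tight.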
These nearly simultaneous observations “brutally and pitilessly murdered” TeVeS theories, said Paulo Freire, an astrophysicist at the Max Planck Institute for Radio Astronomy in Bonn, Germany. “Gravity and gravitational waves propagate at the speed of light, with extremely high precision — which is not at all what was predicted by those theories.”
The same fate overtook some Galileon theories that add an extra scalar field to explain the universe’s accelerated expansion. These also predict that gravitational waves propagate more slowly than light. The neutron-star merger killed those off too, Schmidt said.
Further limits come from new pulsar systems. In 2013, Archibald and her colleagues found an unusual triple system: a pulsar and a white dwarf that orbit one another, with a second white dwarf orbiting the pair. These three objects exist in a space smaller than Earth’s orbit around the sun. The tight setting, Archibald said, offers ideal conditions for testing a crucial aspect of general relativity called the strong equivalence principle, which states that very dense strong-gravity objects such as neutron stars or black holes “fall” in the same way when placed in a gravitational field. (On Earth, the more familiar weak equivalence principle states that, if we ignore air resistance, a feather and a brick will fall at the same rate.)
The triple system makes it possible to check whether the pulsar and the inner white dwarf fall exactly the same way in the gravity of the outer white dwarf. Alternative-gravity theories assume that the scalar field generated in the pulsar should bend space-time in a much more extreme way than the white dwarf does. The two wouldn’t fall in a similar manner, leading to a violation of the strong equivalence principle and, with it, general relativity.
Over the past five years, Archibald and her team have recorded 27,000 measurements of the pulsar’s position as it orbits the other two stars. While the project is still a work in progress, it looks as though the results will be in total agreement with Einstein, Archibald said. “We can say that the degree to which the pulsar behaves abnormally is at most a few parts in a million. For an object with such strong gravity to still follow Einstein’s predictions so well, if there is one of these scalar fields, it has to have a really tiny effect.”
The test, which should be published soon, will put the best constraints yet on a whole group of alternative gravity theories, she added. If a theory only works with some additional scalar field, then the field should change the behavior of the pulsar. “We have such sensitive tests of general relativity that they need to somehow hide the theory’s new behavior in the solar system and in pulsar systems like ours,” Archibald said.
The data from another pulsar system dubbed the double pulsar, meanwhile, was originally supposed to eliminate the TeVeS theories. Detected in 2003, the double pulsar was until recently the only binary neutron-star system where both neutron stars were pulsars. Freire and his colleagues have already confirmed that the double pulsar’s behavior is perfectly in line with general relativity. Right before LIGO’s October announcement of a neutron-star merger, the researchers were going to publish a paper that would kill off TeVeS. But LIGO did the job for them, Freire said. “We need not go through that anymore.”
A few theories have survived the LIGO blow — and will probably survive the upcoming pulsar data, Zumalacárregui said. There are some Horndeski and beyond-Horndeski theories that do not change the speed of gravitational waves. Then there are so-called massive gravity theories. Ordinarily, physicists assume that the particle associated with the force of gravity — the graviton — has no mass. In these theories, the graviton has a very small but nonzero mass. The neutron-star merger puts tough limits on these theories, Zumalacárregui said, since a massive graviton would travel more slowly than light. But in some theories the mass is assumed to be extremely small, at least 20 orders of magnitude lower than the neutrino’s, which means that the graviton would still move at nearly the speed of light.
There are a few other less well-known survivors, some of which are important to keep exploring, Archibald said, as long as dark matter and dark energy remain elusive. “Dark energy might be our only observational clue pointing to a new and better theory of gravity — or it might be a mysterious fluid with strange properties, and nothing to do with gravity at all,” she said.
Still, killing off theories is simply how science is supposed to work, argue researchers who have been exploring alternative gravity theories. “This is what we do all the time, put forward a working hypothesis and test it,” said Enrico Barausse of the Astrophysics Institute of Paris, who has worked on MOND-like theories. “99.9 percent of the time you rule out the hypothesis; the remaining 0.1 percent of the time you win the Nobel Prize.”
Zumalacárregui, who has also worked on these theories, was “sad at first” when he realized that the neutron star merger detection had proven Galileon theories wrong, but ultimately “very relieved it happened sooner rather than later,” he said. LIGO had been just about to close down for 18 months to upgrade the detector. “If the event had been a bit later, I would still be working on a wrong theory.”
So what’s next for general relativity and modified-gravity theories? “That question keeps me up at night more than I’d like,” Zumalacárregui said. “The good news is that we have narrowed our scope by a lot, and we can try to understand the few survivors much better.”
Schmidt thinks it’s necessary to measure the laws of gravity on large scales as directly as possible, using ongoing and future large galaxy surveys. “For example, we can compare the effect of gravity on light bending as well as galaxy velocities, typically predicted to be different in modified-gravity theories,” he said. Researchers also hope that future telescopes such as the Square Kilometer Array will discover more pulsar systems and provide better accuracy in pulsar timing to further improve gravity tests. And a space-based replacement for LIGO called LISA will study gravitational waves with exquisite accuracy — if indeed it launches as planned in the mid-2030s. “If that does not see any deviations from general relativity, I don’t know what will,” said Barausse.
But many physicists agree that it will take a long time to get rid of most alternative gravity models. Theorists have dozens of alternative gravity theories that could potentially explain dark matter and dark energy, Freire said. Some of these theories can’t make testable predictions, Archibald said, and many “have a parameter, a ‘knob’ you can turn to make them pass any test you like,” she said. But at some point, said Nicolas Yunes, a physicist at Montana State University, “this gets silly and Occam’s razor wins.”
Still, “fundamentally we know that general relativity is wrong,” Stein said. “At the very core there must be some breakdown” at the quantum level. “Maybe we won’t see it from astronomical observations … but we owe it to ourselves, as empirical scientists, to check whether or not our mathematical models are working at these scales.”
This article was reprinted on Wired.com.
How do notes work in Old Reader? I hope my brain doesn't die finding out. :(
After we publish an especially challenging quantum mechanics article, it's not uncommon to hear some of our readers complain that their heads hurt. Presumably, they mean that the article gave them a (metaphorical) headache. But it is actually possible that challenging your brain does a bit of physical damage to its nerve cells. Researchers are reporting that when the brain is active, its cells can show signs of DNA damage. The damage is normally repaired quickly, but the researchers hypothesize that a failure to repair it quickly enough may underlie some neurological diseases.
This research clearly started out as an attempt to understand Alzheimer's disease. The authors were working with mice that were genetically modified to mimic some of the mutations associated with early-onset forms of the disease in humans. As part of their testing, the team (based at UCSF) looked for signs of DNA damage in the brains of these animals. They generally found that the indications of damage went up when the brains of mice were active—specifically, after they were given a new environment to explore.
That might seem interesting on its own, but the surprise came when they looked at their control mice, which weren't at elevated risk of brain disorders. These mice also showed signs of DNA damage (although at slightly lower levels than the Alzheimer's-prone mice).