Shared posts

29 Jun 23:19

Coupling Through Emergent Conservation Laws (Part 1)

by John Baez

joint post with Jonathan Lorand, Blake Pollard, and Maru Sarazola

In the cell, chemical reactions are often ‘coupled’ so that reactions that release energy drive reactions that are biologically useful but involve an increase in energy. But how, exactly, does coupling work?

Much is known about this question, but the literature is also full of vague explanations and oversimplifications. Coupling cannot occur in equilibrium; it arises in open systems, where the concentrations of certain chemicals are held out of equilibrium due to flows in and out. One might thus suspect that the simplest mathematical treatment of this phenomenon would involve non-equilibrium steady states of open systems. However, Bazhin has shown that some crucial aspects of coupling arise in an even simpler framework:

• Nicolai Bazhin, The essence of ATP coupling, ISRN Biochemistry 2012 (2012), article 827604.

He considers ‘quasi-equilibrium’ states, where fast reactions have come into equilibrium and slow ones are neglected. He shows that coupling occurs already in this simple approximation.

In this series of blog articles we’ll do two things. First, we’ll review Bazhin’s work in a way that readers with no training in biology or chemistry should be able to follow. (But if you get stuck, ask questions!) Second, we’ll explain a fact that seems to have received insufficient attention: in many cases, coupling relies on emergent conservation laws.

Conservation laws are important throughout science. Besides those that are built into the fabric of physics, such as conservation of energy and momentum, there are also many ‘emergent’ conservation laws that hold approximately in certain circumstances. Often these arise when processes that change a given quantity happen very slowly. For example, the most common isotope of uranium decays into lead with a half-life of about 4.5 billion years—but for the purposes of chemical experiments in the laboratory, it is useful to treat the amount of uranium as a conserved quantity.

The emergent conservation laws involved in biochemical coupling are of a different nature. Instead of making the processes that violate these laws happen more slowly, the cell uses enzymes to make other processes happen more quickly. At the time scales relevant to cellular metabolism, the fast processes dominate, while slowly changing quantities are effectively conserved. By a suitable choice of these emergent conserved quantities, the cell ensures that certain reactions that release energy can only occur when other ‘desired’ reactions occur. To be sure, this is only approximately true, on sufficiently short time scales. But this approximation is enlightening!

Following Bazhin, our main example involves ATP hydrolysis. We consider this following schema for a whole family of reactions:

\begin{array}{ccc}  \mathrm{X} + \mathrm{ATP}  & \longleftrightarrow & \mathrm{ADP} + \mathrm{XP}_{\mathrm{i}} \qquad (1) \\  \mathrm{XP}_{\mathrm{i}} + \mathrm{Y}  & \longleftrightarrow &  \mathrm{XY} + \mathrm{P}_{\mathrm{i}} \qquad (2)  \end{array}

Some concrete examples of this schema include:

• The synthesis of glutamine (XY) from glutamate (X) and ammonium (Y). This is part of the important glutamate-glutamine cycle in the central nervous system.

• The synthesis of sucrose (XY) from glucose (X) and fructose (Y). This is one of many processes whereby plants synthesize more complex sugars and starches from simpler building-blocks.

In these and other examples, the two reactions, taken together, have the effect of synthesizing a larger molecule XY out of two parts X and Y while ATP is broken down to ADP and the phosphate ion Pi. Thus, they have the same net effect as this other pair of reactions:

\begin{array}{ccc}  \mathrm{X} + \mathrm{Y} & \longleftrightarrow & \mathrm{XY} \qquad (3) \\  \mathrm{ATP} & \longleftrightarrow & \mathrm{ADP} + \mathrm{P}_{\mathrm{i}} \qquad (4) \end{array}

The first reaction here is just the synthesis of XY from X and Y. The second is a deliberately simplified version of ATP hydrolysis. The first involves an increase of energy, while the second releases energy. But in the schema used in biology, these processes are ‘coupled’ so that ATP can only break down to ADP + Pi if X and Y combine to form XY.

As we shall see, this coupling crucially relies on a conserved quantity: the total number of Y molecules plus the total number of Pi ions is left unchanged by reactions (1) and (2). This fact is not a fundamental law of physics, nor even a general law of chemistry (such as conservation of phosphorus atoms). It is an emergent conservation law that holds approximately in special situations. Its approximate validity relies on the fact that the cell has enzymes that make reactions (1) and (2) occur more rapidly than reactions that violate this law, such as (3) and (4).
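One can check this bookkeeping directly. Here is a minimal sketch (not from the post): write each reaction of the schema as a stoichiometric change vector over its species and verify that the combination “number of Y plus number of Pi” is unchanged by reactions (1) and (2) but not by (3) and (4).

```python
# A minimal sketch (not from the post): write each reaction in the schema as a
# stoichiometric change vector and check which combination of species counts
# it conserves.
import numpy as np

species = ["X", "Y", "XY", "XPi", "ATP", "ADP", "Pi"]

def vec(changes):
    """Net change in each species count for one firing of a reaction."""
    v = np.zeros(len(species))
    for name, delta in changes.items():
        v[species.index(name)] = delta
    return v

r1 = vec({"X": -1, "ATP": -1, "ADP": +1, "XPi": +1})   # reaction (1)
r2 = vec({"XPi": -1, "Y": -1, "XY": +1, "Pi": +1})     # reaction (2)
r3 = vec({"X": -1, "Y": -1, "XY": +1})                 # reaction (3)
r4 = vec({"ATP": -1, "ADP": +1, "Pi": +1})             # reaction (4)

# Candidate emergent conserved quantity: total number of Y plus total number of Pi.
q = vec({"Y": +1, "Pi": +1})

for label, r in [("(1)", r1), ("(2)", r2), ("(3)", r3), ("(4)", r4)]:
    print(f"reaction {label} changes Y + Pi by {q @ r:+.0f}")
# Reactions (1) and (2) leave Y + Pi unchanged; (3) and (4) change it by -1 and +1,
# which is why the conservation law only holds while (3) and (4) are negligible.
```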

In the series to come, we’ll start by providing the tiny amount of chemistry and thermodynamics needed to understand what’s going on. Then we’ll raise the question “what is coupling?” Then we’ll study the reactions required for coupling ATP hydrolysis to the synthesis of XY from components X and Y, and explain why these reactions are not yet enough for coupling. Then we’ll show that coupling occurs in a ‘quasiequilibrium’ state where reactions (1) and (2), assumed much faster than the rest, have reached equilibrium, while the rest are neglected. And then we’ll explain the role of emergent conservation laws!

 


 
The paper:

• John Baez, Jonathan Lorand, Blake S. Pollard and Maru Sarazola,
Biochemical coupling through emergent conservation laws.

The blog series:

Part 1 – Introduction.

Part 2 – Review of reaction networks and equilibrium thermodynamics.

Part 3 – What is coupling?

Part 4 – Interactions.

Part 5 – Coupling in quasiequilibrium states.

Part 6 – Emergent conservation laws.

Part 7 – The urea cycle.

Part 8 – The citric acid cycle.

29 Jun 15:37

Topological Phase Transitions in Spatial Networks. (arXiv:1806.10114v1 [physics.soc-ph])

by Paul Balister, Chaoming Song, Oliver Riordan, Bela Bollobas, Albert-Laszlo Barabasi

Most social, technological and biological networks are embedded in a finite dimensional space, and the distance between two nodes influences the likelihood that they link to each other. Indeed, in social systems, the chance that two individuals know each other drops rapidly with the distance between them; in the cell, proteins predominantly interact with proteins in the same cellular compartment; in the brain, neurons mainly link to nearby neurons. Most modeling frameworks that aim to capture the empirically observed degree distributions tend to ignore these spatial constraints. In contrast, models that account for the role of the physical distance often predict bounded degree distributions, in disagreement with the empirical data. Here we address a long-standing gap in the spatial network literature by deriving several key network characteristics of spatial networks, from the analytical form of the degree distribution to path lengths and local clustering. The mathematically exact results predict the existence of two distinct phases, each governed by a different dynamical equation, with distinct testable predictions. We use empirical data to offer direct evidence for the practical relevance of each of these phases in real networks, helping better characterize the properties of spatial networks.

27 Jun 04:04

You Might Have a 'Uniquely Compelling' Reason to Find Out Whether Your Government Has Placed You on a Kill List

by Brian Doherty

It's just possible, Judge Rosemary Collyer of the U.S. District Court for the District of Columbia concluded in a decision last week, that your own government placing you, a journalist working in Syria, on a kill list might constitute a violation of your First, Fourth, and Fifth Amendment rights.

The lawsuit started with Ahmad Muaffaq Zaidan and Bilal Abdul Kareem, two journalists from the Middle East, who often report on terrorism-related stories. Zaidan, who has worked for Al Jazeera for over 20 years, thinks the United States has labeled him as a terrorist, apparently because his work has him interacting with so many of them (Zaidan has interviewed Osama Bin Laden, among others).

Kareem, an American citizen and freelance reporter, was at the site of five aerial bombings in a single three-month period while working in Syria.

Both believe they might be on a secret U.S. government "kill list" and sued various government officials from President Trump on down last year to find out if they are.

Judge Collyer, allowing the lawsuit to proceed at least in part, wrote that their complaint asserted being on such a kill list would be "arbitrary, capricious and an abuse of discretion" and "violates the prohibition on conspiring to or assassinating any person abroad" and "violated due process because Plaintiffs were provided no notice and given no opportunity to challenge their inclusion."

Further, placing them on the kill list "violated the First Amendment because it 'has the effect of restricting and inhibiting their exercise of free speech and their ability to function as journalists entitled to freedom of the press.'"

Kareem, the citizen, asserts on his behalf that being on the kill list "violated the Fourth and Fifth Amendments because it constituted an illegal seizure and 'seeks to deprive [him] of life without due process of law.'"

The government claimed Zaidan and Kareem have no standing to sue and that this whole kill list thing is a "political question" outside the jurisdiction of the federal courts.

Judge Collyer disagreed, at least as applied to U.S. citizen Kareem. Collyer did agree that when it comes to foreigner Zaidan, who is unable to prove he was indeed on any kill list, "the Court finds no allegations in the Complaint that raise that possibility above mere speculation. Accordingly, the Court finds Mr. Zaidan has failed to allege a plausible injury-in-fact and therefore has no standing to sue."

But the legal situation for Kareem is different, the judge insisted. She noted that "two of the attacks [at or near Kareem] involved his place of work, one involved his own vehicle, one involved a work vehicle in which he had been traveling immediately before, and one hit a location from which he had just walked away."

The government insisted, well, Syria's a real violent place these days and lucky for him he hasn't been killed being surrounded by so much war. Kareem's problems, the government claimed, are not "attributable to anything more than a journalist reporting from a dangerous and active battlefield."

"While it is plausible that Mr. Kareem is not being targeted by the United States," Collyer wrote, "it is also plausible that Mr. Kareem's multiple near-miss incidents were caused by Defendants' decision to include him on the Kill List and were, therefore, caused by Defendants' actions."

Collyer was unimpressed by the government's argument that this is all military business and thus not subject to judicial second-guessing. The war aspect is irrelevant, the judge maintained, since the injury Kareem alleges is the fact that he was placed on a kill list back in D.C. "Mr. Kareem complains of an alleged decision to authorize a lethal strike against him and not a decision in the field to attempt to carry out that authorization. He wants the opportunity to persuade his government that he is not a terrorist or a threat so that the alleged authorization to kill is rescinded."

Collyer used that distinction to differentiate her decision from some precedents regarding drone attacks that were seen as more specifically about a judge's second-guessing of military decisions in the field. That's not what Kareem is trying to do here, Collyer concluded. "It remains a truism that judges are not good judges of military decisions during war. The immediate Complaint asks for no such non-judicial feat; rather, it alleges that placement on the Kill List occurs only after nomination by a defense agency principal and agreement by other such principals, with prior notice to the President. The persons alleged to have exercised this authority are alleged to have followed a known procedure that occurred in Washington or its environs."

Collyer did agree with the government that certain counts in the original suit should be dismissed, including, "whether Defendants complied with the Presidential Policy Guidance [for putting people on a kill list]," which "is a political question the Court must refrain from addressing" since the guidance itself is so vague that it "provides no test or standard that must be satisfied before the government may add an individual."

In other words, the kill list policy is so inherently arbitrary there is no way to procedurally abuse it.

Similarly, "the process of determining whether Defendants exceeded their authority or violated any of the statutes referenced in the Complaint would require the Court to make a finding on the propriety of the alleged action." But that, Collyer wrote, "is prohibited by the political question doctrine."

In other words, the court can't consider whether a government act was a good idea, merely whether it violated a specific law or constitutional provision.

Luckily for Kareem, and for the larger issue of justice in executive power, the judge reasoned that the whole kill list process might have "denied Mr. Kareem his rights to due process and the opportunity to be heard and deprived him of his First, Fourth, and Fifth Amendment rights."

As Collyer concluded in letting those aspects of Kareem's case move forward:

Mr. Kareem alleges that the Defendants targeted him for lethal force by putting his name on the Kill List, which he deduces from five near misses by drones or other military strikes. As a U.S. citizen, he seeks to clarify his status and profession to Defendants and, thereby, assert his right to due process and a prior opportunity to be heard. His interest in avoiding the erroneous deprivation of his life is uniquely compelling.

Mr. Kareem does not seek a ruling that a strike by the U.S. military was mistaken or improper. He seeks his birthright instead: a timely assertion of his due process rights under the Constitution to be heard before he might be included on the Kill List and his First Amendment rights to free speech before he might be targeted for lethal action due to his profession. The D.C. Circuit and the Supreme Court have previously held that a citizen "must have a meaningful opportunity to challenge the factual basis for his designation as an enemy combatant."

This does not mean Kareem has won his case, merely that the government has failed to have it thrown out of court. Collyer acknowledged that it is not yet settled fact whether Kareem even is on a kill list, but while "the Court finds that Mr. Kareem's allegations may be wrong as a matter of fact... Complaint presents them in a plausible manner."

Opposing drone strikes on U.S. citizens was the central point behind Sen. Rand Paul's (R-Ky.) reputation-making 2013 filibuster, and for good reason: There is nothing more tyrannical than the power to specifically target someone for murder absent any judicial proceedings, which, alas, is standard operating procedure for the U.S. government thanks to our endless and impossible Forever War on Terror.

26 Jun 18:07

Worldwide Refugee Population Hits All-Time High, U.S. Intake Reaches All-Time Low

by Matt Welch

(Photo: Cheriss May/Sipa USA/Newscom) Today is World Refugee Day, which is when the United Nations High Commissioner for Refugees (UNHCR) releases its grim annual Global Trends report about people driven from their homes, and the world's politicians issue grave-sounding statements about all the work they're doing to ameliorate the crisis.

So what did the UNHCR find for 2017? A record number of displaced people: 68.5 million. And a record number of refugees leaving their home country: 25.4 million, or 2.9 million more than 2016, making it "the biggest increase UNHCR has seen in a single year." There are currently "44,500 people being displaced each day, or a person becoming displaced every two seconds." The main generators of refugees are, in order, the wars in Syria, Colombia, the Democratic Republic of Congo, Afghanistan, and South Sudan.

Secretary of State Mike Pompeo commemorated the occasion with a statement asserting that "the United States will continue to be a world leader in providing humanitarian assistance and working to forge political solutions to the underlying conflicts that drive displacement," and that "the United States provides more humanitarian assistance than any other single country worldwide, including to refugees." That leadership, however, is not reflected in the number of refugees the U.S. now takes in.

From October 1, 2017 to June 15 of this year, America has brought in 15,383 refugees. That puts the country on pace to accept just under 22,000 for this fiscal year, which would easily be the lowest number since the Refugee Act of 1980. (In Fiscal Year 2002, which began right after the September 11 attacks, the George W. Bush administration took in 27,131). Measured across presidencies, Bush took in an average of 48,000 refugees per year, Barack Obama 70,000, Ronald Reagan 82,000, Bill Clinton 89,000, Jimmy Carter 94,000, and George H.W. Bush 119,000.

We are contracting admissions just as the number of people seeking shelter outside their home countries is expanding dramatically. The global population of refugees (minus the 5.3 million registered with the U.N. Relief and Works Agency for Palestine Refugees in the Near East) was stable from 2008 to 2012, at between 10.4 million and 10.6 million, but since then we've seen this:

2013: 11.7 million

2014: 14.4 million

2015: 16.1 million

2016: 17.2 million

2017: 20.1 million

The last time the world experienced such a sharp spike in refugees, the Carter and Reagan administrations took in about 1 out of every 70 global refugees. The Trump administration is on pace right now to accept 1 out of 900.

Pompeo in his statement nodded both to those prior eras of generosity, and Donald Trump's new era of America First stinginess: "Since 1975, the United States has accepted more than 3.3 million refugees for permanent resettlement—more than any other country in the world. The United States will continue to prioritize the admission of the most vulnerable refugees while upholding the safety and security of the American people."

Or as the president himself said Monday, "The United States will not be a migrant camp, and it will not be a refugee holding facility. Won't be. You look at what's happening in Europe, you look at what's happening in other places; we can't allow that to happen to the United States. Not on my watch."

Relevant video from the archives:

26 Jun 17:33

Lawsuit Claims Detained Migrant Children Have Been Forcibly Injected with Powerful Psychiatric Drugs

by mail@democracynow.org (Democracy Now!)

Shocking reports have revealed that immigrant children were subdued and incapacitated with powerful psychiatric drugs at a detention center in South Texas. Legal filings show that children held at Shiloh Treatment Center in southern Houston have been “forcibly injected with medications that make them dizzy, listless, obese and even incapacitated,” according to reports by Reveal. Meanwhile, according to another Reveal investigation, taxpayers have paid more than $1.5 billion over the past four years to companies operating immigration youth facilities despite facing accusations of rampant sexual and physical abuse. For more, we speak with the reporter who broke these stories: Aura Bogado. She is an immigration reporter with Reveal from the Center for Investigative Reporting. Her latest stories are “Immigrant children forcibly injected with drugs, lawsuit claims” and “Migrant children sent to shelters with histories of abuse allegations.”

26 Jun 17:22

The origins of WEIRD psychology

by Tyler Cowen

This is one of the most important topics, right?  Well, here is a new and quite thorough paper by Jonathan Schulz, Duman Bahrami-Rad, Jonathan Beauchamp, and Joseph Henrich.  Here is the abstract:

Recent research not only confirms the existence of substantial psychological variation around the globe but also highlights the peculiarity of populations that are Western, Educated, Industrialized, Rich and Democratic (WEIRD). We propose that much of this variation arose as people psychologically adapted to differing kin-based institutions—the set of social norms governing descent, marriage, residence and related domains. We further propose that part of the variation in these institutions arose historically from the Catholic Church’s marriage and family policies, which contributed to the dissolution of Europe’s traditional kin-based institutions, leading eventually to the predominance of nuclear families and impersonal institutions. By combining data on 20 psychological outcomes with historical measures of both kinship and Church exposure, we find support for these ideas in a comprehensive array of analyses across countries, among European regions and between individuals with different cultural backgrounds.

As you might expect, a paper like this is fairly qualitative by its nature, and this one will not convince everybody.  Who can separate out all those causal pathways?  Even in a paper that is basically a short book.

Object all you want, but there is some chance that this is one of the half dozen most important social science and/or history papers ever written.  So maybe a few of you should read it.

And the print in the references to the supplementary materials is small, so maybe I missed it, but I don’t think there is any citation to Steve Sailer, who has been pushing a version of this idea for many years.

The post The origins of WEIRD psychology appeared first on Marginal REVOLUTION.

26 Jun 17:20

What is the role of statistics in a machine-learning world?

by Andrew

I just happened to come across this quote from Dan Simpson:

When the signal-to-noise ratio is high, modern machine learning methods trounce classical statistical methods when it comes to prediction. The role of statistics in this case is really to boost the signal-to-noise ratio through the understanding of things like experimental design.

The post What is the role of statistics in a machine-learning world? appeared first on Statistical Modeling, Causal Inference, and Social Science.

26 Jun 16:47

Supreme Court Rules 5-4 in Favor of Trump’s Travel Ban

by Damon Root

Ugh.

A closely divided U.S. Supreme Court has ruled in favor of President Donald Trump's executive proclamation banning travelers from certain largely majority-Muslim countries. "Because there is persuasive evidence that the entry suspension has a legitimate grounding in national security concerns, quite apart from any religious hostility," declared the majority opinion of Chief Justice John Roberts in Trump v. Hawaii, "we must accept that independent justification." This decision reverses a lower court ruling that had blocked the travel ban from going into effect.

At the center of the case is Trump's September 2017 "Proclamation No. 9645, Enhancing Vetting Capabilities and Processes for Detecting Attempted Entry Into the United States by Terrorists or Other Public-Safety Threats." At issue before the justices was whether this proclamation represented an invalid exercise of federal immigration power and also whether it violated the First Amendment's Establishment Clause by heaping official disfavor on a religious minority, particularly when the proclamation is viewed in light of Trump's long record of making anti-Muslim statements.

Chief Justice John Roberts, joined by Justices Anthony Kennedy, Clarence Thomas, Samuel Alito, and Neil Gorsuch, ruled in Trump's favor on both counts.

"By its plain language," the chief justice wrote, federal immigration law "grants the President broad discretion to suspend the entry of aliens into the United States. The President lawfully exercised that discretion based on his findings—following a worldwide, multi-agency review—that entry of the covered aliens would be detrimental to the national interest."

Roberts then had this to say about the Establishment Clause challenge:

Plaintiffs argue that this President's words strike at fundamental standards of respect and tolerance, in violation of our constitutional tradition. But the issue before us is not whether to denounce the statements. It is instead the significance of those statements in reviewing a Presidential directive, neutral on its face, addressing a matter within the core of executive responsibility. In doing so, we must consider not only the statements of a particular President, but also the authority of the Presidency itself.

Writing in dissent, Justice Stephen Breyer, joined by Justice Elena Kagan, argued that the Court should not have decided the case until it had the opportunity to hear additional arguments about the real-world implementation of the travel ban, particularly on how its "exemption and waiver" process is actually functioning. "If this Court must decide the question without this further litigation," Breyer wrote, "I would, on balance, find the evidence of antireligious bias."

In a separate dissent, Justice Sonia Sotomayor, joined by Justice Ruth Bader Ginsburg, charged the majority with turning a blind eye to the president's blatant Establishment Clause violation. The Court "leaves undisturbed a policy first advertised openly and unequivocally as a 'total and complete shutdown of Muslims entering the United States' because the policy now masquerades behind a facade of national-security concerns," Sotomayor wrote. "Based on the evidence in the record, a reasonable observer would conclude that the Proclamation was motivated by anti-Muslim animus. That alone suffices to show that plaintiffs are likely to succeed on the merits of their Establishment Clause claim."

At its heart, this case was about how much deference the federal courts owe to the executive branch when the executive is acting in the name of national security. According to the Court's 5-4 ruling, the executive is entitled to significant deference in such matters. "The Government has set forth a sufficient national security justification to survive rational basis review," wrote Chief Justice Roberts. "We express no view on the soundness of the policy."

The Supreme Court's opinion in Trump v. Hawaii is available here.

24 Jun 23:02

Huge Win for Everyone With a Cellphone (and for the Fourth Amendment) at the Supreme Court

by Damon Root

In a blockbuster 5-4 decision issued today, the U.S. Supreme Court ruled that warrantless government tracking of cellphone users via their cellphone location records violates the Fourth Amendment. "A person does not surrender all Fourth Amendment protection by venturing into the public sphere," declared the majority opinion of Chief Justice John Roberts. "We decline to grant the state unrestricted access to a wireless carrier's database of physical location information."

The case is Carpenter v. United States. It arose after the FBI obtained, without a search warrant, the cellphone records of a suspected armed robber named Timothy Carpenter. With those records, law enforcement officials identified the cell towers that handled his calls and then proceeded to trace back his whereabouts during the time periods in which his alleged crimes were committed. That information was used against Carpenter in court.

The central issue in the case was whether Carpenter had a "reasonable expectation of privacy" in the information contained in those records, or whether he had forfeited such privacy protections by voluntarily sharing the information with his cellular service provider. As the Supreme Court put it in United States v. Miller (1976) and Smith v. Maryland (1979), "a person has no legitimate expectation of privacy in information he voluntarily turns over to third parties."

In his ruling today, Chief Justice Roberts "decline[d] to extend Smith and Miller to cover these novel circumstances. Given the unique nature of cell phone location records, the fact that the information is held by a third party does not by itself overcome the user's claim to Fourth Amendment protection." He continued: "Whether the Government employs its own surveillance technology…or leverages the technology of a wireless carrier, we hold that an individual maintains a legitimate expectation of privacy in the record of his physical movements as captured through [cell site location information]."

Roberts' opinion was joined by Justices Ruth Bader Ginsburg, Stephen Breyer, Sonia Sotomayor, and Elena Kagan. Justice Anthony Kennedy filed a dissent, joined by Justices Clarence Thomas and Samuel Alito. Alito also filed a dissent, which Thomas joined. Thomas also filed a dissent of his own. Justice Neil Gorsuch dissented alone too.

Kennedy, joined by Thomas and Alito, complained that "the Court's stark departure from relevant Fourth Amendment precedents and principles…places undue restrictions on the lawful and necessary enforcement powers exercised not only by the Federal Government, but also by law enforcement in every State and locality throughout the Nation." In their view, the Court should have followed its precedents in Miller and Smith and held that "individuals have no Fourth Amendment interests in business records which are possessed, owned, and controlled by a third party." Cellphone records, they maintain, "are no different from the many other kinds of business records the Government has a lawful right to obtain by compulsory process."

Justice Neil Gorsuch struck an entirely different note in his lone dissent. Indeed, his dissent reads much more like a concurrence. It seems clear that while Gorsuch agreed with the majority that Carpenter deserved to win, he strongly disagreed with them about how the win should have happened.

"I would look to a more traditional Fourth Amendment approach," Gorsuch wrote. "The Fourth Amendment protects 'the right of the people to be secure in their persons, houses, papers and effects, against unreasonable searches and seizures.' True to those words and their original understanding, the traditional approach asked if a house, paper or effect was yours under law. No more was needed to trigger the Fourth Amendment." Furthermore, Gorsuch wrote, "it seems to me entirely possible a person's cell-site data could qualify as his papers or effects under existing law."

"I cannot fault" the majority "for its implicit but unmistakable conclusion that the rationale of Smith and Miller is wrong; indeed, I agree with that," Gorsuch explained. "At the same time, I do not agree with the Court's decision today to keep Smith and Miller on life support." In other words, Gorsuch would scrap these third-party precedents and have the Court start adhering to an originalist, property rights-based theory of the Fourth Amendment. That's how Gorsuch wanted Carpenter to win.

The importance of today's ruling in Carpenter v. U.S. should not be underestimated. Both the majority opinion and Gorsuch's dissent raise questions about the future viability of two key Fourth Amendment precedents. What is more, the decision itself represents a massive win for Fourth Amendment advocates. Carpenter may well be remembered as the most significant decision issued this term.

21 Jun 17:03

Hidden Quantum Processes, Quantum Ion Channels, and 1/f^θ-Type Noise

by Alan Paris
Neural Computation, Volume 30, Issue 7, Page 1830-1929, July 2018.
20 Jun 17:23

Active Growth and Pattern Formation in Membrane-Protein Systems

by F. Cagnetta, M. R. Evans, and D. Marenduzzo

Author(s): F. Cagnetta, M. R. Evans, and D. Marenduzzo

A new statistical model predicts the evolving shape of a cellular membrane by accounting for the active feedback between the membrane and attached proteins.


[Phys. Rev. Lett. 120, 258001] Published Mon Jun 18, 2018

09 Jun 17:39

MiniBooNE

by John Baez

Big news! An experiment called MiniBooNE at Fermilab near Chicago has found more evidence that neutrinos are not acting as the Standard Model says they should:

• The MiniBooNE Collaboration, Observation of a significant excess of electron-like events in the MiniBooNE short-baseline neutrino experiment.

In brief, the experiment creates a beam of muon neutrinos (or antineutrinos—they can do either one). Then they check, with a detector 541 meters away, to see if any of these particles have turned into electron neutrinos (or antineutrinos). They’ve been doing this since 2002, and they’ve found a small tendency for this to happen.

This seems to confirm findings of the Liquid Scintillator Neutrino Detector or ‘LSND’ at Los Alamos, which did a similar experiment in the 1990s. People in the MiniBooNE collaboration claim that if you take both experiments into account, the results have a statistical significance of 6.1 σ.

This means that if the Standard Model is correct and there’s no experimental error or other mistake, the chance of seeing what these experiments saw is about 1 in 1,000,000,000.
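To unpack that number, here is a quick back-of-the-envelope check (mine, not from the post), assuming the usual one-sided Gaussian convention for turning a significance in standard deviations into a tail probability.

```python
# A quick back-of-the-envelope check (mine, not from the post), assuming the
# usual one-sided Gaussian convention for turning "sigmas" into a probability.
import math

def one_sided_p(sigma):
    """P(Z > sigma) for a standard normal variable Z."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

for s in (5.0, 6.0, 6.1):
    p = one_sided_p(s)
    print(f"{s} sigma -> p = {p:.2e} (about 1 in {1 / p:,.0f})")
# 6.1 sigma comes out near 1 in 2 billion, so "about 1 in 1,000,000,000"
# is the right order of magnitude.
```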

There are 3 known kinds of neutrinos: electron, muon and tau neutrinos. Neutrinos of any kind are already known to turn into those of other kinds: these are called neutrino oscillations, and they were first discovered in the 1960s, when it was found that only about 1/3 as many electron neutrinos were coming from the Sun as expected.

At the time this was a big surprise, because people thought neutrinos were massless, moved at the speed of light, and thus didn’t experience the passage of time.

In the Standard Model as people understood it back then, the neutrinos stood out as weird in two ways: we thought they were massless, and we thought they came only in a left-handed form, meaning roughly that they spin clockwise around the axis they’re moving along.

People did a bunch of experiments and wound up changing the Standard Model. Now we know neutrinos have nonzero mass. Their masses, and also neutrino oscillations, are described using a 3×3 matrix called the lepton mixing matrix. This is not a wacky idea: in fact, quarks are described using a similar 3×3 matrix called the quark mixing matrix. So, the current-day Standard Model is more symmetrical than the earlier version: leptons are more like quarks.

There is, however, still a big difference! We haven’t seen right-handed neutrinos.

MiniBooNE and LSND are seeing muon neutrinos turn into electron neutrinos much faster than the Standard Model theory of neutrino oscillations predicts. There seems to be no way to adjust the parameters of the lepton mixing matrix to fit the data from all the other experiments people have done, and also the MiniBooNE–LSND data. If this is really true, we need a new theory of physics.
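To put a rough number on “much faster”: the post does not spell out the formula, but in the standard two-flavor approximation (textbook material, not taken from the post) the appearance probability is

P(\nu_\mu \to \nu_e) = \sin^2 (2\theta) \, \sin^2\!\left( \frac{1.27\, \Delta m^2 [\mathrm{eV}^2] \; L[\mathrm{m}]}{E[\mathrm{MeV}]} \right)

With MiniBooNE’s baseline of about 541 meters and neutrino energies of a few hundred MeV, an oscillation visible at this L/E needs a mass-squared splitting Δm² of order 1 eV², far larger than the two splittings measured in solar and atmospheric oscillations; that mismatch is one reason a fourth, sterile state gets invoked below.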

And this is where things get interesting.

The most conservative change to the Standard Model would be to add three right-handed neutrinos to go along with the three left-handed ones. This would not be an ugly ad hoc trick: it would make the Standard Model more symmetrical, by making leptons even more like quarks.

If we do this in the most beautiful way—making leptons as similar to quarks as we can get away with, given their obvious differences—the three new right-handed neutrinos will be ‘sterile’. This means that they will interact only with the Higgs boson and gravity: not electromagnetism, the weak force or the strong force. This is great, because it would mean there’s a darned good reason we haven’t seen them yet!

Neutrinos are already very hard to detect, since they don’t interact with electromagnetism or the strong force. They only interact with the Higgs boson (that’s what creates their mass, and oscillations), gravity (because they have energy), and the weak force (which is how we create and detect them). A ‘sterile’ neutrino—one that also didn’t interact with the weak force—would be truly elusive!

In practice, the main way to detect sterile neutrinos would be via oscillations. We could create an ordinary neutrino, and it might turn into a sterile neutrino, and then back into an ordinary neutrino. This would create new kinds of oscillations.

And indeed, MiniBooNE and LSND seem to be seeing new oscillations, much more rapid than those predicted by the Standard Model and our usual best estimate of the lepton mixing matrix.

So, people are getting excited! We may have found sterile neutrinos.

There’s a lot more to say. For example, the SO(10) grand unified theory predicts right-handed neutrinos in a very beautiful way, so I’m curious about what the new data implies about that. There are also questions about whether a sterile neutrino could explain dark matter… or what limits astronomical observations place on the properties of sterile neutrinos. One should also wonder about the possibility of experimental error!

I would enjoy questions that probe deeper into this subject, since they might force me to study and learn more. Right now I have to go to Joshua Tree! But I’ll come back and answer your questions tomorrow morning.





09 Jun 03:16

Role of Symmetry in Irrational Choice. (arXiv:1806.02627v3 [physics.pop-ph] UPDATED)

by Ivan Kozic

Symmetry is a fundamental concept in modern physics and other related sciences. Being such a powerful tool, almost all physical theories can be derived from symmetry, and the effectiveness of such an approach is astonishing. Since many physicists do not actually believe that symmetry is a fundamental feature of nature, it seems more likely it is a fundamental feature of human cognition. According to evolutionary psychologists, humans have a sensory bias for symmetry. The unconscious quest for symmetrical patterns has developed as a solution to specific adaptive problems related to survival and reproduction. Therefore, it comes as no surprise that some fundamental concepts in psychology and behavioral economics necessarily involve symmetry. The purpose of this paper is to draw attention to the role of symmetry in decision-making and to illustrate how it can be algebraically operationalized through the use of mathematical group theory.

08 Jun 22:04

Applied Category Theory: Resource Theories

by john

My course on applied category theory is continuing! After a two-week break where the students did exercises, I went back to lecturing about Fong and Spivak’s book Seven Sketches. The second chapter is about ‘resource theories’.


Resource theories help us answer questions like the following (a toy sketch of the first one appears right after the list):

  1. Given what I have, is it possible to get what I want?
  2. Given what I have, how much will it cost to get what I want?
  3. Given what I have, how long will it take to get what I want?
  4. Given what I have, what is the set of ways to get what I want?
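Question 1 is essentially a reachability question in a preorder of resources. Here is a toy sketch (my own illustration, not from Seven Sketches, with made-up item names): resources are multisets, each allowed conversion turns one multiset into another, and “x can be converted into y” is checked by a bounded breadth-first search.

```python
# A toy illustration of question 1 (mine, not from Seven Sketches): resources are
# multisets of items, each allowed conversion turns one multiset into another, and
# "x can be converted into y" is the reachability relation of a monoidal preorder.
from collections import Counter

# Hypothetical conversions, purely for illustration.
reactions = [
    (Counter({"wood": 1}), Counter({"plank": 2})),
    (Counter({"plank": 4, "nail": 8}), Counter({"table": 1})),
]

def has_at_least(state, wanted):
    return all(state[k] >= v for k, v in wanted.items())

def convertible(start, goal, max_steps=20):
    """Bounded breadth-first search: can `start` be turned into at least `goal`?"""
    frontier, seen = [Counter(start)], set()
    goal = Counter(goal)
    for _ in range(max_steps):
        next_frontier = []
        for state in frontier:
            if has_at_least(state, goal):
                return True
            for inputs, outputs in reactions:
                if has_at_least(state, inputs):
                    nxt = state - inputs + outputs
                    key = tuple(sorted(nxt.items()))
                    if key not in seen:
                        seen.add(key)
                        next_frontier.append(nxt)
        frontier = next_frontier
    return False

print(convertible({"wood": 2, "nail": 8}, {"table": 1}))   # True
print(convertible({"wood": 1, "nail": 8}, {"table": 1}))   # False
```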

Resource theories in their modern form were arguably born in these papers:

We are lucky to have Tobias in our course, helping the discussions along! He’s already posted some articles on resource theory on the Azimuth blog:

In the course, we had fun bouncing between the relatively abstract world of monoidal preorders and their very concrete real-world applications to chemistry, scheduling, manufacturing and other topics. Here are the lectures:

08 Jun 21:53

Money for nothing: the truth about universal basic income

by Carrie Arnold


Money for nothing: the truth about universal basic income, Published online: 30 May 2018; doi:10.1038/d41586-018-05259-x

Several projects are testing the idea of doling out funds that people can use however they want.
08 Jun 21:24

Experimental evidence for tipping points in social convention

by Centola, D., Becker, J., Brackbill, D., Baronchelli, A.

Theoretical models of critical mass have shown how minority groups can initiate social change dynamics in the emergence of new social conventions. Here, we study an artificial system of social conventions in which human subjects interact to establish a new coordination equilibrium. The findings provide direct empirical demonstration of the existence of a tipping point in the dynamics of changing social conventions. When minority groups reached the critical mass—that is, the critical group size for initiating social change—they were consistently able to overturn the established behavior. The size of the required critical mass is expected to vary based on theoretically identifiable features of a social setting. Our results show that the theoretically predicted dynamics of critical mass do in fact emerge as expected within an empirical system of social coordination.

08 Jun 21:24

Numerical ordering of zero in honey bees

by Howard, S. R., Avargues-Weber, A., Garcia, J. E., Greentree, A. D., Dyer, A. G.

Some vertebrates demonstrate complex numerosity concepts—including addition, sequential ordering of numbers, or even the concept of zero—but whether an insect can develop an understanding for such concepts remains unknown. We trained individual honey bees to the numerical concepts of "greater than" or "less than" using stimuli containing one to six elemental features. Bees could subsequently extrapolate the concept of less than to order zero numerosity at the lower end of the numerical continuum. Bees demonstrated an understanding that parallels animals such as the African grey parrot, nonhuman primates, and even preschool children.

29 May 22:28

Unless You Don't Program Them To Do That

by noreply@blogger.com (Atrios)
I've long said safety isn't the real issue with self-driving cars, in that if they work they'll be safe enough, and that programming them not to hit things has to be the bare minimum easiest thing to do. Even this isn't *that* easy as there is a bit of a problem at high speeds. They don't actually see that far ahead at the moment. Still. "If see object, brake or turn." Not hard.

Unless, of course, you don't tell them to do that.

Uber’s vehicle used Volvo software to detect external objects. Six seconds before striking Herzberg, the system detected her but didn’t identify her as a person. The car was traveling at 43 mph.

The system determined 1.3 seconds before the crash that emergency braking would be needed to avert a collision. But the vehicle did not respond, striking Herzberg at 39 mph.


And why was that? Oh.

According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.

There's a lot of chatter about where exactly the civil liability is going to fall for these things. What about the criminal liability?
23 May 17:56

Effective thermodynamics for a marginal observer

by tomate

Suppose you receive an email from someone who claims “here is the design of a machine that runs forever and ever and produces energy for free!”. Obviously he must be a crackpot. But he may be well-intentioned. You opt for not being rude, roll up your sleeves, and put your hands into the dirt, holding the Second Law as your lodestar.

Keep in mind that there are two fundamental sources of error: either he is not considering certain input currents (“hey, what about that tiny hidden cable entering your machine from the electrical power line?!”, “uh, ah, that’s just to power the “ON” LED”, “mmmhh, you sure?”), or else he is not measuring the energy input correctly (“hey, why are you using a Geiger counter to measure input voltages?!”, “well, sir, I ran out of voltmeters…”).

In other words, the observer might only have partial information about the setup, either in quantity or quality, because he has been marginalized by society (most crackpots believe they are misunderstood geniuses). Therefore we will call such an observer “marginal”, which incidentally is also the word that mathematicians use when they focus on the probability of a subset of stochastic variables… In fact, our modern understanding of thermodynamics as embodied in statistical mechanics and stochastic processes is founded (and funded) on ignorance: we never really have “complete” information. If we actually did, all energy would look alike, it would not come in “more refined” and “less refined” forms, there would be no differential of order/disorder (using Paul Valéry’s beautiful words), and that would end thermodynamic reasoning, the energy problem, and generous research grants altogether.

Even worse, within this statistical approach we might be missing chunks of information because some parts of the system are invisible to us. But then, what guarantees that we are doing things right, and that he (our correspondent) is the crackpot? Couldn’t it be the other way around? Here I would like to present some recent ideas I’ve been working on together with some collaborators on how to deal with incomplete information about the sources of dissipation of a thermodynamic system. I will do this in a quite theoretical manner, but somehow I will mimic the guidelines suggested above for debunking crackpots. My three buzzwords will be: marginal, effective, and operational.

“COMPLETE” THERMODYNAMICS: AN OUT-OF-THE-BOX VIEW

The laws of thermodynamics that I address are:

  • The good ol’ Second Law (2nd)
  • The Fluctuation-Dissipation Relation (FDR), and the Reciprocal Relation (RR) close to equilibrium
  • The more recent Fluctuation Relation (FR) and its corollary the Integral FR (IFR), which have been discussed on this blog in a remarkable post by Matteo Smerlak.

The list above is all in the “area of the second law”. How about the other laws? Well, thermodynamics has for long been a phenomenological science, a patchwork. So-called Stochastic Thermodynamics is trying to put some order in it by systematically grounding thermodynamic claims in (mostly Markov) stochastic processes. But it’s not an easy task, because the different laws of thermodynamics live in somewhat different conceptual planes. And it’s not even clear if they are theorems, prescriptions, habits (a bit like in jurisprudence…). Within Stochastic Thermodynamics, the Zeroth Law is so easy nobody cares to formulate it (I do, so stay tuned…). The Third Law: no idea, let me know. As regards the First Law (or, better, “laws”, as many as there are conserved quantities across the system/environment interface…), we will assume that all related symmetries have been exploited from the outset to boil down the description to a minimum.


This minimum is as follows. We identify a system that is well separated from its environment. The system evolves in time; the environment is so large that its state does not evolve within the timescales of the system. When tracing out the environment from the description, an uncertainty falls upon the system’s evolution. We assume the system’s dynamics to be described by a stochastic Markovian process.

How exactly the system evolves and what is the relationship between system and environment will be described in more detail below. Here let us take an “out of the box” view. We resolve the environment into several reservoirs labeled by index \alpha. Each of these reservoirs is “at equilibrium” on its own (whatever that means…). Now, the idea is that each reservoir tries to impose “its own equilibrium” on the system, and that their competition leads to a flow of currents across the system/environment interface. Each time an amount of the reservoir’s resource crosses the interface, a “thermodynamic cost” has to be paid or gained (be it a chemical potential difference for a molecule to go through a membrane, or a temperature gradient for photons to be emitted/absorbed, etc.).

The fundamental quantities of stochastic thermo-dynamic modeling thus are:

  • On the “-dynamic” side: the time-integrated currents \Phi^t_\alpha, independent among themselves. Currents are stochastic variables distributed with joint probability density

P(\{\Phi_\alpha\}_\alpha)

  • On the “thermo-” side: The so-called thermodynamic forces or “affinities” \mathcal{A}_\alpha (collectively denoted \mathcal{A}). These are tunable parameters that characterize reservoir-to-reservoir gradients, and they are not stochastic. For convenience, we conventionally take them all positive.

Dissipation is quantified by the entropy production:

\sum \mathcal{A}_\alpha \Phi^t_\alpha

We are finally in the position to state the main results. Be warned that in the following expressions the exact treatment of time and its scaling would require a lot of specifications, but keep in mind that all these relations hold true in the long-time limit, and that all cumulants scale linearly with time.

  • FR: The probability of observing positive currents is exponentially favoured with respect to negative currents according to

P(\{\Phi_\alpha\}_\alpha) / P(\{-\Phi_\alpha\}_\alpha) = \exp \sum \mathcal{A}_\alpha \Phi^t_\alpha

Comment: This is not trivial: it follows from the explicit expression of the path integral (see below).

  • IFR: The average of the exponential of minus the entropy production is unity

\big\langle  \exp - \sum \mathcal{A}_\alpha \Phi^t_\alpha  \big\rangle_{\mathcal{A}} =1

Homework: Derive this relation from the FR in one line.
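One possible route, if you would rather peek than derive (a sketch using only the FR and normalization, so skip it to keep the homework fun): the FR says P(\{\Phi_\alpha\}_\alpha) \, e^{- \sum \mathcal{A}_\alpha \Phi^t_\alpha} = P(\{-\Phi_\alpha\}_\alpha), hence

\big\langle \exp - \sum \mathcal{A}_\alpha \Phi^t_\alpha \big\rangle_{\mathcal{A}} = \int \prod_\alpha d\Phi_\alpha \, P(\{\Phi_\alpha\}_\alpha) \, e^{- \sum \mathcal{A}_\alpha \Phi^t_\alpha} = \int \prod_\alpha d\Phi_\alpha \, P(\{-\Phi_\alpha\}_\alpha) = 1

where the last step is just normalization after flipping the sign of the integration variables.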

  • 2nd Law: The average entropy production is not negative

\sum \mathcal{A}_\alpha \left\langle \Phi^t_\alpha \right\rangle_{\mathcal{A}} \geq 0

Homework: Derive this relation using Jensen’s inequality.

  • Equilibrium: Average currents vanish if and only if affinities vanish:

\left\langle \Phi^t_\alpha \right\rangle_{\mathcal{A}} \equiv 0, \forall \alpha \iff  \mathcal{A}_\alpha \equiv 0, \forall \alpha

Homework: Derive this relation by taking the first derivative w.r.t. {\mathcal{A}_\alpha} of the IFR. Notice that the average also depends on the affinities.

  • S-FDR: At equilibrium, it is impossible to tell whether a current is due to a spontaneous fluctuation (quantified by its variance) or to an external perturbation (quantified by the response of its mean). In a symmetrized (S-) version:

\left.  \frac{\partial}{\partial \mathcal{A}_\alpha}\left\langle \Phi^t_{\alpha'} \right\rangle \right|_{0} + \left.  \frac{\partial}{\partial \mathcal{A}_{\alpha'}}\left\langle \Phi^t_{\alpha} \right\rangle \right|_{0} = \left. \left\langle \Phi^t_{\alpha} \Phi^t_{\alpha'} \right\rangle \right|_{0}

Homework: Derive this relation taking the mixed second derivatives w.r.t.  {\mathcal{A}_\alpha} of the IFR.

  • RR: The reciprocal response of two different currents to a perturbation of the reciprocal affinities close to equilibrium is symmetrical:

\left.  \frac{\partial}{\partial \mathcal{A}_\alpha}\left\langle \Phi^t_{\alpha'} \right\rangle \right|_{0} - \left.  \frac{\partial}{\partial \mathcal{A}_{\alpha'}}\left\langle \Phi^t_{\alpha} \right\rangle \right|_{0} = 0

Homework: Derive this relation taking the mixed second derivatives w.r.t.  {\mathcal{A}_\alpha} of the FR.

Notice the implication scheme: FR => IFR => 2nd, IFR => S-FDR, FR => RR.

“MARGINAL” THERMODYNAMICS (STILL OUT-OF-THE-BOX)

Now we assume that we can only measure a marginal subset of currents \{\Phi_\mu^t\}_\mu \subset \{\Phi_\alpha^t\}_\alpha (index \mu always has a smaller range than \alpha), distributed with joint marginal probability

P(\{\Phi_\mu\}_\mu) = \int \prod_{\alpha \neq \mu} d\Phi_\alpha \, P(\{\Phi_\alpha\}_\alpha)


Notice that a state where these marginal currents vanish might not be an equilibrium, because other currents might still be whirling around. We call this a stalling state.

\mathrm{stalling:} \qquad \langle \Phi_\mu \rangle \equiv 0,  \quad \forall \mu

My central question is: can we associate to these currents some effective affinity \mathcal{Q}_\mu in such a way that at least some of the results above still hold true? And are all the definitions involved just fancy mathematical constructs, or are they operational?

First the bad news: In general the FR is violated for all choices of effective affinities:

P(\{\Phi_\mu\}_\mu) / P(\{-\Phi_\mu\}_\mu) \neq \exp \sum \mathcal{Q}_\mu \Phi^t_\mu

This is not surprising and nobody would expect that. How about the IFR?

  • Marginal IFR: There are effective affinities such that

\left\langle \exp - \sum \mathcal{Q}_\mu \Phi^t_\mu \right\rangle_{\mathcal{A}} =1

Mmmhh. Yeah. Take a closer look at this expression: can you see why there actually exist infinitely many choices of “effective affinities” that would make that average equal 1? And the average, after all, is just a number, so who even cares? So this can’t be the point.

The fact is that the IFR per se is hardly of any practical interest, as are all “absolutes” in physics. What matters is “relatives”: in our case, response. But then we need to specify how the effective affinities depend on the “real” affinities. And here steps in a crucial technicality, whose precise argumentation is a pain. Based on reasonable assumptions, we demonstrate that the IFR holds for the following choice of effective affinities:

\mathcal{Q}_\mu = \mathcal{A}_\mu - \mathcal{A}^{\mathrm{stalling}}_\mu,

where \mathcal{A}^{\mathrm{stalling}} is the set of values of the affinities that make the marginal currents stall. Notice that this latter formula gives an operational definition of the effective affinities that could in principle be reproduced in the laboratory (just go out there and tune the tunable until everything stalls, and measure the difference). Obviously:

  • Stalling: Marginal currents vanish if and only if effective affinities vanish:

\left\langle \Phi^t_\mu \right\rangle_{\mathcal{A}} \equiv 0, \forall \mu \iff \mathcal{Q}_\mu \equiv 0, \forall \mu

Now, according to the inference scheme illustrated above, we can also prove that:

  •  Effective 2nd Law: The average marginal entropy production is not negative

\sum \mathcal{Q}_\mu \left\langle \Phi^t_\mu \right\rangle_{\mathcal{A}} \geq 0

  • S-FDR at stalling:

\left. \frac{\partial}{\partial \mathcal{A}_\mu}\left\langle \Phi^t_{\mu'} \right\rangle \right|_{\mathcal{A}^{\mathrm{stalling}}} + \left. \frac{\partial}{\partial \mathcal{A}_{\mu'}}\left\langle \Phi^t_{\mu} \right\rangle \right|_{\mathcal{A}^{\mathrm{stalling}}} = \left. \left\langle \Phi^t_{\mu} \Phi^t_{\mu'} \right\rangle \right|_{\mathcal{A}^{\mathrm{stalling}}}

Notice instead that the RR is gone at stalling. This is a clear-cut prediction of the theory that can be tested with basically the same apparatuses with which response theory has been tested so far (not that I actually know what these apparatuses are…): at stalling states, unlike at equilibrium states, the S-FDR still holds, but the RR does not.

INTO THE BOX

You have definitely had enough of it at this point, and you can give up here. Please exit through the gift shop.

If you’re stubborn, let me tell you what’s inside the box. The system’s dynamics is modeled as a continuous-time, discrete configuration-space Markov “jump” process. The state space can be described by a graph G=(I, E) where I is the set of configurations, E is the set of possible transitions or “edges”, and there exists some incidence relation between edges and couples of configurations. The process is determined by the rates w_{i \gets j} of jumping from one configuration to another.

We choose these processes because they allow some nice network analysis and because the path integral is well defined! A single realization of such a process is a trajectory

\omega^t = (i_0,\tau_0) \to (i_1,\tau_1) \to \ldots \to (i_N,\tau_N)

A “Markovian jumper” waits at some configuration i_n for a time \tau_n drawn from the exponentially decaying probability density w_{i_n} \exp (- w_{i_n} \tau_n), where the exit rate is w_i = \sum_k w_{k \gets i}; it then instantaneously jumps to a new configuration i_{n+1} with transition probability w_{i_{n+1} \gets {i_n}}/w_{i_n}. The overall probability density of a single trajectory is given by

P(\omega^t) = \delta \left(t - \sum_n \tau_n \right) e^{- w_{i_N}\tau_N} \prod_{n=0}^{N-1} w_{i_{n+1} \gets i_n} e^{- w_{i_n} \tau_n}

One can in principle obtain the p.d.f. of any observable defined along the trajectory by taking the marginal of this measure (though in most cases this is technically impossible). Where does this expression come from? For a formal derivation, see the very beautiful review paper by Weber and Frey, but be aware that this is what one would intuitively come up with if one had to simulate the process with the Gillespie algorithm.
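In fact, the trajectory measure above is exactly what a kinetic Monte Carlo (Gillespie-type) simulation samples from. A minimal sketch, with made-up rates for a three-configuration network (my illustration, not code from the post):

```python
# A minimal Gillespie-style sampler of the trajectory measure above
# (an illustration with made-up rates, not code from the post).
import random

# w[j][i] is the rate w_{j <- i} of jumping from configuration i to configuration j.
w = [[0.0, 2.0, 1.0],
     [1.0, 0.0, 3.0],
     [2.0, 1.0, 0.0]]

def gillespie(i0, t_max, rng=random.Random(42)):
    t, i, trajectory = 0.0, i0, [(i0, 0.0)]
    while True:
        exit_rate = sum(w[j][i] for j in range(len(w)) if j != i)
        tau = rng.expovariate(exit_rate)   # waiting time with density exit_rate * exp(-exit_rate * tau)
        if t + tau > t_max:
            return trajectory
        t += tau
        # pick the next configuration j with probability w_{j <- i} / exit_rate
        r, acc = rng.random() * exit_rate, 0.0
        for j in range(len(w)):
            if j == i:
                continue
            acc += w[j][i]
            if r <= acc:
                i = j
                break
        trajectory.append((i, t))

print(gillespie(0, 5.0)[:5])   # first few (configuration, jump time) pairs
```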

The dynamics of the Markov process can also be described by the probability of being at some configuration i at time t, which evolves with the master equation

\dot{p}_i(t) = \sum_j \left[ w_{i \gets j} \, p_j(t) - w_{j \gets i} \, p_i(t) \right].

We call this probability distribution the system’s state, and we assume that the system relaxes to a uniquely defined steady state p = \lim_{t \to \infty} p(t).
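For a finite network the steady state is just the null vector of the generator, so it can be computed directly; here is a small numpy sketch (same made-up rates as in the Gillespie example above):

import numpy as np

W = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [2.0, 1.0, 0.0]])                 # W[i, j] = w_{i <- j}, illustrative values

# Generator of the master equation: L[i, j] = w_{i <- j} - delta_{ij} * sum_k w_{k <- j}
L = W - np.diag(W.sum(axis=0))

# The steady state solves L p = 0, with the components of p summing to one.
eigenvalues, eigenvectors = np.linalg.eig(L)
p = np.real(eigenvectors[:, np.argmin(np.abs(eigenvalues))])
p = p / p.sum()
print(p)                                        # steady-state probabilities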

A time-integrated current along a single trajectory is a linear combination of the net number of jumps \#^t between configurations in the network:

\Phi^t_\alpha = \sum_{ij} C^{ij}_\alpha \left[ \#^t(i \gets j) - \#^t(j\gets i) \right]

The idea here is that one or several transitions within the system occur because of the “absorption” or the “emission” of some environmental degrees of freedom, each with different intensity. However, for the moment let us simplify the picture and require that only one transition contributes to a current, that is that there exist i_\alpha,j_\alpha such that

C^{ij}_\alpha = \delta^i_{i_\alpha} \delta^j_{j_\alpha}.
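To make the counting explicit, here is a tiny sketch (a hand-written toy trajectory, not real data) of how the net number of jumps, and hence a single-edge current, would be tallied along a trajectory:

import numpy as np

# A toy sequence of visited configurations; only the jumps matter for the counting.
visits = [0, 1, 2, 0, 1, 0, 2, 1, 2, 0]
n = 3

# net[i, j] = #(i <- j) - #(j <- i), the net number of jumps from j to i.
net = np.zeros((n, n))
for j, i in zip(visits[:-1], visits[1:]):       # each step is a jump j -> i
    net[i, j] += 1
    net[j, i] -= 1

# With C^{ij}_alpha = delta^i_{i_alpha} delta^j_{j_alpha} and (i_alpha, j_alpha) = (2, 1):
print(net[2, 1])                                # the time-integrated current Phi^t_alpha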

Now, what does it mean for such a set of currents to be “complete”? Here we get inspiration from Kirchhoff’s Current Law in electrical circuits: the continuity of the trajectory at each configuration of the network implies that after a sufficiently long time, cycle or loop or mesh currents completely describe the steady state. There is a standard procedure to identify a set of cycle currents: take a spanning tree T of the network; then the currents flowing along the edges E\setminus T left out from the spanning tree form a complete set.

The last ingredient you need to know is the affinities. They can be constructed as follows. Consider the Markov process on the network where the observable edges are removed, G' = (I,T). Calculate the steady state of its associated master equation, (p^{\mathrm{eq}}_i)_i, which is necessarily an equilibrium (since there cannot be cycle currents in a tree…). Then the affinities are given by

\mathcal{A}_\alpha = \log \frac{w_{i_\alpha \gets j_\alpha} \, p^{\mathrm{eq}}_{j_\alpha}}{w_{j_\alpha \gets i_\alpha} \, p^{\mathrm{eq}}_{i_\alpha}}.
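Putting the last two ingredients together, here is a sketch (again with invented rates; the breadth-first spanning tree is just one convenient choice) that picks out the chord edges and computes the affinities from the equilibrium of the tree-restricted process:

import numpy as np
from collections import deque

W = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [2.0, 1.0, 0.0]])                 # W[i, j] = w_{i <- j}, illustrative values
n = len(W)
edges = [(i, j) for i in range(n) for j in range(i) if W[i, j] > 0]   # undirected edges

# Breadth-first spanning tree T rooted at configuration 0.
tree, seen, queue = [], {0}, deque([0])
while queue:
    v = queue.popleft()
    for (a, b) in edges:
        other = b if a == v else a if b == v else None
        if other is not None and other not in seen:
            seen.add(other); tree.append((a, b)); queue.append(other)

chords = [e for e in edges if e not in tree]    # the edges E \ T supporting cycle currents

# Equilibrium of the process restricted to the tree edges.
W_tree = np.zeros_like(W)
for (a, b) in tree:
    W_tree[a, b], W_tree[b, a] = W[a, b], W[b, a]
L_tree = W_tree - np.diag(W_tree.sum(axis=0))
eigenvalues, eigenvectors = np.linalg.eig(L_tree)
p_eq = np.real(eigenvectors[:, np.argmin(np.abs(eigenvalues))])
p_eq = p_eq / p_eq.sum()

# Affinity of each chord alpha = (i_a, j_a), using the full rates and the tree equilibrium.
for (i_a, j_a) in chords:
    A = np.log(W[i_a, j_a] * p_eq[j_a] / (W[j_a, i_a] * p_eq[i_a]))
    print(f"affinity of chord {i_a} <- {j_a}: {A:.3f}")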

Now you have all that is needed to formulate the complete theory and prove the FR.

Homework: (Difficult!) With the above definitions, prove the FR.

How about the marginal theory? To define the effective affinities, take the set E_{\mathrm{mar}} = \{i_\mu j_\mu, \forall \mu\} of edges along which the observable currents run. Notice that its complement, the hidden edge set E_{\mathrm{hid}} = E \setminus E_{\mathrm{mar}} obtained by removing the observable edges, is not in general a spanning tree: there might be cycles that are not accounted for by our observations. However, we can still consider the Markov process on the hidden edge set, calculate its stalling steady state p^{\mathrm{st}}_i, and ta-taaa: the effective affinities are given by

\mathcal{Q}_\mu = \log \frac{w_{i_\mu \gets j_\mu} \, p^{\mathrm{st}}_{j_\mu}}{w_{j_\mu \gets i_\mu} \, p^{\mathrm{st}}_{i_\mu}}.
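In the same toy network, pretending that the single edge between configurations 2 and 1 is the one carrying an observable current, the recipe reads (sketch only, made-up rates as before):

import numpy as np

W = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [2.0, 1.0, 0.0]])                 # W[i, j] = w_{i <- j}, illustrative values

observable = [(2, 1)]                           # E_mar: edges carrying observable currents

# Hidden process: remove the observable edges in both directions.
W_hid = W.copy()
for (i_m, j_m) in observable:
    W_hid[i_m, j_m] = W_hid[j_m, i_m] = 0.0

# Stalling steady state of the hidden process.
L_hid = W_hid - np.diag(W_hid.sum(axis=0))
eigenvalues, eigenvectors = np.linalg.eig(L_hid)
p_st = np.real(eigenvectors[:, np.argmin(np.abs(eigenvalues))])
p_st = p_st / p_st.sum()

# Effective affinity of each observable edge: full rates, stalling state.
for (i_m, j_m) in observable:
    Q = np.log(W[i_m, j_m] * p_st[j_m] / (W[j_m, i_m] * p_st[i_m]))
    print(f"effective affinity of edge {i_m} <- {j_m}: {Q:.3f}")

In this tiny example the hidden edges happen to form a spanning tree, so the effective affinity coincides with the complete one; with more configurations, hidden cycles would make the two differ.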

Proving the marginal IFR is far more complicated than proving the complete FR. In fact, very often in my field we do not work with the currents’ probability density itself, but prefer to take its bidirectional Laplace transform and work with the currents’ cumulant generating function. There things take a quite different and more elegant look.

Many other questions and possibilities open up now. The most important one left open is: Can we generalize the theory to the (physically relevant) case where the current is supported on several edges? For example, to a current defined like \Phi^t = 5 \Phi^t_{12} + 7 \Phi^t_{34}? Well, it depends: the theory holds provided that the stalling state is not “internally alive”, meaning that if the observable current vanishes on average, then \Phi^t_{12} and \Phi^t_{34} should also vanish separately. This turns out to be a physically meaningful but quite strict condition.

IS ALL OF THERMODYNAMICS “EFFECTIVE”?

Let me conclude with some more of those philosophical considerations that sadly I have to leave out of papers…

Stochastic thermodynamics strongly depends on the identification of physical and information-theoretic entropies — something that I did not openly talk about, but that lurks behind the whole construction. Throughout my short experience as a researcher I have been pursuing a program of “relativization” of thermodynamics, making the role of the observer more and more evident and movable. Inspired by Einstein’s gedankenexperimenten, I also tried to make the theory operational. This program may raise eyebrows here and there: many thermodynamicians embrace a naïve materialistic world-view whereby the only things that matter are “real” physical quantities like temperature and pressure, and all the rest of the information-theoretic discourse is at best mathematical speculation or a fascinating analogy with no fundamental bearing. According to some, information as a physical concept lingers alarmingly close to certain extreme postmodern claims in the social sciences that “reality” does not exist unless observed, a position deemed dangerous at times when the authoritativeness of science is threatened by all sorts of anti-scientific waves.

I think, on the contrary, that making concepts relative and effective, and summoning the observer explicitly, is a laic and prudent position that serves as an antidote to radical subjectivity. The other way around, clinging to the objectivity of a preferred observer — which is implied in any materialistic interpretation of thermodynamics, e.g. by assuming that the most fundamental degrees of freedom are the positions and velocities of a gas’s molecules — is the dangerous position, especially when the role of such a preferred observer is passed around from the scientist to the technician and eventually to the technocrat, who would be induced to believe there are simple technological fixes to complex social problems.

How do we reconcile observer-dependency and the laws of physics? The object and the subject? On the one hand, much as the position of an object depends on the reference frame, so too do entropy and entropy production depend on the observer and on the particular apparatus he controls or experiment he is involved in. On the other hand, much as motion is ultimately independent of position and is agreed upon by all observers that share compatible measurement protocols, so too are the laws of thermodynamics independent of that particular observer’s quantification of entropy and entropy production (e.g., the effective Second Law holds independently of how much the marginal observer knows of the system, provided he operates according to our phenomenological protocol…). This is the case even in the everyday thermodynamics practiced by energy engineers et al., where there are lots of choices to gauge upon, and there is no external warrant that the amount of dissipation being quantified is the “true” one (whatever that means…) — there can only be trust in one’s own good practices and methodology.

So in this sense, I like to think that all observers are marginal, that this effective theory serves as a dictionary by which different observers practice and communicate thermodynamics, and that we should not revere the laws of thermodynamics as “true idols”, but rather treat them as tools of good scientific practice.

REFERENCES

  • M. Polettini and M. Esposito,  Effective fluctuation and response theory, arXiv:1803.03552

In this work we give the complete theory and numerous references to work of other people that was along the same lines. We employ a “spiral” approach to the presentation of the results, inspired by the pedagogical principle of Albert Baez.

  • M. Polettini and M. Esposito,  Effective thermodynamics for a marginal observer, Phys. Rev. Lett. 119, 240601 (2017), arXiv:1703.05715

This is a shorter version of the story.

  • B. Altaner, M. Polettini, and M. Esposito, Fluctuation-Dissipation Relations Far from Equilibrium, Phys. Rev. Lett. 117, 180601 (2016), arXiv:1604.0883

Early version of the story, containing the FDR results but not the full-fledged FR.

  • G. Bisker, M. Polettini, T. R. Gingrich and J. M. Horowitz, Hierarchical bounds on entropy production inferred from partial information, J. Stat. Mech. 093210 (2017), arXiv:1708.06769

Some extras.

  • M. F. Weber and E. Frey, Master equations and the theory of stochastic path integrals, Rep. Progr. Phys. 80, 046601 (2017).

Great reference if one wishes to learn about path integrals for master equation systems.

1 There are as many so-called “Fluctuation Theorems” as there are authors working on them, so I decided not to call them by any name. Furthermore, notice I prefer to distinguish between a relation (a formula) and a theorem (a line of reasoning). I lingered more on this here.

2 “Just so you know, nobody knows what energy is.” (Richard Feynman)

I cannot help but mention here the beautiful book by Shapin and Schaffer, Leviathan and the Air-Pump, about the Boyle vs. Hobbes dispute over what constitutes a “matter of fact,” and Bruno Latour’s interpretation of it in We Have Never Been Modern. Latour argues that “modernity” is a process of separation of the human and natural spheres, and within each of these spheres a process of purification of the unit facts of knowledge and the unit facts of politics, of the object and the subject. At the same time we live in a world where these two spheres are never truly separated, a world of “hybrids” that are at the same time necessary “for all practical purposes” and inconceivable according to the myths that sustain the narration of science, of the State, and even of religion. In fact, despite these myths, we cannot conceive a scientific fact outside of the contextual “network” where this fact is produced and replicated, nor can we conceive society apart from the material needs that shape it: so in this sense “we have never been modern”, we are not so different from all those societies that we take pleasure in studying with the tools of anthropology. Within the scientific community Latour is widely despised; probably he is also misread. While it is really difficult to see how his analysis applies to, say, high-energy physics, I find that thermodynamics and its ties to the industrial revolution perfectly embody this tension between the natural and the artificial, the matter of fact and the matter of concern. Such great thinkers as Einstein and Ehrenfest thought of the Second Law as the only physical law that would never be replaced, and I believe this is revelatory. A second thought on the Second Law, a systematic and precise definition of all its terms and circumstances, reveals that the only formulations that make sense are phenomenological statements such as Kelvin-Planck’s or similar, which require a lot of contingent definitions regarding the operation of the engine, while fetishized and universal statements are nonsensical (such as that masterwork of confusion that is “the entropy of the Universe cannot decrease”). In this respect, it is neither a purely natural law — as the moderns argue, nor a purely social construct — as the postmoderns argue. One simply has to renounce this separation. While I do not have a definite answer on this problem, I like to think of the Second Law as a practice, a consistency check of the thermodynamic discourse.

3 This assumption really belongs to a time, the XIXth century, when resources were virtually infinite on planet Earth…

4 As we will see shortly, we define equilibrium as that state where there are no currents at the interface between the system and the environment, so what is the environment’s own definition of equilibrium?!

5 This is because we already exploited the First Law.

6 This nomenclature comes from alchemy, via chemistry (think of Goethe’s The Elective Affinities…); it propagated through the XXth century via De Donder and Prigogine, and it is still present in our language in Luxembourg because in some way we come from the “late Brussels school”.

7 Basically, we ask that the tunable parameters are environmental properties, such as temperatures, chemical potentials, etc. and not internal properties, such as the energy landscape or the activation barriers between configurations.

21 May 17:43

Baby's hand mummified by copper coin

by Minnesotastan
The remains are currently on display at Hungary’s Móra Ferenc Museum.

From inspecting the tiny skeleton, Dr. Balázs determined the deceased was either a stillbirth or a premature baby that died shortly after birth. The researchers concluded the child was 11 to 13 inches long and weighed only one or two pounds...

The team concluded that before the child was placed in the pot and buried, someone put the copper coin into its hand. Many cultures in antiquity have buried their dead with coins as a way to pay a mythical ferryman to take their souls into the afterlife.

In this case, the copper’s antimicrobial properties protected the child’s hand from decay. Along with the conditions inside the vessel, it helped mummify the baby’s grasp. The team thinks this child’s burial may be one of the first reported cases in the scientific literature of copper-driven mummification. 
The rest of the story is at The New York Times.
18 May 14:12

The Pentagon Can’t Account for $21 Trillion (That’s Not a Typo)

by Donnal Walter

By Lee Camp, May 14, 2018, TruthDig.

Then-Secretary of Defense Robert Gates during a 2008 visit to Kosovo with U.S. Army troops on foot patrol in the town of Gnjilane. (The U.S. Army / CC BY 2.0)

Twenty-one trillion dollars.

The Pentagon’s own numbers show that it can’t account for $21 trillion. Yes, I mean trillion with a “T.” And this could change everything.

But I’ll get back to that in a moment.

There are certain things the human mind is not meant to do. Our complex brains cannot view the world in infrared, cannot spell words backward during orgasm and cannot really grasp numbers over a few thousand. A few thousand, we can feel and conceptualize. We’ve all been in stadiums with several thousand people. We have an idea of what that looks like (and how sticky the floor gets).

But when we get into the millions, we lose it. It becomes a fog of nonsense. Visualizing it feels like trying to hug a memory. We may know what $1 million can buy (and we may want that thing), but you probably don’t know how tall a stack of a million $1 bills is. You probably don’t know how long it takes a minimum-wage employee to make $1 million.

That’s why trying to understand—truly understand—that the Pentagon spent 21 trillion unaccounted-for dollars between 1998 and 2015 washes over us like your mother telling you that your third cousin you met twice is getting divorced. It seems vaguely upsetting, but you forget about it 15 seconds later because … what else is there to do?

Twenty-one trillion.

But let’s get back to the beginning. A couple of years ago, Mark Skidmore, an economics professor, heard Catherine Austin Fitts, former assistant secretary in the Department of Housing and Urban Development, say that the Department of Defense Office of Inspector General had found $6.5 trillion worth of unaccounted-for spending in 2015. Skidmore, being an economics professor, thought something like, “She means $6.5 billion. Not trillion. Because trillion would mean the Pentagon couldn’t account for more money than the gross domestic product of the whole United Kingdom. But still, $6.5 billion of unaccounted-for money is a crazy amount.”

So he went and looked at the inspector general’s report, and he found something interesting: It was trillion! It was fucking $6.5 trillion in 2015 of unaccounted-for spending! And I’m sorry for the cursing, but the word “trillion” is legally obligated to be prefaced with “fucking.” It is indeed way more than the U.K.’s GDP.

Skidmore did a little more digging. As Forbes reported in December 2017, “[He] and Catherine Austin Fitts … conducted a search of government websites and found similar reports dating back to 1998. While the documents are incomplete, original government sources indicate $21 trillion in unsupported adjustments have been reported for the Department of Defense and the Department of Housing and Urban Development for the years 1998-2015.”

Let’s stop and take a second to conceive how much $21 trillion is (which you can’t because our brains short-circuit, but we’ll try anyway).

1. The amount of money supposedly in the stock market is $30 trillion.

2. The GDP of the United States is $18.6 trillion.

3. Picture a stack of money. Now imagine that that stack of dollars is all $1,000 bills. Each bill says “$1,000” on it. How high do you imagine that stack of dollars would be if it were $1 trillion? It would be 63 miles high.

4. Imagine you make $40,000 a year. How long would it take you to make $1 trillion? Well, don’t sign up for this task, because it would take you 25 million years (which sounds like a long time, but I hear that the last 10 million really fly by because you already know your way around the office, where the coffee machine is, etc.). A quick back-of-the-envelope check of these last two figures follows right after this list.
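Here is that quick check, in a few lines of Python (the only assumption being a bill thickness of roughly 0.1 mm):

# Back-of-the-envelope check of points 3 and 4; 0.1 mm per bill is an assumption.
trillion = 1_000_000_000_000

bills = trillion / 1_000                        # number of $1,000 bills in $1 trillion
stack_km = bills * 0.1e-3 / 1_000               # 0.1 mm per bill, converted to kilometers
print(f"stack height: about {stack_km:.0f} km, or {stack_km * 0.621:.0f} miles")

years = trillion / 40_000                       # years of work at $40,000 per year
print(f"time to earn it at $40k/year: {years / 1e6:.0f} million years")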

The human brain is not meant to think about a trillion dollars.

And it’s definitely not meant to think about the $21 trillion our Department of Defense can’t account for. These numbers sound bananas. They sound like something Alex Jones found tattooed on his backside by extraterrestrials.

But the $21 trillion number comes from the Department of Defense Office of Inspector General—the OIG. Although, as Forbes pointed out, “after Mark Skidmore began inquiring about OIG-reported unsubstantiated adjustments, the OIG’s webpage, which documented, albeit in a highly incomplete manner, these unsupported ‘accounting adjustments,’ was mysteriously taken down.”

Luckily, people had already grabbed copies of the report, which—for now—you can view here.

Here’s something else important from that Forbes article—which is one of the only mainstream media articles you can find on the largest theft in American history:

Given that the entire Army budget in fiscal year 2015 was $120 billion, unsupported adjustments were 54 times the level of spending authorized by Congress.

That’s right. The expenses with no explanation were 54 times the actual budget allotted by Congress. Well, it’s good to see Congress is doing 1/54th of its job of overseeing military spending (that’s actually more than I thought Congress was doing). This would seem to mean that 98 percent of every dollar spent by the Army in 2015 was unconstitutional.

So, pray tell, what did the OIG say caused all this unaccounted-for spending that makes Jeff Bezos’ net worth look like that of a guy jingling a tin can on the street corner?

“[The July 2016 inspector general] report indicates that unsupported adjustments are the result of the Defense Department’s ‘failure to correct system deficiencies.’”

They blame trillions of dollars of mysterious spending on a “failure to correct system deficiencies”? That’s like me saying I had sex with 100,000 wild hairless aardvarks because I wasn’t looking where I was walking.

Twenty-one trillion.

Say it slowly to yourself.

At the end of the day, there are no justifiable explanations for this amount of unaccounted-for, unconstitutional spending. Right now, the Pentagon is being audited for the first time ever, and it’s taking 2,400 auditors to do it. I’m not holding my breath that they’ll actually be allowed to get to the bottom of this.

But if the American people truly understood this number, it would change both the country and the world. It means that the dollar is sprinting down a path toward worthless. If the Pentagon is hiding spending that dwarfs the amount of tax dollars coming in to the federal government, then it’s clear the government is printing however much it wants and thinking there are no consequences. Once these trillions are considered, our fiat currency has even less meaning than it already does, and it’s only a matter of time before inflation runs wild.

It also means that any time our government says it “doesn’t have money” for a project, it’s laughable. It can clearly “create” as much as it wants for bombing and death. This would explain how Donald Trump’s military can drop well over 100 bombs a day that cost well north of $1 million each.

So why can’t our government also “create” endless money for health care, education, the homeless, veterans benefits and the elderly, to make all parking free and to pay the Rolling Stones to play stoop-front shows in my neighborhood? (I’m sure the Rolling Stones are expensive, but surely a trillion dollars could cover a couple of songs.)

Obviously, our government could do those things, but it chooses not to. Earlier this month, Louisiana sent eviction notices to 30,000 elderly people on Medicaid to kick them out of their nursing homes. Yes, a country that can vomit trillions of dollars down a black hole marked “Military” can’t find the money to take care of our poor elderly. It’s a repulsive joke.

Twenty-one trillion.

Former Secretary of Defense Robert Gates spoke about how no one knows where the money is flying in the Pentagon. In a barely reported speech in 2011, he said, “My staff and I learned that it was nearly impossible to get accurate information and answers to questions such as, ‘How much money did you spend?’ and ‘How many people do you have?’”

They can’t even find out how many people work for a specific department?

Note for anyone looking for a job: Just show up at the Pentagon and tell them you work there. It doesn’t seem like they’d have much luck proving you don’t.

For more on this story, check out David DeGraw’s excellent reporting at ChangeMaker.media, because the mainstream corporate media are mouthpieces for the weapons industry. They are friends with benefits of the military-industrial complex. I have seen basically nothing from the mainstream corporate media concerning this mysterious $21 trillion. I missed the time when CNN’s Wolf Blitzer said that the money we dump into war and death—either the accounted-for money or the secretive trillions—could end world hunger and poverty many times over. There’s no reason anybody needs to be starving or hungry or unsheltered on this planet, but our government seems hellbent on proving that it stands for nothing but profiting off death and misery. And our media desperately want to show they stand for nothing but propping up our morally bankrupt empire.

When the media aren’t actively promoting war, they’re filling the airwaves with shit, so the entire country can’t even hear itself think. Our whole mindscape is filled to the brim with nonsense and vacant celebrity idiocy. Then, while no one is looking, the largest theft humankind has ever seen is going on behind our backs—covered up under the guise of “national security.”

Twenty-one trillion.

Don’t forget.

If you think this column is important, please share it. And check out Lee Camp’s weekly TV show, “Redacted Tonight.”

Truthdig has launched a reader-funded project—its first ever—to document the Poor People’s Campaign. Please help us by making a donation.

The post The Pentagon Can’t Account for $21 Trillion (That’s Not a Typo) appeared first on World Beyond War . . ..

15 May 20:07

What is computational neuroscience? (XXX) Is the brain a computer?

by romain

It is sometimes stated as an obvious fact that the brain carries out computations. Computational neuroscientists sometimes see themselves as looking for the algorithms of the brain. Is it true that the brain implements algorithms? My point here is not to answer this question, but rather to show that the answer is not self-evident, and that it can only be true (if at all) at a fairly abstract level.

One line of argumentation is that models of the brain that we find in computational neuroscience (neural network models) are algorithmic in nature, since we simulate them on computers. And wouldn’t it be a sort of vitalistic claim to say that neural networks cannot be (in principle) simulated on a computer?

There is an important confusion in this argument. At a low level, neural networks are modelled biophysically as dynamical systems, in which the temporality corresponds to the actual temporality of the real world (as opposed to the discrete temporality of algorithms). Mathematically, those are typically differential equations, possibly hybrid systems (i.e. coupled by timed pulses), in which time is a continuous variable. Those models can of course be simulated on a computer using discretization schemes. For example, we choose a time step and compute the state of the network at time t+dt from the state at time t. This algorithm, however, implements a simulation of the model; it is not the model that implements the algorithm. The discretization is nowhere to be found in the model. The model itself, being a continuous-time dynamical system, is not algorithmic in nature. It is not described as a discrete sequence of operations; it is only the simulation of the model that is algorithmic, and different algorithms can simulate the same model.
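To make the distinction concrete, here is a sketch of a generic leaky rate model (equations and parameters invented purely for illustration, not taken from any particular paper): the forward-Euler time step dt appears only in the simulation loop, never in the model, and different choices of dt are different algorithms approximating the same continuous-time dynamical system.

import numpy as np

# Continuous-time model: tau * dr/dt = -r + W @ tanh(r) + I   (a generic rate network)
tau = 0.02                                      # seconds
W = np.array([[0.0, 0.5],
              [-0.4, 0.0]])                     # illustrative coupling
I = 0.3                                         # constant input

def simulate(dt, T=1.0):
    """Forward-Euler simulation; dt belongs to the algorithm, not to the model."""
    r = np.zeros(2)
    for _ in range(int(T / dt)):
        drdt = (-r + W @ np.tanh(r) + I) / tau
        r = r + dt * drdt                       # the discretization lives only here
    return r

# Two different algorithms, one model: the discrepancy shrinks as dt -> 0.
print(np.abs(simulate(dt=1e-3) - simulate(dt=1e-4)))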

If we put this confusion aside, then the claim that neural networks implement algorithms becomes not that obvious. It means that trajectories of the dynamical system can be mapped to the discrete flow of an algorithm. This requires: 1) identifying states with representations of some variables (for example stimulus properties, symbols); 2) identifying trajectories from one state to another as specific operations. In addition to that, for the algorithmic view to be of any use, there should be a sequence of operations, not just one operation (i.e., describing the output as a function of the input is not an algorithmic description).

A key difficulty in this identification is temporality: the state of the dynamical system changes continuously, so how can this be mapped to discrete operations? A typical approach in neuroscience is to consider not states but properties of trajectories. For example, one would consider the average firing rate in a population of neurons in a given time window, and the rate of another population in another time window. The relation between these two rates in the context of an experiment would define an operation. As stated above, a sequence of such relations should be identified in order to qualify as an algorithm. But this mapping seems only possible within a feedforward flow; coupling poses a greater challenge for an algorithmic description. No known nervous system, however, has a feedforward connectome.

I am not claiming here that the function of the brain (or mind) cannot possibly be described algorithmically. Probably some of it can be. My point is rather that a dynamical system is not generically algorithmic. A control system, for example, is typically not algorithmic (see the detailed example of Tim van Gelder, What might cognition be if not computation?). Thus a neural dynamical system can only be seen as an algorithm at a fairly abstract level, which can probably address only a restricted subset of its function. It could be that control, which also attaches function to dynamical systems, is a more adequate metaphor of brain function than computation. Is the brain a computer? Given the rather narrow application of the algorithmic view, the reasonable answer should be: quite clearly not (maybe part of cognition could be seen as computation, but not brain function generally).

15 May 01:22

Protein synthesis in brain tissue is much higher than previously thought.

by mdbownds@wisc.edu (Deric Bownds)
Smeets et al. use stable isotope methodology during temporal lobe resection surgery to demonstrate protein synthesis rates exceeding 3% per day, suggesting that brain tissue plasticity is far greater than previously assumed.
All tissues undergo continuous reconditioning via the complex orchestration of changes in tissue protein synthesis and breakdown rates. Skeletal muscle tissue has been well studied in this regard, and has been shown to turn over at a rate of 1–2% per day in vivo in humans. Few data are available on protein synthesis rates of other tissues. Because of obvious limitations with regard to brain tissue sampling, no study has ever measured brain protein synthesis rates in vivo in humans. Here, we applied stable isotope methodology to directly assess protein synthesis rates in neocortex and hippocampus tissue of six patients undergoing temporal lobectomy for drug-resistant temporal lobe epilepsy (Clinical trial registration: NTR5147). Protein synthesis rates of neocortex and hippocampus tissue averaged 0.17 ± 0.01 and 0.13 ± 0.01%/h, respectively. Brain tissue protein synthesis rates were 3–4-fold higher than skeletal muscle tissue protein synthesis rates (0.05 ± 0.01%/h; P < 0.001). In conclusion, the protein turnover rate of the human brain is much higher than previously assumed.
09 May 05:41

“We continuously increased the number of animals until statistical significance was reached to support our conclusions” . . . I think this is not so bad, actually!

by Andrew

Jordan Anaya pointed me to this post, in which Casper Albers shared this snippet from a recently published article in Nature Communications:

The subsequent twitter discussion is all about “false discovery rate” and statistical significance, which I think completely misses the point.

The problems

Before I get to why I think the quoted statement is not so bad, let me review various things that these researchers seem to be doing wrong:

1. “Until statistical significance was reached”: This is a mistake. Statistical significance does not make sense as an inferential or decision rule.

2. “To support our conclusions”: This is a mistake. The point of an experiment should be to learn, not to support a conclusion. Or, to put it another way, if they want support for their conclusion, that’s fine, but that has nothing to do with statistical significance.

3. “Based on [a preliminary data set] we predicted that about 20 unites are sufficient to statistically support our conclusions”: This is a mistake. The purpose of a pilot study is to demonstrate the feasibility of an experiment, not to estimate the treatment effect.

OK, so, yes, based on the evidence of the above snippet, I think this paper has serious problems.

Sequential data collection is ok

That all said, I don’t have a problem, in principle, with the general strategy of continuing data collection until the data look good.

I’ve thought a lot about this one. Let me try to explain here.

First, the Bayesian argument, discussed for example in chapter 8 of BDA3 (chapter 7 in earlier editions). As long as the factors that predict data inclusion are also included in your model, you should be ok. In this case, the relevant variable is time: if there’s any possibility of time trends in your underlying process, you want to allow for that in your model. A sequential design can yield a dataset that is less robust to model assumptions, and a sequential design changes how you’ll do model checking (see chapter 6 of BDA), but from a Bayesian standpoint, you can handle these issues. Gathering data until they look good is not, from a Bayesian perspective, a “questionable research practice.”

Next, the frequentist argument, which can be summarized as, “What sorts of things might happen (more formally, what is the probability distribution of your results) if you as a researcher follow a sequential data collection rule?”

Here’s what will happen. If you collect data until you attain statistical significance, then you will attain statistical significance, unless you have to give up first because you run out of time or resources. But . . . so what? Statistical significance by itself doesn’t tell you anything at all. For one thing, your result might be statistically significant in the unexpected direction, so it won’t actually confirm your scientific hypothesis. For another thing, we already know the null hypothesis of zero effect and zero systematic error is false, so we know that with enough data you’ll find significance.

Now, suppose you run your experiment a really long time and you end up with an estimated effect size of 0.002 with a standard error of 0.001 (on some scale in which an effect of 0.1 is reasonably large). Then (a) you’d have to say whatever you’ve discovered is trivial, (b) it could easily be explained by some sort of measurement bias that’s crept into the experiment, and (c) in any case, if it’s 0.002 on this group of people, it could well be -0.001 or -0.003 on another group. So in that case you’ve learned nothing useful, except that the effect almost certainly isn’t large—and that thing you’ve learned has nothing to do with the statistical significance you’ve obtained.

Or, suppose you run an experiment a short time (which seems to be what happened here) and get an estimate of 0.4 with a standard error of 0.2. Big news, right? No. Enter the statistical significance filter and type M errors (see for example section 2.1 here). That’s a concern. But, again, it has nothing to do with sequential data collection. The problem would still be there with a fixed sample size (as we’ve seen in zillions of published papers).
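Here’s a little simulation (with made-up numbers: a true effect of 0.02, unit noise, and collection capped at 500 observations) showing the significance filter at work under the collect-until-significant rule: every run that stops early reports a “significant” estimate, and those estimates are an order of magnitude larger than the true effect.

import numpy as np

rng = np.random.default_rng(1)
true_effect, n_start, n_max = 0.02, 10, 500

significant_estimates = []
for _ in range(200):                            # 200 simulated "studies"
    data = list(rng.normal(true_effect, 1.0, n_start))
    while True:
        est = np.mean(data)
        se = np.std(data, ddof=1) / np.sqrt(len(data))
        if abs(est) > 1.96 * se or len(data) >= n_max:
            break
        data.append(rng.normal(true_effect, 1.0))   # keep collecting, one unit at a time
    if abs(est) > 1.96 * se:
        significant_estimates.append(est)

print(f"{len(significant_estimates)} of 200 runs reached significance")
print(f"mean |estimate| among them: {np.mean(np.abs(significant_estimates)):.2f} "
      f"(true effect is {true_effect})")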

Summary

Based on the snippet we’ve seen, there are lots of reasons to be skeptical of the paper under discussion. But I think the criticism based on sequential data collection misses the point. Yes, sequential data collection gives the researchers one more forking path. But I think the proposal to correct for this with some sort of type 1 or false discovery adjustment rule is essentially impossible and would be pointless even if it could be done, as such corrections are all about the uninteresting null hypothesis of zero effect and zero systematic error. Better to just report and analyze the data and go from there—and recognize that, in a world of noise, you need some combination of good theory and good measurement. Statistical significance isn’t gonna save your ass, no matter how it’s computed.

P.S. Clicking through, I found this amusing article by Casper Albers, “Valid Reasons not to participate in open science practices.” As they say on the internet: Read the whole thing.

P.P.S. Next open slot is 6 Nov but I thought I’d post this right away since the discussion is happening online right now.

The post “We continuously increased the number of animals until statistical significance was reached to support our conclusions” . . . I think this is not so bad, actually! appeared first on Statistical Modeling, Causal Inference, and Social Science.

08 May 17:25

So Basically Murder

by noreply@blogger.com (Atrios)
I've long said I don't think "safety" is really the concern about self-driving cars in that if they work in a useful way they'll be safe, but that view didn't address the "actually they don't work but they're on the streets anyway" issue.


Uber has concluded the likely reason why one of its self-driving cars fatally struck a pedestrian earlier this year, according to tech outlet The Information. The car’s software recognized the victim, Elaine Herzberg, standing in the middle of the road, but decided it didn’t need to react right away, the outlet reported, citing two unnamed people briefed on the matter.

This isn't some trolley problem wank, this is just what happens when your concept doesn't work. You have to dial down the safety provisions because otherwise your dumb car is going to be bad. I mean bad in the sense of not being very useful. Killing people is, also, too, bad.
07 May 20:05

Herman-Kluk propagator is free from zero-point energy leakage. (arXiv:1805.01686v1 [physics.chem-ph])

by Max Buchholz, Erika Fallacara, Fabian Gottwald, Michele Ceotto, Frank Grossmann, Sergei D. Ivanov

Semiclassical techniques constitute a promising route to approximate quantum dynamics based on classical trajectories starting from a quantum-mechanically correct distribution. One of their main drawbacks is the so-called zero-point energy (ZPE) leakage, that is, the artificial redistribution of energy from modes with high frequency and thus high ZPE to those with low frequency and low ZPE, due to classical equipartition. Here, we show that an elaborate semiclassical formalism based on the Herman-Kluk propagator is free from ZPE leakage despite utilizing purely classical propagation. This finding opens the road to correct dynamical simulations of systems with a multitude of degrees of freedom that cannot be treated fully quantum-mechanically due to the exponential increase of the numerical effort.

04 May 22:45

A quick rule of thumb is that when someone seems to be acting like a jerk, an economist will defend the behavior as being the essence of morality, but when someone seems to be doing something nice, an economist will raise the bar and argue that he’s not being nice at all.

by Andrew

Like Pee Wee Herman, act like a jerk
And get on the dance floor let your body work

I wanted to follow up on a remark from a few years ago about the two modes of pop-economics reasoning:

You take some fact (or stylized fact) about the world, and then you either (1) use people-are-rational-and-who-are-we-to-judge-others reasoning to explain why some weird-looking behavior is in fact rational, or (2) use technocratic reasoning to argue that some seemingly reasonable behavior is, in fact, inefficient.

The context, as reported by Felix Salmon, was a Chicago restaurant owned by Grant Achatz whose tickets are sold “at a fixed price and are then free to be resold at an enormous markup on the secondary market.” Economists Justin Wolfers and Betsey Stevenson objected. They wanted Achatz to increase his prices. By keeping prices low, he was, apparently, violating the principles of democracy: “‘It’s democratic in theory, but not in practice,’ said Wolfers . . . Bloomberg’s Mark Whitehouse concludes that Next should ‘consider selling tickets to the highest bidder and giving the extra money to charity.'”

I summarized as follows:

In this case, Wolfers and Whitehouse are going through some contortions to argue (2). In a different mood, however, they might go for (1). I don’t fully understand the rules for when people go with argument 1 and when they go with 2, but a quick rule of thumb is that when someone seems to be acting like a jerk, an economist will defend the behavior as being the essence of morality, but when someone seems to be doing something nice, an economist will raise the bar and argue that he’s not being nice at all.

I’m guessing that if Grant Achatz were to implement the very same pricing policy but talk about how he’s doing it solely out of greed, that a bunch of economists would show up and explain how this was actually the most moral and democratic option.

In comments, Alex wrote:

(1) and (2) are typically distinguished in economics textbooks as examples of positive and normative reasoning, respectively. The former aims at describing the observed behavior in terms of a specific model (e.g. rationality), seemingly without any attempt at subjective judgement. The latter takes the former as given and applies a subjective social welfare function to the outcomes in order to judge, whether the result could be improved upon with, say, different institutional arrangement or a policy intervention.

To which I replied:

Yup, and the usual rule seems to be to use positive reasoning when someone seems to be acting like a jerk, and normative reasoning when someone seems to be doing something nice. This seems odd to me. Why assume that, just because someone is acting like a jerk, that he is acting so efficiently that his decisions can’t be improved, only understood? And why assume that, just because someone seems to be doing something nice, that “unintended consequences” etc. ensure he’s not doing a good job of it. To me, this is contrarianism run wild. I’m not saying that Wolfers is a knee-jerk contrarian; rather I’m guessing that he’s following default behaviors without thinking much about it.

This is an awkward topic to write about. I’m not saying I think economists are mean people; they just seem to have a default mode of thought which is a little perverse.

In the traditional view of Freudian psychiatrists, no behavior can be taken at face value, and it takes a Freudian analyst to decode the true meaning. Similarly, in the world of pop economics, or neoclassical economics, any behavior that might seem good, or generous (for example, not maxing out your prices at a popular restaurant), is seen as damaging to the public good—“unintended consequences” and all that—while any behavior that might seem mean, or selfish, is actually for the greater good.

Let’s unpack this in five directions, from the perspective of the philosophy of science, the sociology of scientific professions, politics, the logic of rhetoric, and the logic of statistics.

From the standpoint of the philosophy of science, pop economics or neoclassical economics is, like Freudian theory, unfalsifiable. Any behavior can be explained as rational (motivating economists’ mode 1 above) or as being open to improvement (motivating economists’ mode 2 of reasoning). Economists can play two roles: (1) to reassure people that the current practices are just fine and to use economic theory to explain the hidden benefits arising from seemingly irrational or unkind decisions; or (2) to improve people’s lives through rational and cold but effective reasoning (the famous “thinking like an economist”). For flexible Freudians, just about any behavior can be explained by just about any childhood trauma; and for modern economists, just about any behavior can be interpreted as a rational adaptation—or not. In either case, specific applications of the method can be falsified—after all, Freudians and neoclassical economists alike are free to make empirically testable predictions—but the larger edifice is unfalsifiable, as any erroneous prediction can simply be explained as an inappropriate application of the theory.

From a sociological perspective, the flexibility of pop-economics reasoning, like the flexibility of Freudian theory, can be seen as a plus, in that it implies a need for trained specialists, priests who can know which childhood trauma to use as an explanation, or who can decide whether to use economics’s explanation 1 or 2. Again, recall economists’ claims that they think in a different, more piercing, way than other scholars, an attitude that is reminiscent of old-school Freudians’ claim to look squarely at the cold truths of human nature that others can’t handle.

The political angle is more challenging. Neoclassical economics is sometimes labeled as conservative, in that explanation 1 (the everything-is-really-ok story) can be used to justify existing social and economic structures; on the other hand, such arguments can also be used to justify existing structures with support on the left. And, for that matter, economist Justin Wolfers, quoted above, is I believe a political liberal in the U.S. context. So it’s hard for me to put this discussion on the left or the right; maybe best just to say that pop-econ reasoning is flexible enough to go in either political direction, or even both at once.

When it comes to analyzing the logic of economic reasoning, I keep thinking about Albert Hirschman’s book, The Rhetoric of Reaction. I feel that the ability to bounce back and forth between arguments 1 and 2 is part of what gives pop economics, or microeconomics more generally, some of its liveliness and power. If you only apply argument 1—explaining away all of human behavior, however ridiculous, as rational and desirable, then you’re kinda talking yourself out of a job: as an economist, you become a mere explainer, not a problem solver. On the other hand, if you only apply argument 2—studying how to approach optimal behavior in situation after situation—then you become a mere technician. By having the flexibility of which argument to use in any given setting, you can be unpredictable. Unpredictability is a source of power and can also make you more interesting.

Finally, I can give a statistical rationale for the rule of thumb given in the title of this post. It’s Bayesian reasoning; that is, partial pooling. If you look at the population distribution of all the things that people do, some of these actions have positive effects, some have negative effects, and most effects are small. So if you receive a noisy signal that someone did something positive, the appropriate response is to partially pool toward zero and to think of reasons why this apparently good deed was, on net, not so wonderful at all. Conversely, when you hear about something that sounds bad, you can partially pool toward zero from the other direction.
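In its simplest normal-normal form (with numbers picked purely for illustration), that partial-pooling calculation is one line: a noisy report of a big effect gets pulled most of the way back toward zero.

# Normal-normal shrinkage: effects ~ N(0, 0.1^2) in the population, report has s.e. 0.3.
prior_sd, se, reported = 0.1, 0.3, 0.5

weight = prior_sd**2 / (prior_sd**2 + se**2)    # how much the noisy report is trusted
print(f"reported effect {reported}, partially pooled estimate {weight * reported:.2f}")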

Just look at the crowd. Say, “I meant to do that.”

The post A quick rule of thumb is that when someone seems to be acting like a jerk, an economist will defend the behavior as being the essence of morality, but when someone seems to be doing something nice, an economist will raise the bar and argue that he’s not being nice at all. appeared first on Statistical Modeling, Causal Inference, and Social Science.

02 May 19:42

On Unlimited Sampling

by Igor


Ayush Bhandari just let me know about the interesting approach of Unlimited Sampling in an email exchange:

...In practice, ADCs clip or saturate whenever the amplitude of signal x exceeds ADC threshold L. Typical solution is to de-clip the signal for which purpose various methods have been proposed.

Based on new ADC hardware which allows for sampling using the principle
y = mod(x,L)

where x is bandlimited and L is the ADC threshold, we show that sampling about \pi e (~10) times faster than the Nyquist rate guarantees recovery of x from y. For this purpose we outline a new, stable recovery procedure.

Paper and slides are here.

There is also the PhysOrg coverage. Thanks Ayush! Here is the paper:


Shannon's sampling theorem provides a link between the continuous and the discrete realms, stating that bandlimited signals are uniquely determined by their values on a discrete set. This theorem is realized in practice using so-called analog-to-digital converters (ADCs). Unlike Shannon's sampling theorem, the ADCs are limited in dynamic range. Whenever a signal exceeds some preset threshold, the ADC saturates, resulting in aliasing due to clipping. The goal of this work is to analyze an alternative approach that does not suffer from these problems. Our work is based on recent developments in ADC design, which allow for ADCs that reset rather than saturate, thus producing modulo samples. An open problem that remains is: Given such modulo samples of a bandlimited function as well as the dynamic range of the ADC, how can the original signal be recovered and what are the sufficient conditions that guarantee perfect recovery? In this work, we prove such sufficiency conditions and complement them with a stable recovery algorithm. Our results are not limited to certain amplitude ranges; in fact, even the same circuit architecture allows for the recovery of arbitrarily large amplitudes as long as some estimate of the signal norm is available when recovering. Numerical experiments that corroborate our theory indeed show that it is possible to perfectly recover functions that take values that are orders of magnitude higher than the ADC's threshold.
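Here is a toy illustration of the folding model and of why oversampling helps. This is not the recovery procedure of the paper (which uses higher-order differences and comes with stability guarantees); it is just the naive unwrapping idea, under the assumption that consecutive samples differ by less than the threshold:

import numpy as np

lam = 1.0                                       # ADC threshold; folding range is [-lam, lam)

def fold(x, lam):
    """Centered modulo: the self-reset nonlinearity of the ADC."""
    return np.mod(x + lam, 2 * lam) - lam

# A smooth test signal sampled far above its Nyquist rate.
t = np.linspace(0, 1, 2000)
x = 4.0 * np.sin(2 * np.pi * 3 * t) + 2.5 * np.cos(2 * np.pi * 5 * t)
y = fold(x, lam)                                # modulo samples never exceed the threshold

# Dense sampling => wrapped first differences equal the true first differences.
dx = fold(np.diff(y), lam)
x_rec = np.concatenate(([y[0]], y[0] + np.cumsum(dx)))

offset = x[0] - x_rec[0]                        # unknown multiple of 2*lam in practice
print(np.max(np.abs(x_rec + offset - x)))       # essentially zero: recovered up to that constant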

h/t Laurent.
30 Apr 21:48

Classification of Phase Transitions by Microcanonical Inflection-Point Analysis

by Kai Qi and Michael Bachmann

Author(s): Kai Qi and Michael Bachmann

A statistical analysis method allows for the unambiguous identification of a phase transition via inflection points of the system’s microcanonical entropy and its derivatives.


[Phys. Rev. Lett. 120, 180601] Published Mon Apr 30, 2018

30 Apr 19:12

Get Your War On

by noreply@blogger.com (Atrios)
We all lived through that hell decade of the aughts. We weirdly managed to have some fun! When the history of this era is written, it probably will not include the most important artistic contributions of that time, because that's the way the world works.

David Rees captured that moment - in the moment - in a way that nobody else did.

While we are on rerun Sunday, take a quick look.



I don't know everything about David - we almost met a couple of times - but I remember him fundraising and donating to landmine removal in Afghanistan. There is an obvious point about where the money is.