Shared posts

20 Oct 22:12

Space may not be as immaterial as we thought

by Sabine Hossenfelder
Galaxy slime. Physicists have gathered evidence that space-time can behave like a fluid. Mathematical evidence, that is, but still evidence. If this relation isn’t a coincidence, then space-time – like a fluid – may have substructure. We shouldn’t speak of space and time as if the two were distant cousins. We have known at least since Einstein that space and time are inseparable,
20 Oct 21:27

Social behaviour shapes hypothalamic neural ensemble representations of conspecific sex

by Ryan Remedios

Social behaviour shapes hypothalamic neural ensemble representations of conspecific sex

Nature 550, 7676 (2017). doi:10.1038/nature23885

Authors: Ryan Remedios, Ann Kennedy, Moriel Zelikowsky, Benjamin F. Grewe, Mark J. Schnitzer & David J. Anderson

All animals possess a repertoire of innate (or instinctive) behaviours, which can be performed without training. Whether such behaviours are mediated by anatomically distinct and/or genetically specified neural pathways remains unknown. Here we report that neural representations within the mouse hypothalamus that underlie innate social behaviours are shaped by social experience. Oestrogen receptor 1-expressing (Esr1+) neurons in the ventrolateral subdivision of the ventromedial hypothalamus (VMHvl) control mating and fighting in rodents. We used microendoscopy to image Esr1+ neuronal activity in the VMHvl of male mice engaged in these social behaviours. In sexually and socially experienced adult males, divergent and characteristic neural ensembles represented male versus female conspecifics. However, in inexperienced adult males, male and female intruders activated overlapping neuronal populations. Sex-specific neuronal ensembles gradually separated as the mice acquired social and sexual experience. In mice permitted to investigate but not to mount or attack conspecifics, ensemble divergence did not occur. However, 30 minutes of sexual experience with a female was sufficient to promote the separation of male and female ensembles and to induce an attack response 24 h later. These observations uncover an unexpected social experience-dependent component to the formation of hypothalamic neural assemblies controlling innate social behaviours. More generally, they reveal plasticity and dynamic coding in an evolutionarily ancient deep subcortical structure that is traditionally viewed as a ‘hard-wired’ system.

20 Oct 21:07

A brief critique of predictive coding

by admin

Predictive coding is becoming a popular theory in neuroscience (see for example Clark 2013). In a nutshell, the general idea is that brains encode predictions of their sensory inputs. This is an appealing idea because superficially, it makes a lot of sense: functionally, the only reason why you would want to process sensory information is if it might impact your future, so it makes sense to try to predict your sensory inputs.

There are substantial problems in the details of predictive coding theories, for example with the arbitrariness of the metric by which you judge that your prediction matches sensory inputs (what is important?), or the fact that predictive coding schemes encode both noise and signal. But I want to focus on the more fundamental problems. One has to do with “coding”, the other with “predictive”.

It makes sense that brains anticipate. But does it make sense that brains code? Coding is a metaphor of a communication channel, and this is generally not a great metaphor for what the brain might do, unless you fully embrace dualism. I discuss this at length in a recent paper (Is coding a relevant metaphor for the brain?) so I won’t repeat the entire argument here. Predictive coding is a branch of efficient coding, so the same fallacy underlies its logic: 1) neurons encode sensory inputs; 2) living organisms are efficient; => brains must encode efficiently. (1) is trivially true in the sense that one can define a mapping from sensory inputs to neural activity. (2) is probably true to some extent (evolutionary arguments). So the conclusion follows. Critiques of efficient coding have focused on the “efficient” part: maybe the brain is not that efficient after all. But the error is elsewhere: living organisms are certainly efficient, but it doesn’t follow that they are efficient at coding. They might be efficient at surviving and reproducing, and it is not obvious that it entails coding efficiency (see the last part of the abovementioned paper for a counter-example). So the real strong assumption is there: the main function of the brain is to represent sensory inputs.

The second problem has to do with “predictive”. It makes sense that an important function of brains, or in fact of any living organism, is to anticipate (see the great Anticipatory Systems by Robert Rosen). But to what extent do predictive coding schemes actually anticipate? First, in practice, those are generally not prediction schemes but compression schemes, in the sense that they do not tell us what will happen next but what happens now. This is at least the case for the classical Rao & Ballard (1999) scheme. Neurons encode the difference between expected input and actual input: this is compression, not prediction. It uses a sort of prediction in order to compress: other neurons (in higher layers) produce predictions of the inputs to those neurons, but the term prediction is used in the sense that the inputs are not known to the higher-layer neurons, not that the “prediction” occurs before the inputs. Thus the term “predictive” is misleading because it is not used in a temporal sense.
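As a minimal illustration of this point, here is a small numerical sketch (with invented sizes, weights and learning rate, loosely in the spirit of the Rao & Ballard scheme rather than a faithful reproduction of it): a higher layer holds a state r, its “prediction” W r is subtracted from the current input x, and only the residual error drives the update. Nothing in it refers to the future; the “prediction” is a reconstruction of the present input.

```python
import numpy as np

# Minimal sketch of predictive coding as compression: the lower layer transmits
# only the error between the actual input x and the higher layer's reconstruction
# W @ r, and the higher-layer state r is updated to shrink that error.
# All sizes, weights and the learning rate are illustrative.

rng = np.random.default_rng(0)
n_input, n_hidden = 16, 4
W = rng.normal(size=(n_input, n_hidden)) / np.sqrt(n_input)  # generative weights (higher -> lower)
x = rng.normal(size=n_input)                                 # the *current* sensory input

r = np.zeros(n_hidden)   # higher-layer state, source of the "prediction"
lr = 0.1
for _ in range(500):
    error = x - W @ r            # what the error units would transmit
    r += lr * (W.T @ error)      # gradient step that reduces the reconstruction error

print("residual error norm:", np.linalg.norm(x - W @ r))
```

Note that the loop only ever reconstructs the input it is already given; a genuinely temporal prediction would have to be compared against a later input.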

However, it is relatively easy to imagine how predictive coding might be about temporal predictions, although the neural implementation is not straightforward (delays etc). So I want to make a deeper criticism. I started by claiming that it is useful to predict sensory inputs. I am taking this back (I can because I said it was superficial reasoning). It is not useful to know what will happen. What is useful is to know what might happen, depending on what you do. If there is nothing you can do about the future, what is the functional use of predicting it? So what is useful is to predict the future conditionally to a different set of potential actions. This is about manipulating models of the world, not representing the present.

09 Oct 21:03

Particle transport across a channel via an oscillating potential. (arXiv:1710.02346v1 [physics.bio-ph])

by Yizhou Tan, Leonardo Dagdug, Jannes Gladrow, Ulrich F. Keyser, Stefano Pagliara

Membrane protein transporters alternate their substrate-binding sites between the extracellular and cytosolic side of the membrane according to the alternating access mechanism. Inspired by this intriguing mechanism devised by nature, we study particle transport through a channel coupled with an energy well that oscillates its position between the two entrances of the channel. We optimize particle transport across the channel by adjusting the oscillation frequency. At the optimal oscillation frequency, the translocation rate through the channel is a hundred times higher than for free diffusion across the channel. Our findings reveal the effect of time-dependent potentials on particle transport across a channel and will be relevant for membrane transport and microfluidics applications.
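To make the setup concrete, here is a rough, purely illustrative Brownian-dynamics sketch of this kind of model (all parameters, units and the shape of the well are invented; it is not the model used in the paper): a particle diffuses along a channel while an attractive Gaussian well jumps between the two entrances at a chosen frequency, and one simply counts crossings.

```python
import numpy as np

# Toy overdamped Brownian dynamics in a channel [0, L] with an attractive Gaussian
# well whose centre jumps between the two entrances at a given frequency.
# Everything here (parameters, well shape, boundary handling) is illustrative only.

rng = np.random.default_rng(2)
L, dt, D = 5.0, 1e-3, 1.0            # channel length, time step, diffusion coefficient
depth, width = 5.0, 1.0              # depth and width of the attractive well

def force(x, centre):
    # F = -dU/dx for U(x) = -depth * exp(-(x - centre)**2 / (2 * width**2))
    return -depth * (x - centre) / width**2 * np.exp(-(x - centre) ** 2 / (2 * width**2))

def crossings(freq, steps=100_000):
    """Count left-to-right translocations for a well oscillating at `freq`."""
    x, count = 0.0, 0
    for i in range(steps):
        centre = 0.0 if (i * dt * freq) % 1.0 < 0.5 else L   # well jumps between entrances
        x += force(x, centre) * dt + np.sqrt(2 * D * dt) * rng.normal()
        if x > L:                     # crossed the channel: count it, reinject on the left
            count, x = count + 1, 0.0
        x = max(x, 0.0)               # reflecting wall at the left entrance
    return count

for f in (0.1, 1.0, 10.0):
    print(f"oscillation frequency {f}: {crossings(f)} crossings")
```

Scanning the frequency in a sketch like this is the analogue of the optimization described in the abstract.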

07 Oct 00:05

Vladimir Voevodsky, 1966 — 2017

by John Baez



Vladimir Voevodsky died last week. He won the Fields Medal in 2002 for proving the Milnor conjecture in a branch of algebra known as algebraic K-theory. He continued to work on this subject until he helped prove the more general Bloch–Kato conjecture in 2010.

Proving these results—which are too technical to easily describe to nonmathematicians!—required him to develop a dream of Grothendieck: the theory of motives. Very roughly, this is a way of taking the space of solutions of some polynomial equations and chopping it apart into building blocks. But this process of ‘chopping’ and also these building blocks, called ‘motives’, are very abstract—nothing easy to visualize.

It’s a bit like how a proton is made of quarks. You never actually see a quark in isolation, so you have to think very hard to realize they are there at all. But once you know this, a lot of things become clear.

This is wonderful, profound mathematics. But in the process of proving the Bloch-Kato conjecture, Voevodsky became tired of this stuff. He wanted to do something more useful… and more ambitious. He later said:

It was very difficult. In fact, it was 10 years of technical work on a topic that did not interest me during the last 5 of these 10 years. Everything was done only through willpower.

Since the autumn of 1997, I already understood that my main contribution to the theory of motives and motivic cohomology was made. Since that time I have been very consciously and actively searching. I was looking for a topic that I would deal with after I fulfilled my obligations related to the Bloch-Kato hypothesis.

I quickly realized that if I wanted to do something really serious, then I should make the most of my accumulated knowledge and skills in mathematics. On the other hand, seeing the trends in the development of mathematics as a science, I realized that the time is coming when the proof of yet another conjecture won’t have much of an effect. I realized that mathematics is on the verge of a crisis, or rather, two crises.

The first is connected with the separation of “pure” and applied mathematics. It is clear that sooner or later there will be a question about why society should pay money to people who are engaged in things that do not have any practical applications.

The second, less obvious, is connected with the complication of pure mathematics, which leads to the fact that, sooner or later, the articles will become too complicated for detailed verification and the process of accumulating undetected errors will begin. And since mathematics is a very deep science, in the sense that the results of one article usually depend on the results of many, many previous articles, this accumulation of errors for mathematics is very dangerous.

So I decided that I needed to try to do something that would help prevent these crises. For the first crisis, this meant that it was necessary to find an applied problem that required for its solution the methods of pure mathematics developed in recent years or even decades.

He looked for such a problem. He studied biology and found an interesting candidate. He worked on it very hard, but then decided he’d gone down a wrong path:

Since childhood I have been interested in natural sciences (physics, chemistry, biology), as well as in the theory of computer languages, and since 1997, I have read a lot on these topics, and even took several student and post-graduate courses. In fact, I “updated” and deepened, to a very large extent, the knowledge that I had. All this time I was looking for open problems that I recognized would be of interest to me and to which I could apply modern mathematics.

As a result, I chose, incorrectly as I now understand, the problem of recovering the history of populations from their modern genetic composition. I took on this task for a total of about two years, and in the end, already by 2009, I realized that what I was inventing was useless. In my life, so far, it was, perhaps, the greatest scientific failure. A lot of work was invested in the project, which completely failed. Of course, there was some benefit: I learned a lot of probability theory, which I knew badly, and also learned a lot about demography and demographic history.

But he bounced back! He came up with a new approach to the foundations of mathematics, and helped organize a team at the Institute for Advanced Study in Princeton to develop it further. This approach is now called homotopy type theory or univalent foundations. It’s fundamentally different from set theory. It treats the fundamental concept of equality in a brand new way! And it’s designed to be done with the help of computers.

It seems he started down this new road when the mathematician Carlos Simpson pointed out a serious mistake in a paper he’d written.

I think it was at this moment that I largely stopped doing what is called “curiosity-driven research” and started to think seriously about the future. I didn’t have the tools to explore the areas where curiosity was leading me and the areas that I considered to be of value and of interest and of beauty.

So I started to look into what I could do to create such tools. And it soon became clear that the only long-term solution was somehow to make it possible for me to use computers to verify my abstract, logical, and mathematical constructions. The software for doing this has been in development since the sixties. At the time, when I started to look for a practical proof assistant around 2000, I could not find any. There were several groups developing such systems, but none of them was in any way appropriate for the kind of mathematics for which I needed a system.

When I first started to explore the possibility, computer proof verification was almost a forbidden subject among mathematicians. A conversation about the need for computer proof assistants would invariably drift to Gödel’s incompleteness theorem (which has nothing to do with the actual problem) or to one or two cases of verification of already existing proofs, which were used only to demonstrate how impractical the whole idea was. Among the very few mathematicians who persisted in trying to advance the field of computer verification in mathematics during this time were Tom Hales and Carlos Simpson. Today, only a few years later, computer verification of proofs and of mathematical reasoning in general looks completely practical to many people who work on univalent foundations and homotopy type theory.

The primary challenge that needed to be addressed was that the foundations of mathematics were unprepared for the requirements of the task. Formulating mathematical reasoning in a language precise enough for a computer to follow meant using a foundational system of mathematics not as a standard of consistency to establish a few fundamental theorems, but as a tool that can be employed in everyday mathematical work. There were two main problems with the existing foundational systems, which made them inadequate. Firstly, existing foundations of mathematics were based on the languages of predicate logic and languages of this class are too limited. Secondly, existing foundations could not be used to directly express statements about such objects as, for example, the ones in my work on 2-theories.

Still, it is extremely difficult to accept that mathematics is in need of a completely new foundation. Even many of the people who are directly connected with the advances in homotopy type theory are struggling with this idea. There is a good reason: the existing foundations of mathematics—ZFC and category theory—have been very successful. Overcoming the appeal of category theory as a candidate for new foundations of mathematics was for me personally the most challenging.

Homotopy type theory is now a vital and exciting area of mathematics. It’s far from done, and to make it live up to Voevodsky’s dreams will require brand new ideas—not just incremental improvements, but actual sparks of genius. For some of the open problems, see Mike Shulman’s comment on the n-Category Café, and some replies to that.

I only met him a few times, but as far as I can tell Voevodsky was a completely unpretentious person. You can see that in the picture here.

He was also a very complex person. For example, you might not guess that he took great wildlife photos:

You also might not guess at this side of him:

In 2006-2007 a lot of external and internal events happened to me, after which my point of view on the questions of the “supernatural” changed significantly. What happened to me during these years, perhaps, can be compared most closely to what happened to Carl Jung in 1913-14. Jung called it “confrontation with the unconscious”. I do not know what to call it, but I can describe it in a few words. Remaining more or less normal, apart from the fact that I was trying to discuss what was happening to me with people whom I should not have discussed it with, I had in a few months acquired a very considerable experience of visions, voices, periods when parts of my body did not obey me, and a lot of incredible accidents. The most intense period was in mid-April 2007 when I spent 9 days (7 of them in the Mormon capital of Salt Lake City), never falling asleep for all these days.

Almost from the very beginning, I found that I could control many of these phenomena (voices, visions, various sensory hallucinations). So I was not scared and did not feel sick, but perceived everything as something very interesting, actively trying to interact with those “beings” in the auditory, visual and then tactile spaces that appeared around me (by themselves or by invoking them). I must say, probably, to avoid possible speculations on this subject, that I did not use any drugs during this period, tried to eat and sleep a lot, and drank diluted white wine.

Another comment: when I say “beings”, naturally I mean what in modern terminology are called complex hallucinations. The word “beings” emphasizes that these hallucinations themselves “behaved”, possessed a memory independent of my memory, and reacted to attempts at communication. In addition, they were often perceived in concert in various sensory modalities. For example, I played several times with a (hallucinated) ball with a (hallucinated) girl—and I saw this ball, and felt it with my palm when I threw it.

Despite the fact that all this was very interesting, it was very difficult. It happened for several periods, the longest of which lasted from September 2007 to February 2008 without breaks. There were days when I could not read, and days when coordination of movements was broken to such an extent that it was difficult to walk.

I managed to get out of this state due to the fact that I forced myself to start math again. By the middle of spring 2008 I could already function more or less normally and even went to Salt Lake City to look at the places where I wandered, not knowing where I was, in the spring of 2007.

In short, he was a genius akin to Cantor or Grothendieck, at times teetering on the brink of sanity, yet gripped by an immense desire for beauty and clarity, engaging in struggles that gripped his whole soul. From the fires of this volcano, truly original ideas emerge.

This last quote, and the first few quotes, are from some interviews in Russian, done by Roman Mikhailov, which Mike Stay pointed out to me. I used Google Translate and polished the results a bit:

Интервью Владимира Воеводского (часть 1), 1 July 2012. English version via Google Translate: Interview with Vladimir Voevodsky (Part 1).

Интервью Владимира Воеводского (часть 2), 5 July 2012. English version via Google Translate: Interview with Vladimir Voevodsky (Part 2).

The quote about the origins of ‘univalent foundations’ comes from his nice essay here:

• Vladimir Voevodsky, The origins and motivations of univalent foundations, 2014.

There’s also a good obituary of Voevodsky explaining the relation of his work to Grothendieck’s ideas in simple terms:

• Institute for Advanced Study, Vladimir Voevodsky 1966–2017, 4 October 2017.

The photograph of Voevodsky is from Andrej Bauer’s website:

• Andrej Bauer, Photos of mathematicians.

To learn homotopy type theory, try this great book:

Homotopy Type Theory: Univalent Foundations of Mathematics, The Univalent Foundations Program, Institute for Advanced Study.


04 Oct 03:03

Hierarchical modeling of molecular energies using a deep neural network. (arXiv:1710.00017v1 [stat.ML])

by Nicholas Lubbers, Justin S. Smith, Kipton Barros

We introduce the Hierarchically Interacting Particle Neural Network (HIP-NN) to model molecular properties from datasets of quantum calculations. Inspired by a many-body expansion, HIP-NN decomposes properties, such as energy, as a sum over hierarchical terms. These terms are generated from a neural network--a composition of many nonlinear transformations--acting on a representation of the molecule. HIP-NN achieves state-of-the-art performance on a dataset of 131k ground state organic molecules, and predicts energies with 0.26 kcal/mol mean absolute error. With minimal tuning, our model is also competitive on a dataset of molecular dynamics trajectories. In addition to enabling accurate energy predictions, the hierarchical structure of HIP-NN helps to identify regions of model uncertainty.
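The abstract’s central idea, a property written as a sum of terms read out at successive levels of a network, can be sketched very simply (this is a schematic of the idea only, not the HIP-NN architecture; all layer sizes, activations and weights below are made up):

```python
import numpy as np

# Schematic of a hierarchical decomposition: each layer of a small feed-forward
# network contributes one term, and the predicted property is the sum of the terms.
# Sizes, weights and the input "molecular representation" are all illustrative.

rng = np.random.default_rng(1)
x = rng.normal(size=64)                      # stand-in for a molecular representation
layer_sizes = [32, 32, 16, 8]

weights, readouts, prev = [], [], x.size
for n in layer_sizes:
    weights.append(rng.normal(size=(n, prev)) / np.sqrt(prev))
    readouts.append(rng.normal(size=n) / np.sqrt(n))
    prev = n

h, energy_terms = x, []
for W, w_out in zip(weights, readouts):
    h = np.tanh(W @ h)                       # nonlinear transformation at this level
    energy_terms.append(w_out @ h)           # this level's contribution to the property

total_energy = sum(energy_terms)             # property modelled as a sum over hierarchical terms
print(energy_terms, total_energy)
```

In a scheme like this, the relative size of the higher-order terms gives a handle on where the model is uncertain, which is roughly how the abstract says the hierarchy is used to flag model uncertainty.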

29 Sep 18:55

Spiking neurons with short-term synaptic plasticity form superior generative networks. (arXiv:1709.08166v3 [cs.NE] UPDATED)

by Luziwei Leng, Roman Martel, Oliver Breitwieser, Ilja Bytschok, Walter Senn, Johannes Schemmel, Karlheinz Meier, Mihai A. Petrovici

Spiking networks that perform probabilistic inference have been proposed both as models of cortical computation and as candidates for solving problems in machine learning. However, the evidence for spike-based computation being in any way superior to non-spiking alternatives remains scarce. We propose that short-term plasticity can provide spiking networks with distinct computational advantages compared to their classical counterparts. In this work, we use networks of leaky integrate-and-fire neurons that are trained to perform both discriminative and generative tasks in their forward and backward information processing paths, respectively. During training, the energy landscape associated with their dynamics becomes highly diverse, with deep attractor basins separated by high barriers. Classical algorithms solve this problem by employing various tempering techniques, which are both computationally demanding and require global state updates. We demonstrate how similar results can be achieved in spiking networks endowed with local short-term synaptic plasticity. Additionally, we discuss how these networks can even outperform tempering-based approaches when the training data is imbalanced. We thereby show how biologically inspired, local, spike-triggered synaptic dynamics based simply on a limited pool of synaptic resources can allow spiking networks to outperform their non-spiking relatives.

29 Sep 18:36

The University in ruins /1

by tomate

The materialistic transparency of culture 
has not made it more honest, only more vulgar 
(Th. Adorno)

From Bill Readings, The University in ruins:

I am attracted to Robert Young’s suggestion that the University, both inside and outside the market economy, should “function as a surplus that the economy cannot comprehend”. The binary opposition is there, and the University will deconstruct it by being neither simply useful nor simply useless. All very good, and very much what Humboldt wanted: indirect utility, direct uselessness for the state.

This sentence makes for a perfect answer to all those people who are trying to sex up University courses by bringing more industry into education. Anyway, the book by Readings is so good and dense with concepts that it’s almost impossible to choose which excerpts I like most; I’d have to re-write it all on my blog… Here’s how academia threatens the arts by normalizing them:

“Rather than posing a threat, the analyses of Cultural Studies risk providing new marketing opportunities for the system. Practices such as punk music and dress styles are offered their self-consciousness in academic essays, but the dignity they acquire is not that of authenticity but of marketability, be it in the cinema, on MTV, or as a site of tourist interest for visitors to London. […] To put it bluntly, the shock value of punk is not lasting in a cultural sense, since it soon becomes possible to be ‘excellently punk’.”

Here academia works as an advanced tool to extract culture from the context where it was born as a social practice, and to make it into a product of “culture” for a community of rich and educated people who have never been punk and never wished to be.


22 Sep 04:10

Location of the Mesopontine Neurons Responsible for Maintenance of Anesthetic Loss of Consciousness

by Minert, A., Yatziv, S.-L., Devor, M.

The transition from wakefulness to general anesthesia is widely attributed to suppressive actions of anesthetic molecules distributed by the systemic circulation to the cerebral cortex (for amnesia and loss of consciousness) and to the spinal cord (for atonia and antinociception). An alternative hypothesis proposes that anesthetics act on one or more brainstem or diencephalic nuclei, with suppression of cortex and spinal cord mediated by dedicated axonal pathways. Previously, we documented induction of an anesthesia-like state in rats by microinjection of small amounts of GABAA-receptor agonists into an upper brainstem region named the mesopontine tegmental anesthesia area (MPTA). Correspondingly, lesioning this area rendered animals resistant to systemically delivered anesthetics. Here, using rats of both sexes, we applied a modified microinjection method that permitted localization of the anesthetic-sensitive neurons with much improved spatial resolution. Microinjection at the MPTA hotspot thus identified, exposing 1900 or fewer neurons to muscimol, was sufficient to sustain whole-body general anesthesia; microinjection as little as 0.5 mm off-target was not. The GABAergic anesthetics pentobarbital and propofol were also effective. The GABA-sensitive cell cluster is centered on a tegmental (reticular) field traversed by fibers of the superior cerebellar peduncle. It has no specific nuclear designation and has not previously been implicated in brain-state transitions.

SIGNIFICANCE STATEMENT General anesthesia permits pain-free surgery. Furthermore, because anesthetic agents have the unique ability to reversibly switch the brain from wakefulness to a state of unconsciousness, knowing how and where they work is a potential route to unraveling the neural mechanisms that underlie awareness itself. Using a novel method, we have located a small, and apparently one of a kind, cluster of neurons in the mesopontine tegmentum that are capable of effecting brain-state switching when exposed to GABAA-receptor agonists. This action appears to be mediated by a network of dedicated axonal pathways that project directly and/or indirectly to nearby arousal nuclei of the brainstem and to more distant targets in the forebrain and spinal cord.

19 Sep 16:09

MANAFORT

by noreply@blogger.com (Atrios)
His name is almost, but not quite, movie villain scary.

Washington (CNN) US investigators wiretapped former Trump campaign chairman Paul Manafort under secret court orders before and after the election, sources tell CNN, an extraordinary step involving a high-ranking campaign official now at the center of the Russia meddling probe.

The government snooping continued into early this year, including a period when Manafort was known to talk to President Donald Trump.

Life comes at us fast.

WASHINGTON — Paul J. Manafort was in bed early one morning in July when federal agents bearing a search warrant picked the lock on his front door and raided his Virginia home. They took binders stuffed with documents and copied his computer files, looking for evidence that Mr. Manafort, President Trump’s former campaign chairman, set up secret offshore bank accounts. They even photographed the expensive suits in his closet.

The special counsel, Robert S. Mueller III, then followed the house search with a warning: His prosecutors told Mr. Manafort they planned to indict him, said two people close to the investigation.

Also from the first article:
The conversations between Manafort and Trump continued after the President took office, long after the FBI investigation into Manafort was publicly known, the sources told CNN. They went on until lawyers for the President and Manafort insisted that they stop, according to the sources.

Dumbest crooks on the planet.
15 Sep 00:46

A cargo-sorting DNA robot

by Thubagere, A. J., Li, W., Johnson, R. F., Chen, Z., Doroudi, S., Lee, Y. L., Izatt, G., Wittman, S., Srinivas, N., Woods, D., Winfree, E., Qian, L.
Nosimpler

This looked cool. Then I looked for Ashwin's name, and found a surprise instead.

Two critical challenges in the design and synthesis of molecular robots are modularity and algorithm simplicity. We demonstrate three modular building blocks for a DNA robot that performs cargo sorting at the molecular level. A simple algorithm encoding recognition between cargos and their destinations allows for a simple robot design: a single-stranded DNA with one leg and two foot domains for walking, and one arm and one hand domain for picking up and dropping off cargos. The robot explores a two-dimensional testing ground on the surface of DNA origami, picks up multiple cargos of two types that are initially at unordered locations, and delivers them to specified destinations until all molecules are sorted into two distinct piles. The robot is designed to perform a random walk without any energy supply. Exploiting this feature, a single robot can repeatedly sort multiple cargos. Localization on DNA origami allows for distinct cargo-sorting tasks to take place simultaneously in one test tube or for multiple robots to collectively perform the same task.
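The sorting idea itself, that an undirected random walk plus local pick-up and drop-off rules eventually sorts everything, is easy to see in a toy simulation (a one-dimensional caricature with invented positions and step limits, nothing like the actual DNA implementation):

```python
import random

# Toy sorting-by-random-walk: a walker moves randomly over sites, picks up any
# cargo it steps on when its "hand" is empty, and drops the cargo when it happens
# to reach that cargo type's goal site. All positions and limits are illustrative.

random.seed(0)
n_sites = 20
goal = {"A": 0, "B": n_sites - 1}                  # destination pile for each cargo type
cargo_at = {5: "A", 12: "B", 7: "B", 15: "A"}      # initial, unordered cargo positions
piles = {"A": 0, "B": 0}

pos, carrying, steps = n_sites // 2, None, 0
while sum(piles.values()) < 4 and steps < 100_000:
    pos = max(0, min(n_sites - 1, pos + random.choice((-1, 1))))   # unbiased random step
    steps += 1
    if carrying is None and pos in cargo_at:
        carrying = cargo_at.pop(pos)               # pick up the cargo found here
    elif carrying is not None and pos == goal[carrying]:
        piles[carrying] += 1                       # drop it off at its destination
        carrying = None

print(f"sorted piles {piles} after {steps} steps")
```

No step of the walk is directed, echoing the abstract’s point that the robot performs a random walk without any energy supply.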

15 Sep 00:43

Emergent cortical circuit dynamics contain dense, interwoven ensembles of spike sequences

by Dechery, J. B., MacLean, J. N.

Temporal codes are theoretically powerful encoding schemes, but their precise form in the neocortex remains unknown in part because of the large number of possible codes and the difficulty in disambiguating informative spikes from statistical noise. A biologically plausible and computationally powerful temporal coding scheme is the Hebbian assembly phase sequence (APS), which predicts reliable propagation of spikes between functionally related assemblies of neurons. Here, we sought to measure the inherent capacity of neocortical networks to produce reliable sequences of spikes, as would be predicted by an APS code. To record microcircuit activity, the scale at which computation is implemented, we used two-photon calcium imaging to densely sample spontaneous activity in murine neocortical networks ex vivo. We show that the population spike histogram is sufficient to produce a spatiotemporal progression of activity across the population. To more comprehensively evaluate the capacity for sequential spiking that cannot be explained by the overall population spiking, we identify statistically significant spike sequences. We found a large repertoire of sequence spikes that collectively comprise the majority of spiking in the circuit. Sequences manifest probabilistically and share neuron membership, resulting in unique ensembles of interwoven sequences characterizing individual spatiotemporal progressions of activity. Distillation of population dynamics into its constituent sequences provides a way to capture trial-to-trial variability and may prove to be a powerful decoding substrate in vivo. Informed by these data, we suggest that the Hebbian APS be reformulated as interwoven sequences with flexible assembly membership due to shared overlapping neurons.

NEW & NOTEWORTHY Neocortical computation occurs largely within microcircuits comprised of individual neurons and their connections within small volumes (<500 μm3). We found evidence for a long-postulated temporal code, the Hebbian assembly phase sequence, by identifying repeated and co-occurring sequences of spikes. Variance in population activity across trials was explained in part by the ensemble of active sequences. The presence of interwoven sequences suggests that neuronal assembly structure can be variable and is determined by previous activity.

30 Aug 22:01

Molecular machines open cell membranes

by Víctor García-López

Molecular machines open cell membranes

Nature 548, 7669 (2017). doi:10.1038/nature23657

Authors: Víctor García-López, Fang Chen, Lizanne G. Nilewski, Guillaume Duret, Amir Aliyan, Anatoly B. Kolomeisky, Jacob T. Robinson, Gufeng Wang, Robert Pal & James M. Tour

Beyond the more common chemical delivery strategies, several physical techniques are used to open the lipid bilayers of cellular membranes. These include using electric and magnetic fields, temperature, ultrasound or light to introduce compounds into cells, to release molecular species from cells or to selectively induce programmed cell death (apoptosis) or uncontrolled cell death (necrosis). More recently, molecular motors and switches that can change their conformation in a controlled manner in response to external stimuli have been used to produce mechanical actions on tissue for biomedical applications. Here we show that molecular machines can drill through cellular bilayers using their molecular-scale actuation, specifically nanomechanical action. Upon physical adsorption of the molecular motors onto lipid bilayers and subsequent activation of the motors using ultraviolet light, holes are drilled in the cell membranes. We designed molecular motors and complementary experimental protocols that use nanomechanical action to induce the diffusion of chemical species out of synthetic vesicles, to enhance the diffusion of traceable molecular machines into and within live cells, to induce necrosis and to introduce chemical species into live cells. We also show that, by using molecular machines that bear short peptide addends, nanomechanical action can selectively target specific cell-surface recognition sites. Beyond the in vitro applications demonstrated here, we expect that molecular machines could also be used in vivo, especially as their design progresses to allow two-photon, near-infrared and radio-frequency activation.

16 Aug 17:10

Pavlovian conditioning-induced hallucinations result from overweighting of perceptual priors

by Powers, A. R., Mathys, C., Corlett, P. R.

Some people hear voices that others do not, but only some of those people seek treatment. Using a Pavlovian learning task, we induced conditioned hallucinations in four groups of people who differed orthogonally in their voice-hearing and treatment-seeking statuses. People who hear voices were significantly more susceptible to the effect. Using functional neuroimaging and computational modeling of perception, we identified processes that differentiated voice-hearers from non–voice-hearers and treatment-seekers from non–treatment-seekers and characterized a brain circuit that mediated the conditioned hallucinations. These data demonstrate the profound and sometimes pathological impact of top-down cognitive processes on perception and may represent an objective means to discern people with a need for treatment from those without.

16 Aug 03:41

The complete connectome of a learning and memory centre in an insect brain

by Katharina Eichler

The complete connectome of a learning and memory centre in an insect brain

Nature 548, 7666 (2017). doi:10.1038/nature23455

Authors: Katharina Eichler, Feng Li, Ashok Litwin-Kumar, Youngser Park, Ingrid Andrade, Casey M. Schneider-Mizell, Timo Saumweber, Annina Huser, Claire Eschbach, Bertram Gerber, Richard D. Fetter, James W. Truman, Carey E. Priebe, L. F. Abbott, Andreas S. Thum, Marta Zlatic & Albert Cardona

Associating stimuli with positive or negative reinforcement is essential for survival, but a complete wiring diagram of a higher-order circuit supporting associative memory has not been previously available. Here we reconstruct one such circuit at synaptic resolution, the Drosophila larval mushroom body. We find

16 Aug 03:40

Synopsis: Electronic Tagging for Cells

Researchers have made a radio-frequency identification device that fits inside a cell.


[Physics] Published Wed Jul 26, 2017

16 Aug 03:33

Cosmology for the Curious

by woit
Nosimpler

This goes along with that article about fantasyland.

There’s a new college-level textbook out, Cosmology for the Curious, targeted at physics courses designed to explain basics of cosmology to non-physics majors. The authors are Delia Perlov and Alex Vilenkin. Back in 2006 Vilenkin published a popular book promoting the multiverse, Many Worlds in One, which I wrote about at the time, making the obvious comment that there was nothing like a testable experimental prediction to be found in the book. It seemed to me then that the physics community would never take seriously an inherently untestable theory, recognizing such a thing as pseudo-science. I thought that the only reason claims like those of Vilenkin were getting any attention was that they had some novelty. Surely after a few more years of attempts to extract a prediction of some sort led to nothing, the emptiness of this sort of idea would become clear to all and everyone would lose interest.

Eleven years later I’m as baffled by what has happened to the field of fundamental physics as I’m baffled by what has happened to democracy in the US. As all attempts to extract a testable prediction from the multiverse have failed, instead of going away, pseudo-science has become ever more dominant, with a hugely successful publicity campaign (including a lot of “Fake Physics”) overcoming scientific failure. Now this sort of thing is moving from speculative pop science to getting the status of accepted science, taught as such to undergraduates.

Many are worried about the status of science in our society, as it faces new challenges. I don’t see how the physics community is going to continue to have any credibility with the rest of society if it sits back and allows multiverse mania to enter the canon. Non-scientists taking science classes need to be taught about the importance of always asking: what would it take to show that this theory is wrong? how do I know this is science not ideology?

Any student who reads this textbook and looks for answers to these questions in it will find just two “tests” of the multiverse proposed:

  • Look for evidence of bubble collisions.
  • Believe this paper, and then if you find a black hole population with a certain kind of mass spectrum, that would be evidence for the multiverse.

Of course there is no evidence for bubble collisions or such a black hole population, but these are no-lose “tests”: no matter what you observe or don’t observe, the multiverse “theory” can only win, it can never lose. Is it really a good idea to teach courses telling college students that this is how science works?

15 Aug 14:12

The Kolmogorov option

by Scott
Nosimpler

See the comments too.

Andrey Nikolaevich Kolmogorov was one of the giants of 20th-century mathematics.  I’ve always found it amazing that the same man was responsible both for establishing the foundations of classical probability theory in the 1930s, and also for co-inventing the theory of algorithmic randomness (a.k.a. Kolmogorov complexity) in the 1960s, which challenged the classical foundations, by holding that it is possible after all to talk about the entropy of an individual object, without reference to any ensemble from which the object was drawn.  Incredibly, going strong into his eighties, Kolmogorov then pioneered the study of “sophistication,” which amends Kolmogorov complexity to assign low values both to “simple” objects and “random” ones, and high values only to a third category of objects, which are “neither simple nor random.”  So, Kolmogorov was at the vanguard of the revolution, counter-revolution, and counter-counter-revolution.

But that doesn’t even scratch the surface of his accomplishments: he made fundamental contributions to topology and dynamical systems, and together with Vladimir Arnold, solved Hilbert’s thirteenth problem, showing that any multivariate continuous function can be written as a composition of continuous functions of two variables.  He mentored an awe-inspiring list of young mathematicians, whose names (besides Arnold) include Dobrushin, Dynkin, Gelfand, Martin-Löf, Sinai, and in theoretical computer science, our own Leonid Levin.  If that wasn’t enough, during World War II Kolmogorov applied his mathematical gifts to artillery problems, helping to protect Moscow from German bombardment.

Kolmogorov was private in his personal and political life, which might have had something to do with being gay, at a time and place when that was in no way widely accepted.  From what I’ve read—for example, in Gessen’s biography of Perelman—Kolmogorov seems to have been generally a model of integrity and decency.  He established schools for mathematically gifted children, which became jewels of the Soviet Union; one still reads about them with awe.  And at a time when Soviet mathematics was convulsed by antisemitism—with students of Jewish descent excluded from the top math programs for made-up reasons, sent instead to remote trade schools—Kolmogorov quietly protected Jewish researchers.

OK, but all this leaves a question.  Kolmogorov was a leading and admired Soviet scientist all through the era of Stalin’s purges, the Gulag, the KGB, the murders and disappearances and forced confessions, the show trials, the rewritings of history, the allies suddenly denounced as traitors, the tragicomedy of Lysenkoism.  Anyone as intelligent, individualistic, and morally sensitive as Kolmogorov would obviously have seen through the lies of his government, and been horrified by its brutality.  So then why did he utter nary a word in public against what was happening?

As far as I can tell, the answer is simply: because Kolmogorov knew better than to pick fights he couldn’t win.  He judged that he could best serve the cause of truth by building up an enclosed little bubble of truth, and protecting that bubble from interference by the Soviet system, and even making the bubble useful to the system wherever he could—rather than futilely struggling to reform the system, and simply making martyrs of himself and all his students for his trouble.

There’s a saying of Kolmogorov, which associates wisdom with keeping your mouth shut:

“Every mathematician believes that he is ahead of the others. The reason none state this belief in public is because they are intelligent people.”

There’s also a story that Kolmogorov loved to tell about himself, which presents math as a sort of refuge from the arbitrariness of the world: he said that he once studied to become a historian, but was put off by the fact that historians demanded ten different proofs for the same proposition, whereas in math, a single proof suffices.

There was also a dark side to political quietism.  In 1936, Kolmogorov joined other mathematicians in testifying against his former mentor in the so-called Luzin affair.  By many accounts, he did this because the police blackmailed him, by threatening to reveal his homosexual relationship with Pavel Aleksandrov.  On the other hand, while he was never foolish enough to take on Lysenko directly, Kolmogorov did publish a paper in 1940 courageously supporting Mendelian genetics.


It seems likely that in every culture, there have been truths, which moreover everyone knows to be true on some level, but which are so corrosive to the culture’s moral self-conception that one can’t assert them, or even entertain them seriously, without (in the best case) being ostracized for the rest of one’s life.  In the USSR, those truths were the ones that undermined the entire communist project: for example, that humans are not blank slates; that Mendelian genetics is right; that Soviet collectivized agriculture was a humanitarian disaster.  In our own culture, those truths are—well, you didn’t expect me to say, did you? 🙂

I’ve long been fascinated by the psychology of unspeakable truths.  Like, for any halfway perceptive person in the USSR, there must have been an incredible temptation to make a name for yourself as a daring truth-teller: so much low-hanging fruit!  So much to say that’s correct and important, and that best of all, hardly anyone else is saying!

But then one would think better of it.  It’s not as if, when you speak a forbidden truth, your colleagues and superiors will thank you for correcting their misconceptions.  Indeed, it’s not as if they didn’t already know, on some level, whatever you imagined yourself telling them.  In fact it’s often because they fear you might be right that the authorities see no choice but to make an example of you, lest the heresy spread more widely.  One corollary is that the more reasonably and cogently you make your case, the more you force the authorities’ hand.

But what’s the inner psychology of the authorities?  For some, it probably really is as cynical as the preceding paragraph makes it sound.  But for most, I doubt that.  I think that most authorities simply internalize the ruling ideology so deeply that they equate dissent with sin.  So in particular, the better you can ground your case in empirical facts, the craftier and more conniving a deceiver you become in their eyes, and hence the more virtuous they are for punishing you.  Someone who’s arrived at that point is completely insulated from argument: absent some crisis that makes them reevaluate their entire life, there’s no sense in even trying.  The question of whether or not your arguments have merit won’t even get entered upon, nor will the authority ever be able to repeat back your arguments in a form you’d recognize—for even repeating the arguments correctly could invite accusations of secretly agreeing with them.  Instead, the sole subject of interest will be you: who you think you are, what your motivations were to utter something so divisive and hateful.  And you have as good a chance of convincing authorities of your benign motivations as you’d have of convincing the Inquisition that, sure, you’re a heretic, but the good kind of heretic, the kind who rejects the divinity of Jesus but believes in niceness and tolerance and helping people.  To an Inquisitor, “good heretic” doesn’t parse any better than “round square,” and the very utterance of such a phrase is an invitation to mockery.  If the Inquisition had had Twitter, its favorite sentence would be “I can’t even.”

If it means anything to be a lover of truth, it means that anytime society finds itself stuck in one of these naked-emperor equilibriums—i.e., an equilibrium with certain facts known to nearly everyone, but severe punishments for anyone who tries to make those facts common knowledge—you hope that eventually society climbs its way out.  But crucially, you can hope this while also realizing that, if you tried singlehandedly to change the equilibrium, it wouldn’t achieve anything good for the cause of truth.  If iconoclasts simply throw themselves against a ruling ideology one by one, they can be picked off as easily as tribesmen charging a tank with spears, and each kill will only embolden the tank-gunners still further.  The charging tribesmen don’t even have the assurance that, if truth ultimately does prevail, then they’ll be honored as martyrs: they might instead end up like Ted Nelson babbling about hypertext in 1960, or H.C. Pocklington yammering about polynomial-time algorithms in 1917, nearly forgotten by history for being too far ahead of their time.

Does this mean that, like Winston Smith, the iconoclast simply must accept that 2+2=5, and that a boot will stamp on a human face forever?  No, not at all.  Instead the iconoclast can choose what I think of as the Kolmogorov option.  This is where you build up fortresses of truth in places the ideological authorities don’t particularly understand or care about, like pure math, or butterfly taxonomy, or irregular verbs.  You avoid a direct assault on any beliefs your culture considers necessary for it to operate.  You even seek out common ground with the local enforcers of orthodoxy.  Best of all is a shared enemy, and a way your knowledge and skills might be useful against that enemy.  For Kolmogorov, the shared enemy was the Nazis; for someone today, an excellent choice might be Trump, who’s rightly despised by many intellectual factions that spend most of their time despising each other.  Meanwhile, you wait for a moment when, because of social tectonic shifts beyond your control, the ruling ideology has become fragile enough that truth-tellers acting in concert really can bring it down.  You accept that this moment of reckoning might never arrive, or not in your lifetime.  But even if so, you could still be honored by future generations for building your local pocket of truth, and for not giving falsehood any more aid or comfort than was necessary for your survival.


When it comes to the amount of flak one takes for defending controversial views in public under one’s own name, I defer to almost no one.  For anyone tempted, based on this post, to call me a conformist or coward: how many times have you been denounced online, and from how many different corners of the ideological spectrum?  How many people have demanded your firing?   How many death threats have you received?  How many threatened lawsuits?  How many comments that simply say “kill yourself kike” or similar?  Answer and we can talk about cowardice.

But, yes, there are places even I won’t go, hills I won’t die on.  Broadly speaking:

  • My Law is that, as a scientist, I’ll hold discovering and disseminating the truth to be a central duty of my life, one that overrides almost every other value.  I’ll constantly urge myself to share what I see as the truth, even if it’s wildly unpopular, or makes me look weird, or is otherwise damaging to me.
  • The Amendment to the Law is that I’ll go to great lengths not to hurt anyone else’s feelings: for example, by propagating negative stereotypes, or by saying anything that might discourage any enthusiastic person from entering science.  And if I don’t understand what is or isn’t hurtful, then I’ll defer to the leading intellectuals in my culture to tell me.  This Amendment often overrides the Law, causing me to bite my tongue.
  • The Amendment to the Amendment is that, when pushed, I’ll stand by what I care about—such as free scientific inquiry, liberal Enlightenment norms, humor, clarity, and the survival of the planet and of family and friends and colleagues and nerdy misfits wherever they might be found.  So if someone puts me in a situation where there’s no way to protect what I care about without speaking a truth that hurts someone’s feelings, then I might speak the truth, feelings be damned.  (Even then, though, I’ll try to minimize collateral damage.)

When I see social media ablaze with this or that popular falsehood, I sometimes feel the “Galileo urge” washing over me.  I think: I’m a tenured professor with a semi-popular blog.  How can I look myself in the mirror, if I won’t use my platform and relative job safety to declare to the world, “and yet it moves”?

But then I remember that even Galileo weighed his options and tried hard to be prudent.  In his mind, the Dialogue Concerning the Two Chief World Systems actually represented a compromise (!).  Galileo never declared outright that the earth orbits the sun.  Instead, he put the Copernican doctrine, as a “possible view,” into the mouth of his character Salviati—only to have Simplicio “refute” Salviati, by the final dialogue, with the argument that faith always trumps reason, and that human beings are pathetically unequipped to deduce the plan of God from mere surface appearances.  Then, when that fig-leaf turned out not to be wide enough to fool the Church, Galileo quickly capitulated.  He repented of his error, and agreed never to defend the Copernican heresy again.  And he didn’t, at least not publicly.

Some have called Galileo a coward for that.  But the great David Hilbert held a different view.  Hilbert said that science, unlike religion, has no need for martyrs, because it’s based on facts that can’t be denied indefinitely.  Given that, Hilbert considered Galileo’s response to be precisely correct: in effect Galileo told the Inquisitors, hey, you’re the ones with the torture rack.  Just tell me which way you want it.  I can have the earth orbiting Mars and Venus in figure-eights by tomorrow if you decree it so.

Three hundred years later, Andrey Kolmogorov would say to the Soviet authorities, in so many words: hey, you’re the ones with the Gulag and secret police.  Consider me at your service.  I’ll even help you stop Hitler’s ideology from taking over the world—you’re 100% right about that one, I’ll give you that.  Now as for your own wondrous ideology: just tell me the dogma of the week, and I’ll try to make sure Soviet mathematics presents no threat to it.

There’s a quiet dignity to Kolmogorov’s (and Galileo’s) approach: a dignity that I suspect will be alien to many, but recognizable to those in the business of science.


Comment Policy: I welcome discussion about the responses of Galileo, Kolmogorov, and other historical figures to official ideologies that they didn’t believe in; and about the meta-question of how a truth-valuing person ought to behave when living under such ideologies.  In the hopes of maintaining a civil discussion, any comments that mention current hot-button ideological disputes will be ruthlessly deleted.

30 Jul 18:08

A Compositional Framework for Reaction Networks

by John Baez

For a long time Blake Pollard and I have been working on ‘open’ chemical reaction networks: that is, networks of chemical reactions where some chemicals can flow in from an outside source, or flow out. The picture to keep in mind is something like this:

where the yellow circles are different kinds of chemicals and the aqua boxes are different reactions. The purple dots in the sets X and Y are ‘inputs’ and ‘outputs’, where certain kinds of chemicals can flow in or out.

Here’s our paper on this stuff:

• John Baez and Blake Pollard, A compositional framework for reaction networks, Reviews in Mathematical Physics 29, 1750028.

Blake and I gave talks about this stuff in Luxembourg this June, at a nice conference called Dynamics, thermodynamics and information processing in chemical networks. So, if you’re the sort who prefers talk slides to big scary papers, you can look at those:

• John Baez, The mathematics of open reaction networks.

• Blake Pollard, Black-boxing open reaction networks.

But I want to say here what we do in our paper, because it’s pretty cool, and it took a few years to figure it out. To get things to work, we needed my student Brendan Fong to invent the right category-theoretic formalism: ‘decorated cospans’. But we also had to figure out the right way to think about open dynamical systems!

In the end, we figured out how to first ‘gray-box’ an open reaction network, converting it into an open dynamical system, and then ‘black-box’ it, obtaining the relation between input and output flows and concentrations that holds in steady state. The first step extracts the dynamical behavior of an open reaction network; the second extracts its static behavior. And both these steps are functors!

Lawvere had the idea that the process of assigning ‘meaning’ to expressions could be seen as a functor. This idea has caught on in theoretical computer science: it’s called ‘functorial semantics’. So, what we’re doing here is applying functorial semantics to chemistry.

Now Blake has passed his thesis defense based on this work, and he just needs to polish up his thesis a little before submitting it. This summer he’s doing an internship at the Princeton branch of the engineering firm Siemens. He’s working with Arquimedes Canedo on ‘knowledge representation’.

But I’m still eager to dig deeper into open reaction networks. They’re a small but nontrivial step toward my dream of a mathematics of living systems. My working hypothesis is that living systems seem ‘messy’ to physicists because they operate at a higher level of abstraction. That’s what I’m trying to explore.

Here’s the idea of our paper.

The idea

Reaction networks are a very general framework for describing processes where entities interact and transform into other entities. While they first showed up in chemistry, and are often called ‘chemical reaction networks’, they have lots of other applications. For example, a basic model of infectious disease, the ‘SIRS model’, is described by this reaction network:

S + I \stackrel{\iota}{\longrightarrow} 2 I  \qquad  I \stackrel{\rho}{\longrightarrow} R \stackrel{\lambda}{\longrightarrow} S

We see here three types of entity, called species:

S: susceptible,
I: infected,
R: resistant.

We also have three ‘reactions’:

\iota : S + I \to 2 I: infection, in which a susceptible individual meets an infected one and becomes infected;
\rho : I \to R: recovery, in which an infected individual gains resistance to the disease;
\lambda : R \to S: loss of resistance, in which a resistant individual becomes susceptible.

In general, a reaction network involves a finite set of species, but reactions go between complexes, which are finite linear combinations of these species with natural number coefficients. The reaction network is a directed graph whose vertices are certain complexes and whose edges are called reactions.

If we attach a positive real number called a rate constant to each reaction, a reaction network determines a system of differential equations saying how the concentrations of the species change over time. This system of equations is usually called the rate equation. In the example I just gave, the rate equation is

\begin{array}{ccl} \displaystyle{\frac{d S}{d t}} &=& r_\lambda R - r_\iota S I \\ \\ \displaystyle{\frac{d I}{d t}} &=&  r_\iota S I - r_\rho I \\  \\ \displaystyle{\frac{d R}{d t}} &=& r_\rho I - r_\lambda R \end{array}

Here r_\iota, r_\rho and r_\lambda are the rate constants for the three reactions, and S, I, R now stand for the concentrations of the three species, which are treated in a continuum approximation as smooth functions of time:

S, I, R: \mathbb{R} \to [0,\infty)

The rate equation can be derived from the law of mass action, which says that any reaction occurs at a rate equal to its rate constant times the product of the concentrations of the species entering it as inputs.
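
If you like to see such things concretely, here is a minimal numerical sketch (mine, not from the paper) of the SIRS rate equation above, in Python, with hypothetical rate constants and a crude forward Euler step:

# Mass-action rate equation for the SIRS network, with made-up rate constants.
r_iota, r_rho, r_lam = 0.3, 0.1, 0.05   # infection, recovery, loss of resistance
S, I, R = 0.99, 0.01, 0.0               # initial concentrations
dt = 0.01
for _ in range(100_000):                # integrate up to t = 1000
    dS = r_lam * R - r_iota * S * I
    dI = r_iota * S * I - r_rho * I
    dR = r_rho * I - r_lam * R
    S, I, R = S + dt * dS, I + dt * dI, R + dt * dR
print(S, I, R, S + I + R)               # S + I + R is conserved exactly by these steps

The conserved total reflects the fact that every reaction in this particular network takes in as many individuals as it puts out.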

But a reaction network is more than just a stepping-stone to its rate equation! Interesting qualitative properties of the rate equation, like the existence and uniqueness of steady state solutions, can often be determined just by looking at the reaction network, regardless of the rate constants. Results in this direction began with Feinberg and Horn’s work in the 1960’s, leading to the Deficiency Zero and Deficiency One Theorems, and more recently to Craciun’s proof of the Global Attractor Conjecture.

In our paper, Blake and I present a ‘compositional framework’ for reaction networks. In other words, we describe rules for building up reaction networks from smaller pieces, in such a way that the rate equation of the whole can be figured out from those of the pieces. But this framework requires that we view reaction networks in a somewhat different way, as ‘Petri nets’.

Petri nets were invented by Carl Petri in 1939, when he was just a teenager, for the purposes of chemistry. Much later, they became popular in theoretical computer science, biology and other fields. A Petri net is a bipartite directed graph: vertices of one kind represent species, vertices of the other kind represent reactions. The edges into a reaction specify which species are inputs to that reaction, while the edges out specify its outputs.

You can easily turn a reaction network into a Petri net and vice versa. For example, the reaction network above translates into this Petri net:

Beware: there are a lot of different names for the same thing, since the terminology comes from several communities. In the Petri net literature, species are called places and reactions are called transitions. In fact, Petri nets are sometimes called ‘place-transition nets’ or ‘P/T nets’. On the other hand, chemists call them ‘species-reaction graphs’ or ‘SR-graphs’. And when each reaction of a Petri net has a rate constant attached to it, it is often called a ‘stochastic Petri net’.

While some qualitative properties of a rate equation can be read off from a reaction network, others are more easily read from the corresponding Petri net. For example, properties of a Petri net can be used to determine whether its rate equation can have multiple steady states.
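
To make the dictionary between the two pictures concrete, here is a sketch (again mine, with hypothetical rate constants) of a Petri net with rates as a plain data structure, together with the generic mass-action recipe that turns any such net into the right-hand side of its rate equation:

# A Petri net with rate constants: places (species) and transitions (reactions),
# each transition listing the multiplicities of its input and output places.
sirs = {
    "places": ["S", "I", "R"],
    "transitions": {
        "infection": {"rate": 0.3,  "in": {"S": 1, "I": 1}, "out": {"I": 2}},
        "recovery":  {"rate": 0.1,  "in": {"I": 1},         "out": {"R": 1}},
        "loss":      {"rate": 0.05, "in": {"R": 1},         "out": {"S": 1}},
    },
}

def mass_action_rhs(net, conc):
    # Law of mass action: a transition fires at a speed equal to its rate constant
    # times the product of the concentrations of its input places; firing consumes
    # inputs and produces outputs according to their multiplicities.
    d = {p: 0.0 for p in net["places"]}
    for t in net["transitions"].values():
        speed = t["rate"]
        for p, m in t["in"].items():
            speed *= conc[p] ** m
        for p, m in t["in"].items():
            d[p] -= m * speed
        for p, m in t["out"].items():
            d[p] += m * speed
    return d

print(mass_action_rhs(sirs, {"S": 0.9, "I": 0.1, "R": 0.0}))   # matches the SIRS rate equation above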

Petri nets are also better suited to a compositional framework. The key new concept is an ‘open’ Petri net. Here’s an example:

The box at left is a set X of ‘inputs’ (which happens to be empty), while the box at right is a set Y of ‘outputs’. Both inputs and outputs are points at which entities of various species can flow in or out of the Petri net. We say the open Petri net goes from X to Y. In our paper, we show how to treat it as a morphism f : X \to Y in a category we call \textrm{RxNet}.

Given an open Petri net with rate constants assigned to each reaction, our paper explains how to get its ‘open rate equation’. It’s just the usual rate equation with extra terms describing inflows and outflows. The above example has this open rate equation:

\begin{array}{ccr} \displaystyle{\frac{d S}{d t}} &=&  - r_\iota S I - o_1 \\ \\ \displaystyle{\frac{d I}{d t}} &=&  r_\iota S I - o_2  \end{array}

Here o_1, o_2 : \mathbb{R} \to \mathbb{R} are arbitrary smooth functions describing outflows as a function of time.

Given another open Petri net g: Y \to Z, for example this:

it will have its own open rate equation, in this case

\begin{array}{ccc} \displaystyle{\frac{d S}{d t}} &=& r_\lambda R + i_2 \\ \\ \displaystyle{\frac{d I}{d t}} &=& - r_\rho I + i_1 \\  \\ \displaystyle{\frac{d R}{d t}} &=& r_\rho I - r_\lambda R  \end{array}

Here i_1, i_2: \mathbb{R} \to \mathbb{R} are arbitrary smooth functions describing inflows as a function of time. Now for a tiny bit of category theory: we can compose f and g by gluing the outputs of f to the inputs of g. This gives a new open Petri net gf: X \to Z, as follows:

But this open Petri net gf has an empty set of inputs, and an empty set of outputs! So it amounts to an ordinary Petri net, and its open rate equation is a rate equation of the usual kind. Indeed, this is the Petri net we have already seen.

As it turns out, there’s a systematic procedure for combining the open rate equations for two open Petri nets to obtain that of their composite. In the example we’re looking at, we just identify the outflows of f with the inflows of g (setting i_1 = o_1 and i_2 = o_2) and then add the right hand sides of their open rate equations.
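
Here is a small symbolic check of that procedure, assuming sympy is available; since the pictures aren’t reproduced here, I’ve relabelled the inflows and outflows by the species they attach to. Identifying each outflow of f with the matching inflow of g and adding the right-hand sides makes the boundary terms cancel, leaving the closed SIRS rate equation:

import sympy as sp

S, I, R = sp.symbols("S I R")
r_iota, r_rho, r_lam = sp.symbols("r_iota r_rho r_lam")
o_S, o_I = sp.symbols("o_S o_I")   # outflows of f, attached to S and I
i_S, i_I = sp.symbols("i_S i_I")   # inflows of g, attached to S and I

f_rhs = {S: -r_iota * S * I - o_S, I: r_iota * S * I - o_I}                   # open rate equation of f
g_rhs = {S: r_lam * R + i_S, I: -r_rho * I + i_I, R: r_rho * I - r_lam * R}   # open rate equation of g

glue = {i_S: o_S, i_I: o_I}        # identify each inflow of g with the matching outflow of f
composite = {x: f_rhs.get(x, sp.Integer(0)) + g_rhs[x].subs(glue) for x in (S, I, R)}

closed = {S: r_lam * R - r_iota * S * I, I: r_iota * S * I - r_rho * I, R: r_rho * I - r_lam * R}
print(all(sp.simplify(composite[x] - closed[x]) == 0 for x in (S, I, R)))     # True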

The first goal of our paper is to precisely describe this procedure, and to prove that it defines a functor

\diamond: \textrm{RxNet} \to \textrm{Dynam}

from \textrm{RxNet} to a category \textrm{Dynam} where the morphisms are ‘open dynamical systems’. By a dynamical system, we essentially mean a vector field on \mathbb{R}^n, which can be used to define a system of first-order ordinary differential equations in n variables. An example is the rate equation of a Petri net. An open dynamical system allows for the possibility of extra terms that are arbitrary functions of time, such as the inflows and outflows in an open rate equation.

In fact, we prove that \textrm{RxNet} and \textrm{Dynam} are symmetric monoidal categories and that \diamond is a symmetric monoidal functor. To do this, we use Brendan Fong’s theory of ‘decorated cospans’.

Decorated cospans are a powerful general tool for describing open systems. A cospan in any category is just a diagram like this:

We are mostly interested in cospans in \mathrm{FinSet}, the category of finite sets and functions between these. The set S, the so-called apex of the cospan, is the set of states of an open system. The sets X and Y are the inputs and outputs of this system. The legs of the cospan, meaning the morphisms i: X \to S and o: Y \to S, describe how these inputs and outputs are included in the system. In our application, S is the set of species of a Petri net.

For example, we may take this reaction network:

A+B \stackrel{\alpha}{\longrightarrow} 2C \quad \quad C \stackrel{\beta}{\longrightarrow} D

treat it as a Petri net with S = \{A,B,C,D\}:

and then turn that into an open Petri net by choosing any finite sets X,Y and maps i: X \to S, o: Y \to S, for example like this:

(Notice that the maps including the inputs and outputs into the states of the system need not be one-to-one. This is technically useful, but it introduces some subtleties that I don’t feel like explaining right now.)

An open Petri net can thus be seen as a cospan of finite sets whose apex S is ‘decorated’ with some extra information, namely a Petri net with S as its set of species. Fong’s theory of decorated cospans lets us define a category with open Petri nets as morphisms, with composition given by gluing the outputs of one open Petri net to the inputs of another.
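
Setting the decoration aside for a moment, here is a sketch of the set-level part of that composition: gluing two cospans of finite sets along their common boundary by a pushout, implemented with a tiny union-find. The names are mine, chosen to match the SIRS example.

def compose_cospans(apex_f, apex_g, Y, o, i):
    # o : Y -> apex_f says where each boundary point sits in the first apex,
    # i : Y -> apex_g says where it sits in the second. The pushout is the
    # disjoint union of the two apexes with o(y) glued to i(y) for every y.
    elems = [("f", s) for s in apex_f] + [("g", s) for s in apex_g]
    parent = {e: e for e in elems}
    def find(e):
        while parent[e] != e:
            parent[e] = parent[parent[e]]
            e = parent[e]
        return e
    for y in Y:
        parent[find(("f", o[y]))] = find(("g", i[y]))
    classes = {}
    for e in elems:
        classes.setdefault(find(e), []).append(e)
    return list(classes.values())

# Gluing the two open SIRS pieces: f has species {S, I}, g has species {S, I, R},
# and the two boundary points identify S with S and I with I.
print(compose_cospans(["S", "I"], ["S", "I", "R"], ["y1", "y2"],
                      {"y1": "S", "y2": "I"}, {"y1": "S", "y2": "I"}))
# three equivalence classes: the species of the glued SIRS net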

We call the functor

\diamond: \textrm{RxNet} \to \textrm{Dynam}

gray-boxing because it hides some but not all the internal details of an open Petri net. (In the paper we draw it as a gray box, but that’s too hard here!)

We can go further and black-box an open dynamical system. This amounts to recording only the relation between input and output variables that must hold in steady state. We prove that black-boxing gives a functor

\square: \textrm{Dynam} \to \mathrm{SemiAlgRel}

(yeah, the box here should be black, and in our paper it is). Here \mathrm{SemiAlgRel} is a category where the morphisms are semi-algebraic relations between real vector spaces, meaning relations defined by polynomials and inequalities. This relies on the fact that our dynamical systems involve algebraic vector fields, meaning those whose components are polynomials; more general dynamical systems would give more general relations.

That semi-algebraic relations are closed under composition is a nontrivial fact, a spinoff of the Tarski–Seidenberg theorem. This says that a subset of \mathbb{R}^{n+1} defined by polynomial equations and inequalities can be projected down onto \mathbb{R}^n, and the resulting set is still definable in terms of polynomial identities and inequalities. This wouldn’t be true if we didn’t allow inequalities. It’s neat to see this theorem, important in mathematical logic, showing up in chemistry!
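
As a toy illustration of the black-boxing step (my sketch, assuming sympy): take the open rate equation for f given earlier, set the time derivatives to zero, and what remains is a polynomial relation between the boundary concentrations and the boundary flows, exactly the kind of semi-algebraic relation the functor records.

import sympy as sp

S, I, r_iota, o_1, o_2 = sp.symbols("S I r_iota o_1 o_2")

# Steady state of the open rate equation for f:
#   0 = -r_iota*S*I - o_1,   0 = r_iota*S*I - o_2
steady = sp.solve([-r_iota * S * I - o_1, r_iota * S * I - o_2], [o_1, o_2], dict=True)
print(steady)   # [{o_1: -r_iota*S*I, o_2: r_iota*S*I}], so in particular o_1 + o_2 = 0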

Structure of the paper

Okay, now you’re ready to read our paper! Here’s how it goes:

In Section 2 we review and compare reaction networks and Petri nets. In Section 3 we construct a symmetric monoidal category \textrm{RNet} where an object is a finite set and a morphism is an open reaction network (or more precisely, an isomorphism class of open reaction networks). In Section 4 we enhance this construction to define a symmetric monoidal category \textrm{RxNet} where the transitions of the open reaction networks are equipped with rate constants. In Section 5 we explain the open dynamical system associated to an open reaction network, and in Section 6 we construct a symmetric monoidal category \textrm{Dynam} of open dynamical systems. In Section 7 we construct the gray-boxing functor

\diamond: \textrm{RxNet} \to \textrm{Dynam}

In Section 8 we construct the black-boxing functor

\square: \textrm{Dynam} \to \mathrm{SemiAlgRel}

We show both of these are symmetric monoidal functors.

Finally, in Section 9 we fit our results into a larger ‘network of network theories’. This is where various results in various papers I’ve been writing in the last few years start assembling to form a big picture! But this picture needs to grow….


27 Jul 23:15

Observing a quantum Maxwell demon at work [Physics]

by Nathanael Cottet, Sebastien Jezouin, Landry Bretheau, Philippe Campagne-Ibarcq, Quentin Ficheux, Janet Anders, Alexia Auffeves, Remi Azouit, Pierre Rouchon, Benjamin Huard
In apparent contradiction to the laws of thermodynamics, Maxwell’s demon is able to cyclically extract work from a system in contact with a thermal bath, exploiting the information about its microstate. The resolution of this paradox required the insight that an intimate relationship exists between information and thermodynamics. Here, we...
27 Jul 22:44

The great misunderstanding about peer review and the nature of scientific facts

by admin

Last week I organized a workshop on the future of academic publication. My point was that our current system, based on private pre-publication peer review, is archaic. I noted that the way the peer review system is currently organized (where external reviewers judge both the quality of the science and the interest for the journal) represents just a few decades in the history of science. It can hardly qualify as the way science is or should be done. It is a historical feature. For example, only one of Einstein’s papers was formally peer-reviewed; Crick & Watson’s DNA paper was not formally peer-reviewed. Many journals introduced external peer review in the 1960s or 1970s to deal with the growth in the number and variety of submissions (see e.g. Baldwin, 2015); before that, editors would decide whether to publish the papers they received, depending on the number of pages they could print.

Given the possibilities the internet offers, there is no longer any reason to couple the two current roles of peer review: editorial selection and scientific discussion. One could simply share one’s work online, get feedback from the community to discuss the work, and then let people recommend papers to their colleagues and compile all sorts of reader’s digests. No time wasted in multiple submissions, no prestige misattributed to publications in glamour journals, which do no better a job than any other journal at catching errors and fraud. Just the science and the public discussion of science.

But there is a lot of resistance to this idea, that is, to dropping the requirement that papers be formally approved by peer reviewers before they are published. Otherwise, so many people claim, the scientific world would be polluted by all sorts of unverified claims; it would not be science anymore, just gossip. I have attributed this attitude to conservatism, first because, as noted above, this system is a rather recent addition to the scientific enterprise, and second because papers are already published before peer review. We call those “preprints”, but really these are scientific papers made public, so by definition they are published. I follow the preprints in my field and I don’t see any particular loss in quality.

However, I think I was missing a key element. The more profound reason why many people, in particular experimental biologists, are so attached to peer review is in my view that they hold naive philosophical views about the notion of truth in science. A paper should be peer-reviewed because otherwise you can’t cite it as a true fact. Peer review validates science, thanks to experts who make sure that the claims of the authors are actually true. Of course it can go wrong and reviewers might miss something, but that is the purpose of peer review. This view is reflected in the tendency, especially in biology journals, to choose titles that look like established truths: “Hunger is controlled by HGRase”, instead of “The molecular control of hunger”. Scientists and journalists can then write revealed truths with a verse reference, such as “Hunger is controlled by HGRase (McDonald et al., 2017)”.

The great misunderstanding is that truth is a notion that applies to logical propositions (for example, mathematical theorems), not to empirical claims. This has been well argued by Popper, for example. Truth is by nature a theoretical concept. Everything said is said with words, and in this sense it always refers to theoretical concepts. One can only judge whether observations are congruent with the meaning attributed to the words, and that meaning necessarily has a theoretical nature. There is no such thing as an “established fact”. This is so even of what we might consider as direct observations. Take for example the claim “The resting potential of neurons is -70 mV”. This is a theoretical statement. Why? First, because to establish it, I have recorded a number of neurons. If you test it, it will be on a different neuron, which I have not measured. So I am making a theoretical claim. I also presumably tested my neurons with a particular method, in a particular brain region and species, none of which is mentioned in the claim. But my claim makes no reference to the method by which I have made the inference. That would be the “methods” part of my paper, not the conclusion, and when you cite my paper, you will cite it because of the conclusion, the “established fact”; you will not be referring to the methods, which you regard as the means of establishing the fact. It is the role of the reviewers to check the methods, to check that they do establish the fact.

But these are trivial remarks. It is not just that the method matters. The very notion of an observation always implicitly relies on a theoretical background. When I say that the resting potential is -70 mV, I mean that there is a potential difference of -70 mV across the membrane. But that’s not what I measure. I measure the difference in potential between some point outside the cell and the inside of a patch pipette whose solution is in contact with the cell’s inside. So I am assuming the potential is the same in all points of the cytosol, even though I have not tested it. I am also implicitly modeling the cytosol as a solution, even though the reality is more complex than that, given the mass of charged proteins in it. I am assuming that the extracellular potential is constant. I am assuming that my pipette solution reasonably matches the actual cytosol solution, given that “solution” is only a convenient model. I am implicitly making all sorts of theoretical assumptions, which have a lot of empirical support but are still of a theoretical nature.

I have tried with this example to show that even a very simple “fact” is actually a theoretical proposition, with many layers of assumptions. But of course in general, papers typically make claims that rely less firmly on accepted theoretical grounds, since they must be “novel”. So it is never the case that a paper definitely proves its conclusions. Because conclusions have a theoretical nature, all that can be checked is whether observations are consistent with the authors’ interpretation.

So the goal of peer review can’t be to establish the truth. If it were the case, then why would reviewers ever disagree? They disagree because they cannot actually judge whether a claim is true; they can only say whether they are personally convinced. This makes the current peer review system extremely poor, because all the information we get is: two anonymous people were convinced (and maybe others were not, but we’ll never find out). What would be more useful would be to have an open public discussion, with criticisms, qualifications and alternative interpretations fully disclosed for anyone to read and make their own opinion. In such a system, the notion of a stamp of approval on a paper would simply be absurd; why hide the disapprovals? There is the paper, and there is the scientific discussion of the paper, and that is all there needs to be.

There is some concern these days that peer reviewed research is unreliable. Well, science is unreliable. That is almost what defines it: it can be criticized and revised. Seeing peer review as the system that establishes the scientific truth is not only a historical error, it is a great philosophical error, and a dangerous bureaucratic view of science. We don’t need editorial decisions based on peer review. We need free publication (we have it) and we need open scientific discussion (it’s coming). That’s all we need.

27 Jul 21:57

On the universality of the incompressible Euler equation on compact manifolds

by Terence Tao

I’ve just uploaded to the arXiv my paper “On the universality of the incompressible Euler equation on compact manifolds“, submitted to Discrete and Continuous Dynamical Systems. This is a variant of my recent paper on the universality of potential well dynamics, but instead of trying to embed dynamical systems into a potential well {\partial_{tt} u = -\nabla V(u)}, here we try to embed dynamical systems into the incompressible Euler equations

\displaystyle  \partial_t u + \nabla_u u = - \mathrm{grad}_g p \ \ \ \ \ (1)

\displaystyle  \mathrm{div}_g u = 0

on a Riemannian manifold {(M,g)}. (One is particularly interested in the case of flat manifolds {M}, particularly {{\bf R}^3} or {({\bf R}/{\bf Z})^3}, but for the main result of this paper it is essential that one is permitted to consider curved manifolds.) This system, first studied by Ebin and Marsden, is the natural generalisation of the usual incompressible Euler equations to curved space; it can be viewed as the formal geodesic flow equation on the infinite-dimensional manifold of volume-preserving diffeomorphisms on {M} (see this previous post for a discussion of this in the flat space case).

The Euler equations can be viewed as a nonlinear equation in which the nonlinearity is a quadratic function of the velocity field {u}. It is thus natural to compare the Euler equations with quadratic ODE of the form

\displaystyle  \partial_t y = B(y,y) \ \ \ \ \ (2)

where {y: {\bf R} \rightarrow {\bf R}^n} is the unknown solution, and {B: {\bf R}^n \times {\bf R}^n \rightarrow {\bf R}^n} is a bilinear map, which we may assume without loss of generality to be symmetric. One can ask whether such an ODE may be linearly embedded into the Euler equations on some Riemannian manifold {(M,g)}, which means that there is an injective linear map {U: {\bf R}^n \rightarrow \Gamma(TM)} from {{\bf R}^n} to smooth vector fields on {M}, as well as a bilinear map {P: {\bf R}^n \times {\bf R}^n \rightarrow C^\infty(M)} to smooth scalar fields on {M}, such that the map {y \mapsto (U(y), P(y,y))} takes solutions to (2) to solutions to (1), or equivalently that

\displaystyle  U(B(y,y)) + \nabla_{U(y)} U(y) = - \mathrm{grad}_g P(y,y)

\displaystyle  \mathrm{div}_g U(y) = 0

for all {y \in {\bf R}^n}.

For simplicity let us restrict {M} to be compact. There is an obvious necessary condition for this embeddability to occur, which comes from the energy conservation law for the Euler equations; unpacking everything, this implies that the bilinear form {B} in (2) has to obey a cancellation condition

\displaystyle  \langle B(y,y), y \rangle = 0 \ \ \ \ \ (3)

for some positive definite inner product {\langle, \rangle: {\bf R}^n \times {\bf R}^n \rightarrow {\bf R}} on {{\bf R}^n}. The main result of the paper is the converse to this statement: if {B} is a symmetric bilinear form obeying a cancellation condition (3), then it is possible to embed the equations (2) into the Euler equations (1) on some Riemannian manifold {(M,g)}; the catch is that this manifold will depend on the form {B} and on the dimension {n} (in fact in the construction I have, {M} is given explicitly as {SO(n) \times ({\bf R}/{\bf Z})^{n+1}}, with a funny metric on it that depends on {B}).
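
The cancellation condition (3) is easy to play with numerically. Here is a sketch, not from the paper, using a familiar finite-dimensional example: Euler’s rigid-body equations, whose quadratic nonlinearity conserves kinetic energy, so that (3) holds for the inner product defined by a (here arbitrarily chosen) inertia matrix.

import numpy as np

# Rigid-body equations as a quadratic ODE y' = B(y, y), with B(y, y) = A^{-1}((A y) x y)
# for a positive definite inertia matrix A. The energy (1/2) y^T A y is conserved,
# i.e. <B(y, y), y> = 0 for the inner product <u, v> = u^T A v.
A = np.diag([1.0, 2.0, 3.0])
Ainv = np.linalg.inv(A)

def Q(y):
    return Ainv @ np.cross(A @ y, y)

rng = np.random.default_rng(0)
for _ in range(3):
    y = rng.normal(size=3)
    print(np.dot(Q(y), A @ y))          # ~1e-16: the cancellation condition holds

def rk4_step(y, h):                     # classical Runge-Kutta step
    k1 = Q(y); k2 = Q(y + h / 2 * k1); k3 = Q(y + h / 2 * k2); k4 = Q(y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

y = np.array([1.0, 0.1, 0.1])
E0 = 0.5 * y @ A @ y
for _ in range(10_000):
    y = rk4_step(y, 0.01)
print(abs(0.5 * y @ A @ y - E0))        # tiny: energy is conserved along the flow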

As a consequence, any finite dimensional portion of the usual “dyadic shell models” used as simplified toy models of the Euler equation, can actually be embedded into a genuine Euler equation, albeit on a high-dimensional and curved manifold. This includes portions of the self-similar “machine” I used in a previous paper to establish finite time blowup for an averaged version of the Navier-Stokes (or Euler) equations. Unfortunately, the result in this paper does not apply to infinite-dimensional ODE, so I cannot yet establish finite time blowup for the Euler equations on a (well-chosen) manifold. It does not seem so far beyond the realm of possibility, though, that this could be done in the relatively near future. In particular, the result here suggests that one could construct something resembling a universal Turing machine within an Euler flow on a manifold, which was one ingredient I would need to engineer such a finite time blowup.

The proof of the main theorem proceeds by an “elimination of variables” strategy that was used in some of my previous papers in this area, though in this particular case the Nash embedding theorem (or variants thereof) are not required. The first step is to lessen the dependence on the metric {g} by partially reformulating the Euler equations (1) in terms of the covelocity {g \cdot u} (which is a {1}-form) instead of the velocity {u}. Using the freedom to modify the dimension of the underlying manifold {M}, one can also decouple the metric {g} from the volume form that is used to obtain the divergence-free condition. At this point the metric can be eliminated, with a certain positive definiteness condition between the velocity and covelocity taking its place. After a substantial amount of trial and error (motivated by some “two-and-a-half-dimensional” reductions of the three-dimensional Euler equations, and also by playing around with a number of variants of the classic “separation of variables” strategy), I eventually found an ansatz for the velocity and covelocity that automatically solved most of the components of the Euler equations (as well as most of the positive definiteness requirements), as long as one could find a number of scalar fields that obeyed a certain nonlinear system of transport equations, and also obeyed a positive definiteness condition. Here I was stuck for a bit because the system I ended up with was overdetermined – more equations than unknowns. After trying a number of special cases I eventually found a solution to the transport system on the sphere, except that the scalar functions sometimes degenerated and so the positive definiteness property I wanted was only obeyed with positive semi-definiteness. I tried for some time to perturb this example into a strictly positive definite solution before eventually working out that this was not possible. Finally I had the brainwave to lift the solution from the sphere to an even more symmetric space, and this quickly led to the final solution of the problem, using the special orthogonal group rather than the sphere as the underlying domain. The solution ended up being rather simple in form, but it is still somewhat miraculous to me that it exists at all; in retrospect, given the overdetermined nature of the problem, relying on a large amount of symmetry to cut down the number of equations was basically the only hope.


Filed under: math.AP, math.DS, math.MG, paper Tagged: Euler equations, universality
25 Jul 23:37

There's Always Money In The Banana Stand For "Capitalism"

by noreply@blogger.com (Atrios)
Why I obsess about this stuff...

Proponents of self-driving cars say they'll make the world safer, but autonomous vehicles need to predict what bicyclists are going to do. Now researchers say part of the answer is to have bikes feed information to cars.

Just follow this logically about what they think their toys will require....

25 Jul 20:10

Correlated Equilibria in Game Theory

by John Baez

Erica Klarreich is one of the few science journalists who explains interesting things I don’t already know clearly enough so I can understand them. I recommend her latest article:

• Erica Klarreich, In game theory, no clear path to equilibrium, Quanta, 18 July 2017.

Economists like the concept of ‘Nash equilibrium’, but it’s problematic in some ways. This matters for society at large.

In a Nash equilibrium for a multi-player game, no player can improve their payoff by unilaterally changing their strategy. This doesn’t mean everyone is happy: it’s possible to be trapped in a Nash equilibrium where everyone is miserable, because anyone changing their strategy unilaterally would be even more miserable. (Think ‘global warming’.)
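
As a tiny illustration (with hypothetical payoffs, not taken from the article): in the prisoner’s dilemma, a brute-force check of this ‘no profitable unilateral deviation’ condition over pure strategies finds exactly one equilibrium, the one where both players defect and both end up worse off than if both had cooperated.

import itertools

actions = ["C", "D"]                                    # cooperate, defect
u1 = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
u2 = {(a, b): u1[(b, a)] for (a, b) in u1}              # symmetric game

def is_pure_nash(a, b):
    no_gain_1 = all(u1[(a, b)] >= u1[(a2, b)] for a2 in actions)   # player 1 can't do better
    no_gain_2 = all(u2[(a, b)] >= u2[(a, b2)] for b2 in actions)   # player 2 can't do better
    return no_gain_1 and no_gain_2

print([p for p in itertools.product(actions, actions) if is_pure_nash(*p)])
# [('D', 'D')]: the only pure Nash equilibrium, and everyone is miserable there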

The great thing about Nash equilibria is that their meaning is easy to fathom, and they exist. John Nash won a Nobel prize for a paper proving that they exist. His paper was less than one page long. But he proved the existence of Nash equilibria for arbitrary multi-player games using a nonconstructive method: a fixed point theorem that doesn’t actually tell you how to find the equilibrium!

Given this, it’s not surprising that Nash equilibria can be hard to find. Last September a paper came out making this precise, in a strong way:

• Yakov Babichenko and Aviad Rubinstein, Communication complexity of approximate Nash equilibria.

The authors show there’s no guaranteed method for players to find even an approximate Nash equilibrium unless they tell each other almost everything about their preferences. This makes Nash equilibria prohibitively difficult to find when there are lots of players… in general. There are particular games where it’s not difficult, and that makes these games important: for example, if you’re trying to run a government well. (A laughable notion these days, but still one can hope.)

Klarreich’s article in Quanta gives a nice readable account of this work and also a more practical alternative to the concept of Nash equilibrium. It’s called a ‘correlated equilibrium’, and it was invented by the mathematician Robert Aumann in 1974. You can see an attempt to define it here:

• Wikipedia, Correlated equilibrium.

The precise mathematical definition near the start of this article is a pretty good example of how you shouldn’t explain something: it contains a big fat equation containing symbols not mentioned previously, and so on. By thinking about it for a while, I was able to fight my way through it. Someday I should improve it—and someday I should explain the idea here! But for now, I’ll just quote this passage, which roughly explains the idea in words:

The idea is that each player chooses their action according to their observation of the value of the same public signal. A strategy assigns an action to every possible observation a player can make. If no player would want to deviate from the recommended strategy (assuming the others don’t deviate), the distribution is called a correlated equilibrium.

According to Erica Klarreich it’s a useful notion. She even makes it sound revolutionary:

This might at first sound like an arcane construct, but in fact we use correlated equilibria all the time—whenever, for example, we let a coin toss decide whether we’ll go out for Chinese or Italian, or allow a traffic light to dictate which of us will go through an intersection first.

In [some] examples, each player knows exactly what advice the “mediator” is giving to the other player, and the mediator’s advice essentially helps the players coordinate which Nash equilibrium they will play. But when the players don’t know exactly what advice the others are getting—only how the different kinds of advice are correlated with each other—Aumann showed that the set of correlated equilibria can contain more than just combinations of Nash equilibria: it can include forms of play that aren’t Nash equilibria at all, but that sometimes result in a more positive societal outcome than any of the Nash equilibria. For example, in some games in which cooperating would yield a higher total payoff for the players than acting selfishly, the mediator can sometimes beguile players into cooperating by withholding just what advice she’s giving the other players. This finding, Myerson said, was “a bolt from the blue.”

(Roger Myerson is an economics professor at the University of Chicago who won a Nobel prize for his work on game theory.)

And even though a mediator can give many different kinds of advice, the set of correlated equilibria of a game, which is represented by a collection of linear equations and inequalities, is more mathematically tractable than the set of Nash equilibria. “This other way of thinking about it, the mathematics is so much more beautiful,” Myerson said.

While Myerson has called Nash’s vision of game theory “one of the outstanding intellectual advances of the 20th century,” he sees correlated equilibrium as perhaps an even more natural concept than Nash equilibrium. He has opined on numerous occasions that “if there is intelligent life on other planets, in a majority of them they would have discovered correlated equilibrium before Nash equilibrium.”

When it comes to repeated rounds of play, many of the most natural ways that players could choose to adapt their strategies converge, in a particular sense, to correlated equilibria. Take, for example, “regret minimization” approaches, in which before each round, players increase the probability of using a given strategy if they regret not having played it more in the past. Regret minimization is a method “which does bear some resemblance to real life — paying attention to what’s worked well in the past, combined with occasionally experimenting a bit,” Roughgarden said.

(Tim Roughgarden is a theoretical computer scientist at Stanford University.)

For many regret-minimizing approaches, researchers have shown that play will rapidly converge to a correlated equilibrium in the following surprising sense: after maybe 100 rounds have been played, the game history will look essentially the same as if a mediator had been advising the players all along. It’s as if “the [correlating] device was somehow implicitly found, through the interaction,” said Constantinos Daskalakis, a theoretical computer scientist at the Massachusetts Institute of Technology.

As play continues, the players won’t necessarily stay at the same correlated equilibrium — after 1,000 rounds, for instance, they may have drifted to a new equilibrium, so that now their 1,000-game history looks as if it had been guided by a different mediator than before. The process is reminiscent of what happens in real life, Roughgarden said, as societal norms about which equilibrium should be played gradually evolve.

In the kinds of complex games for which Nash equilibrium is hard to reach, correlated equilibrium is “the natural leading contender” for a replacement solution concept, Nisan said.

As Klarreich hints, you can find correlated equilibria using a technique called linear programming. That was proved here, I think:

• Christos H. Papadimitriou and Tim Roughgarden, Computing correlated equilibria in multi-player games, J. ACM 55 (2008), 14:1-14:29.
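
Here is a minimal sketch of that linear program for the game of chicken, assuming scipy is available; the payoffs are standard textbook ones, not taken from the article. The variables are the probabilities the mediator assigns to joint recommendations, the inequality constraints say that no player gains by deviating from a recommendation, and the objective maximizes total expected payoff.

import numpy as np
from scipy.optimize import linprog

acts = [0, 1]                 # 0 = swerve, 1 = dare
u1 = np.array([[6, 2],        # row player's payoff u1[a, b]
               [7, 0]])
u2 = u1.T                     # symmetric game: column player's payoff u2[a, b]
pairs = [(a, b) for a in acts for b in acts]
idx = {ab: k for k, ab in enumerate(pairs)}

A_ub, b_ub = [], []
for a in acts:                # row player: deviating from recommendation a to a2 must not pay
    for a2 in acts:
        if a2 == a:
            continue
        row = np.zeros(len(pairs))
        for b in acts:
            row[idx[(a, b)]] = u1[a2, b] - u1[a, b]
        A_ub.append(row); b_ub.append(0.0)
for b in acts:                # column player: deviating from b to b2 must not pay
    for b2 in acts:
        if b2 == b:
            continue
        row = np.zeros(len(pairs))
        for a in acts:
            row[idx[(a, b)]] = u2[a, b2] - u2[a, b]
        A_ub.append(row); b_ub.append(0.0)

c = -np.array([u1[a, b] + u2[a, b] for (a, b) in pairs])   # maximize total expected payoff
res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
              A_eq=np.ones((1, len(pairs))), b_eq=[1.0], bounds=(0, None))
print(res.x.reshape(2, 2))    # the mediator's distribution over joint recommendations
print(-res.fun)               # total expected payoff at the optimum

For these payoffs the optimum should put probability 1/2 on both players swerving and 1/4 on each asymmetric outcome, for a total expected payoff of 10.5, which no Nash equilibrium of this game attains.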

Do you know something about correlated equilibria that I should know? If so, please tell me!


19 Jul 18:48

Immunology: Nervous crosstalk to make antibodies

by Hai Qi

Immunology: Nervous crosstalk to make antibodies

Nature 547, 7663 (2017). doi:10.1038/nature23097

Authors: Hai Qi

Immune cells called T cells help immune-system B cells mature to produce antibodies. This entails signalling between cells using the molecule dopamine — a surprising immunological role for this neurotransmitter. See Article p.318

19 Jul 18:46

The strange topology that is reshaping physics

by Davide Castelvecchi

The strange topology that is reshaping physics

Nature 547, 7663 (2017). http://www.nature.com/doifinder/10.1038/547272a

Author: Davide Castelvecchi

Topological effects might be hiding inside perfectly ordinary materials, waiting to reveal bizarre new particles or bolster quantum computing.

15 Jul 16:58

copacetic

Merriam-Webster's Word of the Day for July 15, 2017 is:

copacetic • \koh-puh-SET-ik\  • adjective

: very satisfactory

Examples:

"... if you're going to be traveling with us it just wouldn't look too copacetic for you to be carrying that ratty old bag." — Christopher Paul Curtis, Bud, Not Buddy, 1999

"In terms of living standards we're now back to where we started which while not making us entirely copacetic is at least better than not having recovered as yet." — Tim Worstall, Forbes, 8 Aug. 2016

Did you know?

Theories about the origin of copacetic abound, but the facts about the word’s history are scant: it appears to have arisen in African-American slang in the southern U.S., possibly as early as the 1880s, with earliest known evidence of it in print dating only to 1919. Beyond that, we have only speculation. One theory is that the term is descended from Hebrew kol be sedher (or kol b’seder or chol b’seder), meaning “everything is in order.” That theory is problematic for a number of reasons, among them that in order for a Hebrew expression to have been adopted into English at that time it would have passed through Yiddish, and there is no evidence of the phrase in Yiddish dictionaries. Other theories trace copacetic to Creole coupèstique (“able to be coped with”), Italian cappo sotto (literally “head under,” figuratively “okay”), or Chinook jargon copacete (“everything’s all right”), but no evidence to substantiate any of these has been found. Another theory credits the coining of the word to Bill "Bojangles" Robinson, who used the word frequently and believed himself to be the coiner. Anecdotal recollections of the word’s use, however, predate his lifetime.



15 Jul 16:25

Suggestions for good notation

by Richard Borcherds

I occasionally come across a new piece of notation so good that it makes life easier by giving a better way to look at something. Some examples:

  • Iverson introduced the notation [X] to mean 1 if X is true and 0 otherwise; so for example Σ_{1≤n<x} [n prime] is the number of primes less than x, and the unmemorable and confusing Kronecker delta function δ_n becomes [n=0]. (A similar convention is used in the C programming language; a small code illustration follows this list.)

  • The function taking x to x sin(x) can be denoted by x ↦ x sin(x). This has the same meaning as the lambda calculus notation λx.x sin(x) but seems easier to understand and use, and is less confusing than the usual convention of just writing x sin(x), which is ambiguous: it could also stand for a number.

  • I find calculations with Homs and ⊗ easier to follow if I write Hom(A,B) as A→B. Similarly, writing B^A for the set of functions from A to B is really confusing, and I find it much easier to write this set as A→B.

  • Conway's notation for orbifolds almost trivializes the classification of wallpaper groups.
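
A throwaway illustration of the first bullet, in Python rather than C: booleans count as 0 or 1, so a sum over a condition is exactly Iverson’s bracket.

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

x = 100
print(sum(is_prime(n) for n in range(1, x)))   # Σ_{1≤n<x} [n prime] = 25
print(int(0 == 0), int(3 == 0))                # the Kronecker delta [n=0] at n = 0 and n = 3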

Has anyone come across any more similar examples of good notation that should be better known? (Excluding standard well known examples such as commutative diagrams, Hindu-Arabic numerals, etc.)

05 Jul 15:03

Weevil

by Minnesotastan
"A red palm weevil appears to strike a pugilistic stance, with its unusual antennae raised on either side of its elongated head." Macro photography specialist Javier Rupérez photographed the weevil in in his hometown of Almáchar, Spain
One of the Pictures of the Day at The Telegraph.
30 Jun 21:26

Breaking Lorentz reciprocity to overcome the time-bandwidth limit in physics and engineering

by Tsakmakidis, K. L., Shen, L., Schulz, S. A., Zheng, X., Upham, J., Deng, X., Altug, H., Vakakis, A. F., Boyd, R. W.

A century-old tenet in physics and engineering asserts that any type of system, having bandwidth Δω, can interact with a wave over only a constrained time period Δt inversely proportional to the bandwidth (Δt·Δω ~ 2π). This law severely limits the generic capabilities of all types of resonant and wave-guiding systems in photonics, cavity quantum electrodynamics and optomechanics, acoustics, continuum mechanics, and atomic and optical physics but is thought to be completely fundamental, arising from basic Fourier reciprocity. We propose that this "fundamental" limit can be overcome in systems where Lorentz reciprocity is broken. As a system becomes more asymmetric in its transport properties, the degree to which the limit can be surpassed becomes greater. By way of example, we theoretically demonstrate how, in an astutely designed magnetized semiconductor heterostructure, the above limit can be exceeded by orders of magnitude by using realistic material parameters. Our findings revise prevailing paradigms for linear, time-invariant resonant systems, challenging the doctrine that high-quality resonances must invariably be narrowband and providing the possibility of developing devices with unprecedentedly high time-bandwidth performance.
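
A quick numerical illustration (my sketch, not from the paper) of the conventional limit the abstract refers to: a reciprocal resonance that rings down over a time tau has a Lorentzian line of width about 1/tau, so the product of interaction time and bandwidth stays of order unity however tau is chosen. The exact constant depends on how the two quantities are defined; only the inverse proportionality matters here.

import numpy as np

fs, T, f0 = 50.0, 2000.0, 5.0                 # sample rate, record length, carrier frequency (all made up)
t = np.arange(0, T, 1.0 / fs)

for tau in [1.0, 3.0, 10.0]:                  # ring-down times to compare
    s = np.exp(-t / tau) * np.cos(2 * np.pi * f0 * t)
    power = np.abs(np.fft.rfft(s)) ** 2
    freqs = np.fft.rfftfreq(len(s), 1.0 / fs)
    above = freqs[power >= power.max() / 2]   # frequencies above half maximum
    d_omega = 2 * np.pi * (above.max() - above.min())   # full width at half maximum, in angular frequency
    print(tau, d_omega * tau)                 # roughly constant (about 2), whatever tau is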