Shared posts

04 Jan 14:06

Gradient descent

by Qiaochu Yuan

Note: this is a repost of a Facebook status I wrote off the cuff about a year ago, lightly edited. As such it has a different style from my other posts, but I still wanted to put it somewhere where it’d be easier to find and share than Facebook. 

Gradient descent, in its simplest form, where you just subtract the gradient of your loss function J, is not dimensionally consistent: if the parameters you’re optimizing over have units of length, and the loss function is dimensionless, then the derivatives you’re subtracting have units of inverse length.

This observation can be used to reinvent the learning rate, which, for dimensional consistency, must have units of length squared. It also suggests that the learning rate ought to be set to something like L^2 for some kind of characteristic length scale L, which loosely speaking is the length at which the curvature of J starts to matter.
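
As a minimal illustration of the update being discussed (my own sketch, not from the original post; the quadratic example loss and the value of the learning rate are arbitrary choices):

```python
import numpy as np

def gradient_descent(grad, x0, learning_rate=1e-2, steps=100):
    """Plain gradient descent: x <- x - learning_rate * grad(x).

    If the parameters x carry units of length and the loss J is
    dimensionless, grad(x) has units of 1/length, so the learning
    rate must carry units of length^2 for the update to be
    dimensionally consistent.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - learning_rate * grad(x)
    return x

# Example: J(x) = ||x||^2 / 2, whose gradient is simply x.
x_min = gradient_descent(lambda x: x, x0=[3.0, -2.0])
```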

It might also make sense to give different parameters different units, which suggests furthermore that one might want a different learning rate for each parameter, or at least that one might want to partition the parameters into different subsets and choose different learning rates for each.

Going much further, from an abstract coordinate-free point of view the extra information you need to compute the gradient of a smooth function is a choice of (pseudo-)Riemannian metric on parameter space, which if you like is a gigantic hyperparameter you can try to optimize. Concretely this amounts to a version of preconditioned gradient descent where you allow yourself to multiply the gradient (in the coordinate-dependent sense) by a symmetric (invertible, ideally positive definite) matrix which is allowed to depend on the parameters. In the first paragraph this matrix was a constant scalar multiple of the identity, and in the third paragraph it was a constant diagonal matrix.

This is an extremely general form of gradient descent, general enough to be equivariant under arbitrary smooth change of coordinates: that is, if you do this form of gradient descent and then apply a diffeomorphism to parameter space, you are still doing this form of gradient descent, with a different metric. For example, if you pick the preconditioning matrix to be the inverse Hessian (in the usual sense, assuming it’s invertible), you recover Newton’s method. This corresponds to choosing the metric at each point to be given by the Hessian (in the usual sense), which is the choice that makes the Hessian (in the coordinate-free sense) equal to the identity. This is a precise version of “the length at which the curvature of J starts to matter” and in principle ameliorates the problem where gradient descent performs poorly in narrow valleys (regions where the Hessian (in the usual sense) is poorly conditioned), at least up to cubic and higher order effects.
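
A hedged sketch of the preconditioned update described above (again my own illustration: the poorly conditioned quadratic, and the use of an explicitly inverted dense Hessian, are toy choices; in practice one would solve a linear system instead):

```python
import numpy as np

def preconditioned_descent(grad, precond, x0, steps=50):
    """Gradient descent in which the gradient is multiplied by a
    symmetric positive-definite matrix P(x) before the update.
    P = learning_rate * I recovers plain gradient descent;
    P = inverse Hessian recovers Newton's method."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - precond(x) @ grad(x)
    return x

# Newton's method on a narrow-valley quadratic J(x) = x^T A x / 2.
A = np.diag([1.0, 100.0])                    # condition number 100
grad = lambda x: A @ x
inv_hessian = lambda x: np.linalg.inv(A)     # Hessian is constant here
x_min = preconditioned_descent(grad, inv_hessian, x0=[5.0, 5.0])
```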

In general it’s expensive to compute the inverse Hessian, so a more practical thing to do is to use a matrix which approximates it in some sense. And now we’re well on the way towards quasi-Newton methods.

13 Mar 06:56

Quantum information in the Posner model of quantum cognition. (arXiv:1711.04801v3 [quant-ph] UPDATED)

by Nicole Yunger Halpern, Elizabeth Crosson

Matthew Fisher recently postulated a mechanism by which quantum phenomena could influence cognition: Phosphorus nuclear spins may resist decoherence for long times, especially when in Posner molecules. The spins would serve as biological qubits. We imagine that Fisher postulates correctly. How adroitly could biological systems process quantum information (QI)? We establish a framework for answering. Additionally, we construct applications of biological qubits to quantum error correction, quantum communication, and quantum computation. First, we posit how the QI encoded by the spins transforms as Posner molecules form. The transformation points to a natural computational basis for qubits in Posner molecules. From the basis, we construct a quantum code that detects arbitrary single-qubit errors. Each molecule encodes one qutrit. Shifting from information storage to computation, we define the model of Posner quantum computation. To illustrate the model's quantum-communication ability, we show how it can teleport information incoherently: A state's weights are teleported. Dephasing results from the entangling operation's simulation of a coarse-grained Bell measurement. Whether Posner quantum computation is universal remains an open question. However, the model's operations can efficiently prepare a Posner state usable as a resource in universal measurement-based quantum computation. The state results from deforming the Affleck-Kennedy-Lieb-Tasaki (AKLT) state and is a projected entangled-pair state (PEPS). Finally, we show that entanglement can affect molecular-binding rates, boosting a binding probability from 33.6% to 100% in an example. This work opens the door for the QI-theoretic analysis of biological qubits and Posner molecules.

09 Mar 14:10

A self-contained, brief and complete formulation of Voevodsky’s univalence axiom

by Martin Escardo

I have often seen competent mathematicians and logicians, outside our circle, making technically erroneous comments about the univalence axiom, in conversations, in talks, and even in public material, in journals or the web.

For some time I was a bit upset about this. But maybe this is our fault, by often trying to explain univalence only imprecisely, mixing the explanation of the models with the explanation of the underlying Martin-Löf type theory, with neither of the two explained sufficiently precisely.

There are long, precise explanations such as the HoTT book, for example, or the various formalizations in Coq, Agda and Lean.

But perhaps we don’t have publicly available material with a self-contained, brief and complete formulation of univalence, so that interested mathematicians and logicians can try to contemplate the axiom in a fully defined form.

So here is an attempt at a self-contained, brief and complete formulation of Voevodsky’s Univalence Axiom, on the arXiv.
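
Schematically, and only as my own compressed summary of the standard statement (the point of the arXiv note is precisely to spell out from scratch the definitions of equivalence and isEquiv used below): for a universe 𝒰 there is a canonical map from identifications to equivalences, defined by path induction, and univalence asserts that this map is itself an equivalence:

```latex
\mathsf{idtoeq}_{X,Y} \;\colon\; (X =_{\mathcal{U}} Y) \longrightarrow (X \simeq Y),
\qquad
\mathsf{Univalence}(\mathcal{U}) \;:\equiv\; \prod_{X,Y:\mathcal{U}} \mathsf{isEquiv}\bigl(\mathsf{idtoeq}_{X,Y}\bigr).
```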

The arXiv submission includes, as an ancillary file, an Agda file with univalence defined from scratch, without the use of any library at all, to show how long a self-contained definition of the univalence type is. Perhaps somebody should add a Coq “version from scratch” of this.

There is also a web version UnivalenceFromScratch to try to make this as accessible as possible, with the text and the Agda code together.

The above notes explain the univalence axiom only. Regarding its role, we recommend Dan Grayson’s introduction to univalent foundations for mathematicians.

05 Mar 15:35

Sparse Identification of Nonlinear Dynamics for Rapid Model Recovery. (arXiv:1803.00894v2 [physics.data-an] UPDATED)

by Markus Quade, Markus Abel, J. Nathan Kutz, Steven L. Brunton

Big data has become a critically enabling component of emerging mathematical methods aimed at the automated discovery of dynamical systems, where first principles modeling may be intractable. However, in many engineering systems, abrupt changes must be rapidly characterized based on limited, incomplete, and noisy data. Many leading automated learning techniques rely on unrealistically large data sets and it is unclear how to leverage prior knowledge effectively to re-identify a model after an abrupt change. In this work, we propose a conceptual framework to recover parsimonious models of a system in response to abrupt changes in the low-data limit. First, the abrupt change is detected by comparing the estimated Lyapunov time of the data with the model prediction. Next, we apply the sparse identification of nonlinear dynamics (SINDy) regression to update a previously identified model with the fewest changes, either by addition, deletion, or modification of existing model terms. We demonstrate this sparse model recovery on several examples for abrupt system change detection in periodic and chaotic dynamical systems. Our examples show that sparse updates to a previously identified model perform better with less data, have lower runtime complexity, and are less sensitive to noise than identifying an entirely new model. The proposed abrupt-SINDy architecture provides a new paradigm for the rapid and efficient recovery of a system model after abrupt changes.
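
To give a flavour of the sparse regression at the core of SINDy, here is a minimal sketch of sequentially thresholded least squares (my own illustration rather than the authors' abrupt-SINDy code; the candidate library and threshold are arbitrary):

```python
import numpy as np

def stlsq(Theta, dXdt, threshold=0.1, iters=10):
    """Fit dX/dt = Theta @ Xi by least squares, repeatedly zeroing
    coefficients smaller than `threshold` to enforce a sparse model."""
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for k in range(dXdt.shape[1]):
            big = ~small[:, k]
            if big.any():
                Xi[big, k] = np.linalg.lstsq(Theta[:, big], dXdt[:, k], rcond=None)[0]
    return Xi

def library(X):
    """Candidate terms for 2D data: [1, x, y, x^2, xy, y^2]."""
    x, y = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
```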

03 Mar 19:51

Nonstandard Integers as Complex Numbers

by John Baez

 

I just read something cool:

• Joel David Hamkins, Nonstandard models of arithmetic arise in the complex numbers, 3 March 2018.

Let me try to explain it in a simplified way. I think all cool math should be known more widely than it is. Getting this to happen requires a lot of explanations at different levels.

Here goes:

The Peano axioms are a nice set of axioms describing the natural numbers. But thanks to Gödel’s incompleteness theorem, these axioms can’t completely nail down the structure of the natural numbers. So, there are lots of different ‘models’ of Peano arithmetic.

These are often called ‘nonstandard’ models. If you take a model of Peano arithmetic—say, your favorite ‘standard’ model—you can get other models by throwing in extra natural numbers, larger than all the standard ones. These nonstandard models can be countable or uncountable. For more, try this:

Nonstandard models of arithmetic, Wikipedia.

Starting with any of these models you can define integers in the usual way (as differences of natural numbers), and then rational numbers (as ratios of integers). So, there are lots of nonstandard versions of the rational numbers. Any one of these will be a field: you can add, subtract, multiply and divide your nonstandard rationals, in ways that obey all the usual basic rules.

Now for the cool part: if your nonstandard model of the natural numbers is small enough, your field of nonstandard rational numbers can be found somewhere in the standard field of complex numbers!

In other words, your nonstandard rationals are a subfield of the usual complex numbers: a subset that’s closed under addition, subtraction, multiplication and division by things that aren’t zero.

This is counterintuitive at first, because we tend to think of nonstandard models of Peano arithmetic as spooky and elusive things, while we tend to think of the complex numbers as well-understood.

However, the field of complex numbers is actually very large, and it has room for many spooky and elusive things inside it. This is well-known to experts, and we’re just seeing more evidence of that.

I said that all this works if your nonstandard model of the natural numbers is small enough. But what is “small enough”? Just the obvious thing: your nonstandard model needs to have a cardinality smaller than that of the complex numbers. So if it’s countable, that’s definitely small enough.

This fact was recently noticed by Alfred Dolich at a pub after a logic seminar at the City University of New York. The proof is very easy if you know this result: any field of characteristic zero whose cardinality is less than or equal to that of the continuum is isomorphic to some subfield of the complex numbers. So, unsurprisingly, it turned out to have been repeatedly discovered before.

And the result I just mentioned follows from this: any two algebraically closed fields of characteristic zero that have the same uncountable cardinality must be isomorphic. So, say someone hands you a field F of characteristic zero whose cardinality is no bigger than that of the continuum. You can take its algebraic closure by throwing in roots of all polynomials, and its cardinality won’t get bigger. Then you can throw in even more elements, if necessary, and take the algebraic closure again, to get an algebraically closed field whose cardinality is that of the continuum. The resulting field must be isomorphic to the complex numbers. So, F is isomorphic to a subfield of the complex numbers.

To round this off, I should say a bit about why nonstandard models of Peano arithmetic are considered spooky and elusive. Tennenbaum’s theorem says that for any countable non-standard model of Peano arithmetic there is no way to code the elements of the model as standard natural numbers such that either the addition or multiplication operation of the model is a computable function on the codes.

We can, however, say some things about what these countable nonstandard models are like as ordered sets. They can be linearly ordered in a way compatible with addition and multiplication. And then they consist of one copy of the standard natural numbers, followed by a lot of copies of the standard integers, which are packed together in a dense way: that is, for any two distinct copies, there’s another distinct copy between them. Furthermore, for any of these copies, there’s another copy before it, and another after it.

I should also say what’s good about algebraically closed fields of characteristic zero: they are uncountably categorical. In other words, any two models of the axioms for an algebraically closed field of characteristic zero with the same uncountable cardinality must be isomorphic. (This is not true for the countable models: it’s easy to find different countable algebraically closed fields of characteristic zero. They are not spooky and elusive.)

So, any algebraically closed field of characteristic zero whose cardinality is that of the continuum is isomorphic to the complex numbers!

For more on the logic of complex numbers, written at about the same level as this, try this post of mine:

The logic of real and complex numbers, Azimuth 8 September 2014.

02 Mar 15:30

Screening of Fungi for the Application of Self-Healing Concrete. (arXiv:1711.10386v6 [q-bio.OT] UPDATED)

by Rakenth R. Menon, Jing Luo, Xiaobo Chen, Preeth Sivakumar, Zhiyong Liu, Guangwen Zhou, Ning Zhang, Congrui Jin

Concrete is susceptible to cracking owing to drying shrinkage, freeze-thaw cycles, delayed ettringite formation, reinforcement corrosion, creep and fatigue, etc. Since maintenance and inspection of concrete infrastructure require onerous labor and high costs, self-healing of harmful cracks without human interference or intervention would be highly attractive. The goal of this study is to explore a new self-healing approach in which fungi are used as a self-healing agent to promote calcium carbonate precipitation to fill the cracks in concrete structures. Recent research results in the field of geomycology have shown that many species of fungi could play an important role in promoting calcium carbonate mineralization, but their application in self-healing concrete has not been reported. Therefore, a screening of different species of fungi has been conducted in this study. Our results showed that, despite the drastic pH increase owing to the leaching of calcium hydroxide from concrete, Aspergillus nidulans (MAD1445), a pH regulatory mutant, could grow on concrete plates and promote calcium carbonate precipitation.

27 Feb 21:30

Also, Pot

by noreply@blogger.com (Atrios)

Waaahhhh...young people don't vote, and they especially don't vote in midterms. Fucking young people.

Be the party that wants to legalize the weed while you can still get some electoral payoff out of it, idiots.

24 Feb 20:37

Communication Melting in Graphs and Complex Networks. (arXiv:1802.07809v1 [physics.soc-ph])

by Najlaa Alalwan, Alex Arenas, Ernesto Estrada

Complex networks are the representative graphs of interactions in many complex systems. Usually, these interactions are abstractions of the communication/diffusion channels between the units of the system. Real complex networks, e.g. traffic networks, reveal different operation phases governed by the dynamical stress of the system. Here we show how communicability, a topological descriptor that reveals the efficiency of network functionality in terms of these diffusive paths, can be used to reveal such transitions. By considering a vibrational model of nodes and edges in a graph/network at a given temperature (stress), we show that the communicability function plays the role of the thermal Green's function of a network of harmonic oscillators. We then prove analytically the existence of a universal phase transition in the communicability structure of every simple graph. This transition resembles the melting process occurring in solids. For instance, regular-like graphs, which resemble crystals, melt at lower temperatures and display a sharper transition between connected and disconnected structures than random spatial graphs, which resemble amorphous solids. Finally, we study computationally this graph melting process in some real-world networks and observe that the rate of melting changes either exponentially or as a power law with the inverse temperature. At the local level, we discover that the main driver of node melting is the eigenvector centrality of the corresponding node, particularly when the critical value of the inverse temperature approaches zero. These universal results shed light on many dynamical, diffusive-like processes on networks that exhibit transitions, such as traffic jams, communication loss, or failure cascades.
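
For concreteness, the central quantity can be computed as follows (my own sketch using the standard temperature-dependent communicability G(β) = exp(βA); the paper's melting criterion, based on how the structure of this matrix changes with temperature, is more involved than this):

```python
import numpy as np
from scipy.linalg import expm

def communicability(A, beta=1.0):
    """Communicability matrix G(beta) = exp(beta * A) for adjacency
    matrix A at inverse temperature beta."""
    return expm(beta * np.asarray(A, dtype=float))

# Toy example: a 4-cycle, scanned over decreasing inverse temperature.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
for beta in (2.0, 1.0, 0.5, 0.1):
    G = communicability(A, beta)
    print(beta, G[0, 2])   # communicability between opposite nodes
```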

24 Feb 01:24

An innate circuit for object craving

by Dayu Lin

An innate circuit for object craving, Published online: 23 February 2018; doi:10.1038/s41593-018-0087-3

Using a series of functional manipulation and in vivo recording tools, Park et al. identify a pathway from medial preoptic CaMKIIα-expressing neurons to the ventral periaqueductal gray that mediates object craving and prey hunting.

20 Feb 20:34

Metastable state en route to traveling-wave synchronization state

by Jinha Park and B. Kahng

The Kuramoto model with mixed signs of couplings is known to produce a traveling-wave synchronized state. Here, we consider an abrupt synchronization transition from the incoherent state to the traveling-wave state through a long-lasting metastable state with large fluctuations. Our explanation of t...


[Phys. Rev. E 97, 020203(R)] Published Tue Feb 20, 2018

19 Feb 23:20

Minimal Algorithmic Information Loss Methods for Dimension Reduction, Feature Selection and Network Sparsification. (arXiv:1802.05843v10 [cs.DS] UPDATED)

by Hector Zenil, Narsis A. Kiani, Jesper Tegnér

We introduce a family of unsupervised, domain-free, and asymptotically optimal model-independent algorithms based on the principles of algorithmic probability and information theory, designed to minimize the loss of algorithmic information, thereby avoiding certain deceiving phenomena and distortions known to occur in statistics and entropy-based approaches. Our methods include a lossless-compression-based lossy compression algorithm that can select and coarse-grain data in an algorithmic-complexity fashion (without the use of popular compression algorithms) by collapsing regions that may procedurally be regenerated from a computable candidate model. We show that the method can perform dimension reduction, denoising, feature selection, and network sparsification, while preserving the properties of the objects. As a validation case, we demonstrate the methods on image segmentation against popular methods such as PCA and random selection, and also demonstrate that the method preserves the graph-theoretic indices measured on a well-known set of synthetic and real-world networks of very different nature, ranging from degree distribution and clustering coefficient to edge betweenness and degree and eigenvector centralities, achieving results equal to or significantly better than other data-reduction methods and the leading network sparsification methods (Spectral, Transitive).

16 Feb 20:17

CAMERA records cell action with new CRISPR tricks

by Cohen, J.

12 Feb 20:28

Viewpoint: Quantum Computer Simulates Excited States of Molecule

by Sukin Sim, Jonathan Romero, Peter D. Johnson, and Alán Aspuru-Guzik
Nosimpler

File with "Neural network used to understand brain".

Excited-state energies of the hydrogen molecule have been calculated using a two-qubit quantum computer.


[Physics 11, 14] Published Mon Feb 12, 2018

10 Feb 21:21

Scaling up Echo-State Networks with multiple light scattering. (arXiv:1609.05204v3 [cs.ET] UPDATED)

by Jonathan Dong, Sylvain Gigan, Florent Krzakala, Gilles Wainrib

Echo-State Networks and Reservoir Computing have been studied for more than a decade. They provide a simpler yet powerful alternative to Recurrent Neural Networks: every internal weight is fixed and only the last linear layer is trained. They involve many multiplications by dense random matrices, so very large networks are difficult to obtain, as the complexity scales quadratically in both time and memory. Here, we present a novel optical implementation of Echo-State Networks using light-scattering media and a Digital Micromirror Device. As a proof of concept, binary networks have been successfully trained to predict the chaotic Mackey-Glass time series. This new method is fast, power efficient and easily scalable to very large networks.
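
For comparison, a bare-bones digital echo-state network looks roughly like this (my own sketch of the standard setup that the optical implementation replaces; reservoir size, spectral radius, ridge parameter and the toy input series are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in = 200, 1

# Fixed random input and reservoir weights (never trained).
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # rescale to spectral radius 0.9

def run_reservoir(u):
    """Collect reservoir states driven by an input sequence u of shape [T, n_in]."""
    x, states = np.zeros(n_res), []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ u_t)
        states.append(x.copy())
    return np.array(states)

# Train only the linear readout (ridge regression) to predict the next value.
u = np.sin(np.linspace(0, 60, 600))[:, None]          # toy input series
X, Y = run_reservoir(u[:-1]), u[1:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ Y)
prediction = X @ W_out                                # one-step-ahead predictions
```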

10 Feb 16:50

Earth-like and Tardigrade survey of exoplanets. (arXiv:1802.02714v2 [physics.pop-ph] UPDATED)

by MadhuKashyap Jagadeesh, Milena Roszkowska, Lukasz Kaczmarek

Finding life on other worlds is a fascinating area of astrobiology and planetary sciences. Presently, over 3500 exoplanets, representing a very wide range of physical and chemical environments, are known. Tardigrades (water bears) are microscopic invertebrates that inhabit almost all terrestrial, freshwater and marine habitats, from the highest mountains to the deepest oceans. Thanks to their ability to live in a state of cryptobiosis, which is known to be an adaptation to unpredictably fluctuating environmental conditions, these organisms are able to survive when conditions are not suitable for active life; consequently, tardigrades are known as the toughest animals on Earth. In their cryptobiotic state, they can survive extreme conditions, such as temperatures below -250 °C and up to 150 °C, high doses of ultraviolet and ionising radiation, up to 30 years without liquid water, low and high atmospheric pressure, and exposure to many toxic chemicals. Active tardigrades are also resistant to a wide range of unfavourable environmental conditions, which makes them an excellent model organism for astrobiological studies. In our study, we have established a metric tool for distinguishing the potential survivability of active and cryptobiotic tardigrades on rocky-water and water-gas planets in our solar system and exoplanets, taking into consideration the geometrical means of surface temperature and surface pressure of the considered planets. The Active Tardigrade Index (ATI) and Cryptobiotic Tardigrade Index (CTI) are two metric indices with minimum value 0 (= tardigrades cannot survive) and maximum 1 (= tardigrades will survive in their respective state). Values between 0 and 1 indicate a percentage chance of the active or cryptobiotic tardigrades surviving on a given exoplanet.

08 Feb 22:15

A Categorical Semantics for Causal Structure

by John Baez

 

The school for Applied Category Theory 2018 is up and running! Students are blogging about papers! The first blog article is about a diagrammatic method for studying causality:

• Joseph Moeller and Dmitry Vagner, A categorical semantics for causal structure, The n-Category Café, 22 January 2018.

Make sure to read the whole blog conversation, since it helps a lot. People were confused about some things at first.

Joseph Moeller is a grad student at U.C. Riverside working with me and a company called Metron Scientific Solutions on “network models”—a framework for designing networks, which the Coast Guard is already interested in using for their search and rescue missions:

• John C. Baez, John Foley, Joseph Moeller and Blake S. Pollard, Network Models. (Blog article here.)

Dmitry Vagner is a grad student at Duke who is very enthusiastic about category theory. Dmitry has worked with David Spivak and Eugene Lerman on open dynamical systems and the operad of wiring diagrams.

It’s great to see these students digging into the wonderful world of category theory and its applications.

To discuss this stuff, please go to The n-Category Café.

07 Feb 19:29

Dynamics of Wealth Inequality. (arXiv:1802.01991v1 [physics.soc-ph])

by Zdzislaw Burda, Pawel Wojcieszak, Konrad Zuchniak

We discuss a statistical model of the evolution of wealth distribution, growing inequality and economic stratification on a macro-economic scale. Evolution is driven by multiplicative stochastic fluctuations governed by the law of proportionate growth as well as by direct interactions between individuals and interactions between the private and public sectors. We concentrate on the effect of accumulated advantage and study the role of taxation and redistribution in the process of reducing wealth inequality. We show in particular that a capital tax may significantly reduce wealth inequality and lead to a stabilisation of the system. The model predicts a decay of the population into economic classes with quantised wealth levels. The effect of economic stratification manifests in a characteristic multimodal form of the histogram of wealth distribution. We observe such a multimodal pattern in empirical data on wealth distribution for urban areas of China.
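
A toy simulation in the spirit of proportionate growth plus a flat capital tax (my own illustration; the paper's model also includes direct interactions between individuals and with the public sector, which this omits):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(n_agents=10_000, steps=1_000, sigma=0.1, tax_rate=0.02):
    """Multiplicative random growth with a capital tax redistributed equally."""
    w = np.ones(n_agents)
    for _ in range(steps):
        w *= np.exp(sigma * rng.normal(size=n_agents))   # proportionate growth
        tax = tax_rate * w
        w += tax.mean() - tax                            # flat redistribution
    return w

def gini(w):
    """Gini coefficient as a crude inequality summary."""
    w = np.sort(w)
    n = w.size
    return (2 * np.arange(1, n + 1) - n - 1) @ w / (n * w.sum())

print(gini(simulate(tax_rate=0.0)), gini(simulate(tax_rate=0.05)))
```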

02 Feb 22:19

Equivalence of restricted Boltzmann machines and tensor network states

by Jing Chen, Song Cheng, Haidong Xie, Lei Wang, and Tao Xiang

The restricted Boltzmann machine is a fundamental building block of deep learning. The authors demonstrate its equivalence with tensor network states with explicit mappings, thus drawing a constructive connection between deep learning and quantum physics. On one side, deep learning approaches can be used to study novel states of matter. In return, investigations of tensor network states and their expressibility can be adapted to guide neural network architecture design.


[Phys. Rev. B 97, 085104] Published Fri Feb 02, 2018

29 Jan 18:39

Isomorphism between Maximum Lyapunov Exponent and Shannon's Channel Capacity. (arXiv:1706.08638v5 [cond-mat.stat-mech] UPDATED)

by Gerald Friedland, Alfredo Metere
Nosimpler

wut

We demonstrate that the Maximum Lyapunov Exponent for computable dynamical systems is isomorphic to the maximum capacity of a noiseless, memoryless channel in a Shannon communication model. The isomorphism allows the understanding of Lyapunov exponents in the simplified terms of Information Theory, rather than the traditional definitions in Chaos Theory. This work provides a bridge between fundamental physics and Information Theory to the mutual benefit of both fields. The result suggests, among other implications, that machine learning and other information theory methods can be successfully employed at the core of physics simulations.
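
For concreteness, here is the dynamical-systems side of that correspondence on a textbook example (my own sketch using the logistic map; the paper's isomorphism with channel capacity is not reproduced here):

```python
import numpy as np

def logistic_mle(r=4.0, x0=0.3, n=100_000, burn=1_000):
    """Estimate the maximum Lyapunov exponent of x -> r*x*(1-x) as the
    orbit average of log|f'(x)| = log|r*(1 - 2x)|."""
    x, total = x0, 0.0
    for i in range(n + burn):
        if i >= burn:
            total += np.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return total / n

print(logistic_mle())   # ~ log 2 ≈ 0.693 for r = 4
```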

27 Jan 17:34

Police Union Privileges

by Alex Tabarrok

Earlier I wrote about how police unions around the country give to every officer dozens of “get out of jail” cards to give to friends, family, politicians, lawyers, judges and other connected people. The cards let police on the street know that the subject is to be given “professional courtesy” and they can be used to get out of speeding tickets and other infractions. Today, drawing on the Police Union Contracting Project, I discuss how union contracts and Law Officer “Bill of Rights” give police legal privileges that regular people don’t get.

In 50 cities and 13 states, for example, union contracts “restrict interrogations by limiting how long an officer can be interrogated, who can interrogate them, the types of questions that can be asked, and when an interrogation can take place.” In Virginia police officers have a right to at least a five-day delay before being interrogated. In Louisiana police officers have up to 30 days during which no questioning is allowed and they cannot be questioned for sustained periods of time or without breaks. In some cities, police officers can only be interrogated during work hours. Regular people do not get these privileges.

The key to a good interrogation is that the suspect doesn’t know what the interrogator knows so the suspect can be caught in a lie which unravels their story. Thus, the Florida Police Bill of Rights is stunning in what it allows police officers:

The law enforcement officer or correctional officer under investigation must be informed of the nature of the investigation before any interrogation begins, and he or she must be informed of the names of all complainants. All identifiable witnesses shall be interviewed, whenever possible, prior to the beginning of the investigative interview of the accused officer. The complaint, all witness statements, including all other existing subject officer statements, and all other existing evidence, including, but not limited to, incident reports, GPS locator information, and audio or video recordings relating to the incident under investigation, must be provided to each officer who is the subject of the complaint before the beginning of any investigative interview of that officer.

By knowing what the interrogators know, the suspect can craft a story that fits the known facts–and the time privilege gives them the opportunity to do so.

Moreover, how do you think complainants feel knowing that the police officer they are complaining about “must be informed of the names of all complainants.” I respect and admire police officers but frankly I think this rule is dangerous. Would you come forward?

How effective would criminal interrogations be if the following rules held for ordinary citizens?

The law enforcement officer or correctional officer under interrogation may not be subjected to offensive language or be threatened with transfer, dismissal, or disciplinary action. A promise or reward may not be made as an inducement to answer any questions.

What does it say about our justice system that the police don’t want their own tactics used against them?

In the United States if you are arrested–even for a misdemeanor or minor crime, even if the charges are dropped, even if you are found not guilty–you will likely be burdened with an arrest record that can increase the difficulty of getting a job, an occupational license, or housing. But even in the unlikely event that a police officer is officially reprimanded many states and cities require that such information is automatically erased after a year or two. The automatic erasure of complaints makes it difficult to identify problem officers or a pattern of abuse.

Louisiana’s Police Officer Bill of Rights is one of the most extreme. It states that police have the right to expunge any violation of criminal battery and assault and any violation of criminal laws involving an “obvious domestic abuse.” Truly this is hard to believe but here is the law (note that sections (2)(a) and (b) do not appear, as I read it, to be limited to anonymous or unsubstantiated complaints).

A law enforcement officer, upon written request, shall have any record of a formal complaint made against the officer for any violation of a municipal or parish ordinance or state criminal statute listed in Paragraph (2) of this Subsection involving domestic violence expunged from his personnel file, if the complaint was made anonymously to the police department and the charges are not substantiated within twelve months of the lodging of the complaint.
(2)(a) Any violation of a municipal or parish ordinance or state statute defining criminal battery and assault.
(b) Any violation of other municipal or parish ordinances or state statutes including criminal trespass, criminal damage to property, or disturbing the peace if the incident occurred at either the home of the victim or the officer or the violation was the result of an obvious domestic dispute.

In an excellent post on get out of jail free cards, Julian Sanchez writes:

…beyond being an affront to the ideal of the rule of law in the abstract, it seems plausible that these “get out of jail free” cards help to reinforce the sort of us-against-them mentality that alienates so many communities from their police forces. Police departments that want to demonstrate they’re serious about the principle of equality under the law shouldn’t be debating how many of these cards an average cop gets to hand out; they should be scrapping them entirely.

Equality under the law also requires that privileges and immunities extend to all citizens equally.

Hat tip: Tate Fegley.

24 Jan 07:43

Statebox: A Universal Language of Distributed Systems

by john

We’re getting a lot of great posts here this week, but I also want to point out this, by one of my grad students:

A brief teaser follows, in case you’re wondering what this is about.

The Azimuth Project’s research on networks and applied category theory has taken an interesting new turn. I always meant for it to do something useful, but I’m too theoretical to pull that off myself. Luckily there are plenty of other people with similar visions whose feet are a bit more firmly on the ground.

I first met the young Dutch hacktivist Jelle Herold at a meeting on network theory that I helped run in Torino. I saw him again at a Simons Institute meeting on compositionality in computer science. He was already talking about his new startup.

Now it’s here. It’s called Statebox. Among other things, it’s an ambitious attempt to combine categories, open games, dependent types, Petri nets, string diagrams, and blockchains into a universal language for distributed systems.

Herold is inviting academics to help. I want to. But I couldn’t go to the Croatian island of Zlarin at the drop of a hat during classes. Luckily, my grad student Christian Williams is fascinated by the idea of using category theory and blockchain technology to do something good for the world: that’s why he came to work with me! So, I sent him to the first Statebox summit as my deputy. Now he has reported back. A snippet:

Zlarin is a lovely place, but we haven’t gotten to the best part — the people. All who attended are brilliant, creative, and spirited. Everyone’s eyes had a unique spark to light. I don’t think I’ve ever met such a fascinating group in my life. The crew: Jelle, Anton, Emi Gheorghe, Fabrizio Genovese, Daniel van Dijk, Neil Ghani, Viktor Winschel, Philipp Zahn, Pawel Sobocinski, Jules Hedges, Andrew Polonsky, Robin Piedeleu, Alex Norta, Anthony di Franco, Florian Glatz, Fredrik Nordvall Forsberg. These innovators have provocative and complementary ideas in category theory, computer science, open game theory, functional programming, and the blockchain industry; and they came to share an important goal. These are people who work earnestly to better humanity, motivated by progress, not profit. Talking with them gave me hope, that there are enough intelligent, open-minded, and caring people to fix this mess of modern society. In our short time together, we connected — now, almost all continue to contribute and grow the endeavor.

Ah, the starry-eyed idealism of youth! I’m feeling a bit beaten down by the events of the last year, so it’s nice (though somewhat unnerving) to see someone who is not. Read the whole article for more details about this endeavor.

23 Jan 05:16

Statebox: A Universal Language of Distributed Systems

by John Baez

guest post by Christian Williams

A short time ago, on the Croatian island of Zlarin, there gathered a band of bold individuals—rebels of academia and industry, whose everyday thoughts and actions challenge the separations of the modern world. They journeyed from all over to learn of the grand endeavor of another open mind, an expert functional programmer and creative hacktivist with significant mathematical knowledge: Jelle |yell-uh| Herold.

The Dutch computer scientist has devoted his life to helping our species and our planet: from consulting in business process optimization to winning a Greenpeace hackathon, from updating Netherlands telecommunications to creating a website to determine ways for individuals to help heal the earth, Jelle has gained a comprehensive perspective of the interconnected era. Through a diverse and innovative career, he has garnered crucial insights into software design and network computation—most profoundly, he has realized that it is imperative that these immense forces of global change develop thoughtful, comprehensive systematization.

Jelle understood that initiating such a grand ambition requires a massive amount of work, and the cooperation of many individuals, fluent in different fields of mathematics and computer science. Enter the Zlarin meeting: after a decade of consideration, Jelle has now brought together proponents of categories, open games, dependent types, Petri nets, string diagrams, and blockchains toward a singular end: a universal language of distributed systems—Statebox.

Statebox is a programming language formed and guided by fundamental concepts and principles of theoretical mathematics and computer science. The aim is to develop the canonical process language for distributed systems, and thereby elucidate the way these should actually be designed. The idea invokes the deep connections of these subjects in a novel and essential way, to make code simple, transparent, and concrete. Category theory is both the heart and pulse of this endeavor; more than a theory, it is a way of thinking universally. We hope the project helps to demonstrate the importance of this perspective, and encourages others to join.

The language is designed to be self-optimizing, open, adaptive, terminating, error-cognizant, composable, and most distinctively—visual. Petri nets are the natural representation of decentralized computation and concurrency. By utilizing them as program models, the entire language is diagrammatic, and this allows one to inspect the flow of the process executed by the program. While most languages only compile into illegible machine code, Statebox compiles directly into diagrams, so that the user immediately sees and understands the concrete realization of the abstract design. We believe that this immanent connection between the “geometric” and “algebraic” aspects of computation is of great importance.

Compositionality is a rightfully popular contemporary term, indicating the preservation of type under composition of systems or processes. This is essential to the universality of the type, and it is intrinsic to categories, which underpin the Petri net. A pertinent example is that composition allows for a form of abstraction in which programs do not require complete specification. This is parametricity: a program becomes executable when the functions are substituted with valid terms. Every term has a type, and one cannot connect pieces of code that have incompatible inputs and outputs—the compiler would simply produce an error. The intent is to preserve a simple mathematical structure that imposes as little as possible, and still ensure rationality of code. We can then more easily and reliably write tools providing automatic proofs of termination and type-correctness. Many more aspects will be explained as we go along, and in more detail in future posts.

Statebox is more than a specific implementation. It is an evolving aspiration, expressing an ideal, a source of inspiration, signifying a movement. We fully recognize that we are at the dawn of a new era, and do not assume that the current presentation is the best way to fulfill this ideal—but it is vital that this kind of endeavor gains the hearts and minds of these communities. By learning to develop and design by pure theory, we make a crucial step toward universal systems and knowledge. Formalisms are biased, fragile, transient—thought is eternal.

Thank you for reading, and thank you to John Baez—|bi-ez|, some there were not aware—for allowing me to write this post. Azimuth and its readers represent what scientific progress can and should be; it is an honor to speak to you. My name is Christian Williams, and I have just begun my doctoral studies with Dr. Baez. He received the invitation from Jelle and could not attend, and was generous enough to let me substitute. Disclaimer: I am just a young student with big dreams, with insufficient knowledge to do justice to this huge topic. If you can forgive some innocent confidence and enthusiasm, I would like to paint a big picture, to explain why this project is important. I hope to delve deeper into the subject in future posts, and in general to celebrate and encourage the cognitive revolution of Applied Category Theory. (Thank you also to Anton and Fabrizio for providing some of this writing when I was not well; I really appreciate it.)

Statebox Summit, Zlarin 2017, was awesome. Wish you could’ve been there. Just a short swim in the Adriatic from the old city of Šibenik |shib-enic|, there lies the small, green island of Zlarin |zlah-rin|, with just a few hundred kind inhabitants. Jelle’s friend, and part of the Statebox team, Anton Livaja and his family graciously allowed us to stay in their houses. Our headquarters was a hotel, one of the few places open in the fall. We set up in the back dining room for talks and work, and for food and sunlight we came to the patio and were brought platters of wonderful, wholesome Croatian dishes. As we burned the midnight oil, we enjoyed local beer, and already made history—the first Bitcoin transaction of the island, with a progressive bartender, Vinko.

Zlarin is a lovely place, but we haven’t gotten to the best part—the people. All who attended are brilliant, creative, and spirited. Everyone’s eyes had a unique spark to light. I don’t think I’ve ever met such a fascinating group in my life. The crew: Jelle, Anton, Emi Gheorghe, Fabrizio Genovese, Daniel van Dijk, Neil Ghani, Viktor Winschel, Philipp Zahn, Pawel Sobocinski, Jules Hedges, Andrew Polonsky, Robin Piedeleu, Alex Norta, Anthony di Franco, Florian Glatz, Fredrik Nordvall Forsberg. These innovators have provocative and complementary ideas in category theory, computer science, open game theory, functional programming, and the blockchain industry; and they came to share an important goal. These are people who work earnestly to better humanity, motivated by progress, not profit. Talking with them gave me hope, that there are enough intelligent, open-minded, and caring people to fix this mess of modern society. In our short time together, we connected—now, almost all continue to contribute and grow the endeavor.

Why is society a mess? The present human condition is absurd. We are in a cognitive renaissance, yet our world is in peril. We need to realize a deeper harmony of theory and practice—we need ideas that dare to dream big, that draw on the vast wealth of contemporary thought to guide and unite subjects in one mission. The way of the world is only a reflection of how we choose to think, and for more than a century we have delved endlessly into thought itself. If we truly learn from our thought, knowledge and application become imminently interrelated, not increasingly separate. It is imperative that we abandon preconception, pretense and prejudice, and ask with naive sincerity: “How should things be, really, and how can we make it happen?”

This pertains more generally to the irresponsibly ad hoc nature of society—we find ourselves entrenched in inadequate systems. Food, energy, medicine, finance, communications, media, governance, technology—our deepening dependence on centralization is our greatest vulnerability. Programming practice is the perfect example of the gradual failure of systems when their design is left to wander in abstraction. As business requirements evolved, technological solutions were created haphazardly, the priority being immediate return over comprehensive methodology, which resulted in ‘duct-taped’ systems, such as the Windows OS. Our entire world now depends on unsystematic software, giving rise to so much costly disorganization, miscommunication, and worse, bureaucracy. Statebox aims to close the gap between the misguided formalisms which came out of this type of degeneration, and design a language which corresponds naturally to essential mathematical concepts—to create systems which are rational, principled, universal. To explain why Statebox represents to us such an important ideal, we must first consider its closest relative, the elephant in the technological room: blockchain.

Often the best ideas are remarkably simple—in 2008, an unknown person under the alias Satoshi Nakamoto published the whitepaper Bitcoin: A Peer-to-Peer Electronic Cash System. In just a few pages, a protocol was proposed which underpins a new kind of computational network, called a blockchain, in which interactions are immediate, transparent, and permanent. This is a personal interpretation—the paper focuses on the application given in its title. In the original financial context, immediacy is one’s ability to directly transact with anyone, without intermediaries, such as banks; transparency is one’s right to complete knowledge of the economy in which one participates, meaning that each node owns a copy of the full history of the network; permanence is the irrevocability of one’s transactions. These core aspects are made possible by an elegant use of cryptography and game theory, which essentially removes the need for trusted third parties in the authorization, verification, and documentation of transactions. Per word, it’s almost peerless in modern influence; the short and sweet read is recommended.

The point of this simplistic explanation is that blockchain is about more than economics. The transaction could be any cooperation, the value could be any social good—when seen as a source of consensus, the blockchain protocol can be expanded to assimilate any data and code. After several years of competing cryptocurrencies, the importance of this deeper idea was gradually realized. There arose specialized tools to serve essential purposes in some broader system, and only recently have people dared to conceive of what this latter could be. In 2014, a wunderkind named Vitalik Buterin created Ethereum, a fully programmable blockchain. Solidity is a Turing-complete language of smart contracts, autonomous programs which enable interactions and enact rules on the network. With this framework, one can not only transact with others, but implement any kind of process; one can build currencies, websites, or organizations—decentralized applications, constructed with smart contracts, could be just about anything.

There is understandably great confidence and excitement for these ventures, and many are receiving massive public investment. Seriously, the numbers are staggering—but most of it is pure hype. There is talk of the first global computer, the internet of value, a single shared source of truth, and other speculative descriptions. But compared to the ambition, the actual theory is woefully underdeveloped. So far, implementations make almost no use of the powerful ideas of mathematics. There are still basic flaws in blockchain itself, the foundation of almost all decentralized technology. For example, the two viable candidates for transaction verification are called Proof of Work and Proof of Stake: the former requires unsustainable consumption of resources, namely hardware and electricity, and the latter is susceptible to centralization. Scalability is a major problem, thus also cost and speed of transactions. A major Ethereum dApp, The DAO (Decentralized Autonomous Organization), was hacked.

These statements are absolutely not to disregard all of the great work of this community; it is primarily rhetoric to distinguish the high ideals of Statebox, and I have neither the eloquence to make the point diplomatically nor nearly the knowledge to give a real account of this huge endeavor. We now return to the rhetoric.

What seems to be lost in the commotion is the simple recognition that we do not yet really know what we should make, nor how to do so. The whole idea is simply too big—the space of possibility is almost completely unknown, because this innovation can open every aspect of society to reform. But as usual, people try to ignore their ignorance, imagining it will disappear, and millions clamor about things we do not yet understand. Most involved are seeing decentralization as an exciting business venture, rather than our best hope to change the way of this broken world; they want to cash in on another technological wave. Of the relatively few idealists, most still retain the assumptions and limitations of the blockchain.

For all this talk, there is little discussion of how to even work toward the ideal abstract design. Most mathematics associated to blockchain is statistical analysis of consensus, while we’re sitting on a mountain of powerful categorical knowledge of systems. At the summit, Prof. Neil Ghani said “it’s like we’re on the Moon, talking about going to Mars, while everyone back on Earth still doesn’t even have fire.” We have more than enough conceptual technology to begin developing an ideal and comprehensive system, if the right minds come together. Theory guides practice, practice motivates theory—the potential is immense.

Fortunately, there are those who have this big picture in mind. Long before the blockchain craze, Jelle saw the fundamental importance of both distributed systems and the need for academic-industrial symbiosis. In the mid-2000’s, he used Petri nets to create process tools for businesses. Employees could design and implement any kind of abstract workflow to more effectively communicate and produce. Jelle would provide consultation to optimize these processes, and integrate them into their existing infrastructure—as it executed, it would generate tasks, emails, forms and send them to designated individuals to be completed for the next iteration. Many institutions would have to shell out millions of dollars to IBM or Fujitsu for this kind of software, and his was more flexible and intuitive. This left a strong impression on Jelle, regarding the power of Petri nets and the impact of deliberate design.

Many experiences like this gradually instilled in Jelle a conviction to expand his knowledge and begin planning bold changes to the world of programming. He attended mathematics conferences, and would discuss with theorists from many relevant subjects. On the island, he told me that it was actually one of Baez’s talks about networks which finally inspired him to go for this huge idea. By sincerely and openly reaching out to the whole community, Jelle made many valuable connections. He invited these thinkers to share his vision—theorists from all over Europe, and some from overseas, gathered in Croatia to learn and begin to develop this project—and it was a great success.

By now you may be thinking, alright kid spill the beans already. Here they are, right into your brain—well, most will be in the next post, but we should at least have a quick overview of some of the main ideas not already discussed.

The notion of open system complements compositionality. The great difference between closure and openness, in society as well as theory, was a central theme in many of our conversations during the summit. Although we try to isolate and suspend life and cognition in abstraction, the real, concrete truth is what flows through these ethereal forms. Every system in Statebox is implicitly open, and this impels design to idealize the inner and outer connections of processes. Open systems are central to the Baez Network Theory research team. There are several ways to categorically formalize open systems; the best are still being developed, but the first main example can be found in The Algebra of Open and Interconnected Systems by Brendan Fong, an early member of the team.

Monoidal categories, as this blog knows well, represent systems with both series and parallel processes. One of the great challenges of this new era of interconnection is distributed computation—getting computers to work together as a supercomputer, and monoidal categories are essential to this. Here, objects are data types, and morphisms are computations, while composition is serial and tensor is parallel. As Dr. Baez has demonstrated with years of great original research, monoidal categories are essential to understanding the complexity of the world. If we can connect our knowledge of natural systems to social systems, we can learn to integrate valuable principles—a key example being complete resource cognizance.
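
In plain programming terms, the serial/parallel distinction looks like this (a deliberately trivial sketch of my own; in the categorical setting these would be typed morphisms in a monoidal category rather than untyped Python functions):

```python
def compose(f, g):
    """Serial composition: feed the output of f into g (categorical g ∘ f)."""
    return lambda x: g(f(x))

def tensor(f, g):
    """Parallel composition: run f and g side by side on independent inputs
    (the monoidal tensor of morphisms)."""
    return lambda pair: (f(pair[0]), g(pair[1]))

double = lambda x: 2 * x
negate = lambda x: -x
print(compose(double, negate)(3))       # -6
print(tensor(double, negate)((3, 4)))   # (6, -4)
```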

Petri nets are presentations of free strict symmetric monoidal categories, and as such they are ideal models of “normal” computation, i.e. associative, unital, and commutative. Open Petri nets are the workhorses of Statebox. They are the morphisms of a category which is itself monoidal—and via openness it is even richer and more versatile. Most importantly it is compact closed, which introduces a simple but crucial duality into computation—input-output interchange—which is impossible in conventional cartesian closed computation, and actually brings the paradigm closer to quantum computation.

Petri nets represent processes in an intuitive, consistent, and decentralized way. These will be multi-layered via the notion of operad and a resourceful use of Petri net tokens, representing the interacting levels of a system. Compositionality makes exploring their state space much easier: the state space of a big process can be constructed from those of smaller ones, a technique that more often than not avoids state space explosion, a long-standing problem in Petri net analysis. The correspondence between open Petri nets and a logical calculus, called place/transition calculus, allows the user to perform queries on the Petri net, and a revolutionary technique called information-gain computing greatly reduces response time.
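
To make the token game concrete, here is a tiny ordinary place/transition net in Python (my own illustration; Statebox's open, categorical Petri nets carry far more structure than this):

```python
from collections import Counter

class PetriNet:
    """Places hold tokens; a transition consumes tokens from its input
    places and produces tokens on its output places."""
    def __init__(self, transitions):
        # transitions: name -> (list of input places, list of output places)
        self.transitions = {name: (Counter(ins), Counter(outs))
                            for name, (ins, outs) in transitions.items()}

    def enabled(self, marking, name):
        ins, _ = self.transitions[name]
        return all(marking[p] >= k for p, k in ins.items())

    def fire(self, marking, name):
        ins, outs = self.transitions[name]
        assert self.enabled(marking, name), f"{name} is not enabled"
        return Counter(marking) - ins + outs

# A toy workflow: reserve an item, then either ship it or cancel.
net = PetriNet({
    "reserve": (["stock", "order"], ["reserved"]),
    "ship":    (["reserved"], ["shipped"]),
    "cancel":  (["reserved"], ["stock"]),
})
m = Counter({"stock": 3, "order": 1})
m = net.fire(m, "reserve")
m = net.fire(m, "ship")
print(m)   # Counter({'stock': 2, 'shipped': 1})
```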

Dependently typed functional programming is the exoskeleton of this beautiful beast; in particular, the underlying language is Idris. Dependent types arose out of both theoretical mathematics and computer science, and they are beginning to be recognized as very general, powerful, and natural in practice. Functional programming is a similarly pure and elegant paradigm for “open” computation. They are fascinating and inherently categorical, and deserve whole blog posts in the future.

Even economics has opened its mind to categories. Statebox is very fortunate to have several of these pioneers—open game theory is a categorical, compositional version of game theory, which allows the user to dynamically analyze and optimize code. Jules’ choice of the term “teleological category” is prescient; it is about more than just efficiency—it introduces the possibility of building principles into systems, by creating game-theoretical incentives which can guide people to cooperate for the greater good, and gradually lessen the influence of irrational, selfish priorities.

Categories are the language by which Petri nets, functional programming, and open games can communicate—and amazingly, all of these theories are unified in an elegant representation called string diagrams. These allow the user to forget the formalism, and reason purely in graphical terms. All the complex mathematics goes under the hood, and the user only needs to work with nodes and strings, which are guaranteed to be formally correct.

Category theory also models the data structures that are used by Statebox: Typedefs is a very lightweight—but also very expressive—data structure, that is at the very core of Statebox. It is based on initial F-algebras, and can be easily interpreted in a plethora of pre-existing solutions, enabling seamless integration with existing systems. One of the core features of Typedefs is that serialization is categorically internalized in the data structure, meaning that every operation involving types can receive a unique hash and be recorded on the blockchain public ledger. This is one of the many components that make Statebox fail-resistant: every process and event is accounted for on the public ledger, and the whole history of a process can be rolled back and analyzed thanks to the blockchain technology.

The Statebox team is currently working on a monograph that will neatly present how all the pertinent categorical theories work together in Statebox. This is a formidable task that will take months to complete, but will also be the cleanest way to understand how Statebox works, and which mathematical questions have still to be answered to obtain a working product. It will be a thorough document that also considers important aspects such as our guiding ethics.

The team members are devoted to creating something positive and different, explicitly and solely to better the world. The business paradigm is based on the principle that innovation should be open and collaborative, rather than competitive and exclusive. We want to share ideas and work with you. There are many blooming endeavors which share the ideals that have been described in this article, and we want them all to learn from each other and build off one another.

For example, Statebox contributor and visionary economist Viktor Winschel has a fantastic project called Oicos. The great proponent of applied category theory, David Spivak, has an exciting and impressive organization called Categorical Informatics. Mike Stay, a past student of Dr. Baez, has started a company called Pyrofex, which is developing categorical distributed computation. There are also somewhat related languages for blockchain, such as Simplicity, and innovative distributed systems such as Iota and RChain. Even Ethereum is beginning to utilize categories, with Casper. And of course there are research groups, such as Network Theory and Mathematically Structured Programming, as well as so many important papers, such as Algebraic Databases. This is just a slice of everything going on; as far as I know there is not yet a comprehensive account of all the great applied category theory and distributed innovations being developed. Inevitably these endeavors will follow the principle they share, and come together in a big way. Statebox is ready, willing, and able to help make this reality.

If you are interested in Statebox, you are welcomed with open arms. You can contact Jelle at jelle@statebox.io, Fabrizio at fabrizio@statebox.org, Emi at emi@statebox.io, Anton at anton@statebox.io; they can provide more information, connect you to the discussion, or anything else. There will be a second summit in 2018 in about six months, details to be determined. We hope to see you there. Future posts will keep you updated, and explain more of the theory and design of Statebox. Thank you very much for reading.

P.S. Found unexpected support in Šibenik! Great bar—once a reservoir.

20 Jan 21:42

Exploring Hidden Acoustic Spin Underwater. (arXiv:1801.05790v1 [physics.app-ph])

by Yang Long, Jie Ren, Hong Chen

Investigating wave propagation in fluids enables a variety of important applications in underwater communication, object detection and unmanned robot control. Conventionally, momentum and spin reveal fundamental physical properties of propagating waves. Yet most previous research has focused on the orbital angular momentum of acoustic waves, without considering the possibility of spin, because of their longitudinal nature. Here, we show that the underwater acoustic wave intrinsically possesses non-trivial spin angular momentum, associated with the special spin-orbit coupling relation for longitudinal waves. Furthermore, we demonstrate that this intrinsic spin, although unobservable for a plane wave, can be detected by four approaches: wave interference, Gaussian exponential decay forms, boundary evanescent waves, and waveguide modes. We further show that the strong spin-orbit coupling can be exploited to achieve unidirectional excitation and backscattering-immune transport. We hope the present results will improve the geometric and topological understanding of underwater acoustic waves and pave the way for spin-related underwater applications.

20 Jan 00:52

When you only have a few minutes to live...

by Minnesotastan

... your priorities change. The graph shows viewership at Pornhub at the time of the (false) nuclear bomb warning. Via Boing Boing.

18 Jan 23:26

Neuronal oscillations: unavoidable and useful?

by Wolf Singer

Abstract

Neuronal systems have a high propensity to engage in oscillatory activity because both the properties of individual neurons and canonical circuit motifs favour rhythmic activity. In addition, coupled oscillators can engage in a large variety of dynamical regimes, ranging from synchronization with different phase offsets to chaotic behaviour. Which regime prevails depends on differences between preferred oscillation frequencies, coupling strength and coupling delays. The ability of delay coupled oscillator networks to generate a rich repertoire of temporally structured activation sequences is exploited by central pattern generator networks for the control of movements. However, it is less clear whether temporal patterning of neuronal discharges also plays a role in cognitive processes. Here, it will be argued that the temporal patterning of neuronal discharges emerging from delay coupled oscillator networks plays a pivotal role in all instances in which selective relations have to be established between the responses of distributed assemblies of neurons. Examples are the dynamic formation of functional networks, the selective routing of activity in densely interconnected networks, the attention-dependent selection of sensory signals, the fast and context-dependent binding of responses for further joint processing in pattern recognition and the formation of associations by learning. Special consideration is given to arguments that challenge a functional role of oscillations and synchrony in cognition because of the volatile nature of these phenomena and recent evidence will be reviewed suggesting that this volatility is functionally advantageous.
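
A standard way to make the dependence on preferred frequencies, coupling strengths, and delays concrete is the Kuramoto model with transmission delays; this is a generic textbook model, not one taken from the review itself:

\dot{\theta}_i(t) = \omega_i + \sum_j K_{ij}\,\sin\!\big(\theta_j(t-\tau_{ij}) - \theta_i(t)\big),

where \omega_i is the preferred frequency of oscillator i, K_{ij} the coupling strength, and \tau_{ij} the coupling delay; varying these parameters moves such a network between synchrony with different phase offsets, clustered states, and irregular regimes.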

This review discusses the functional role of neuronal oscillations and synchrony. The cerebral cortex is considered as a delay coupled recurrent network whose nodes consist of feature selective oscillatory microcircuits. It is concluded that the emerging dynamics endow neuronal responses with the temporal structure required for the definition of semantic relations in the context of feature binding, the formation of functional networks and the establishment of engrams.

18 Jan 00:26

Retrotransposons

by John Baez

This article is very interesting:

• Ed Yong, Brain cells share information with virus-like capsules, Atlantic, January 12, 2018.

Your brain needs a protein called Arc. If you have trouble making this protein, you’ll have trouble forming new memories. The neuroscientist Jason Shepherd noticed something weird:

He saw that these Arc proteins assemble into hollow, spherical shells that look uncannily like viruses. “When we looked at them, we thought: What are these things?” says Shepherd. They reminded him of textbook pictures of HIV, and when he showed the images to HIV experts, they confirmed his suspicions. That, to put it bluntly, was a huge surprise. “Here was a brain gene that makes something that looks like a virus,” Shepherd says.

That’s not a coincidence. The team showed that Arc descends from an ancient group of genes called gypsy retrotransposons, which exist in the genomes of various animals, but can behave like their own independent entities. They can make new copies of themselves, and paste those duplicates elsewhere in their host genomes. At some point, some of these genes gained the ability to enclose themselves in a shell of proteins and leave their host cells entirely. That was the origin of retroviruses—the virus family that includes HIV.

It’s worth pointing out that gypsy is the name of a specific kind of retrotransposon. A retrotransposon is a gene that can make copies of itself by first transcribing itself from DNA into RNA and then converting itself back into DNA and inserting itself at other places in your chromosomes.

About 40% of your genome consists of retrotransposons! They seem to mainly be ‘selfish genes’, focused on their own self-reproduction. But some are also useful to you.

So, Arc genes are the evolutionary cousins of these viruses, which explains why they produce shells that look so similar. Specifically, Arc is closely related to a viral gene called gag, which retroviruses like HIV use to build the protein shells that enclose their genetic material. Other scientists had noticed this similarity before. In 2006, one team searched for human genes that look like gag, and they included Arc in their list of candidates. They never followed up on that hint, and “as neuroscientists, we never looked at the genomic papers so we didn’t find it until much later,” says Shepherd.

I love this because it confirms my feeling that viruses are deeply entangled with our evolutionary past. Computer viruses are just the latest phase of this story.

As if that wasn’t weird enough, other animals seem to have independently evolved their own versions of Arc. Fruit flies have Arc genes, and Shepherd’s colleague Cedric Feschotte showed that these descend from the same group of gypsy retrotransposons that gave rise to ours. But flies and back-boned animals co-opted these genes independently, in two separate events that took place millions of years apart. And yet, both events gave rise to similar genes that do similar things: Another team showed that the fly versions of Arc also send RNA between neurons in virus-like capsules. “It’s exciting to think that such a process can occur twice,” says Atma Ivancevic from the University of Adelaide.

This is part of a broader trend: Scientists have in recent years discovered several ways that animals have used the properties of virus-related genes to their evolutionary advantage. Gag moves genetic information between cells, so it’s perfect as the basis of a communication system. Viruses use another gene called env to merge with host cells and avoid the immune system. Those same properties are vital for the placenta—a mammalian organ that unites the tissues of mothers and babies. And sure enough, a gene called syncytin, which is essential for the creation of placentas, actually descends from env. Much of our biology turns out to be viral in nature.

Here’s something I wrote in 1998 when I was first getting interested in this business:

RNA reverse transcribing viruses

RNA reverse transcribing viruses are usually called retroviruses. They have a single-stranded RNA genome. They infect animals, and when they get inside the cell’s nucleus, they copy themselves into the DNA of the host cell using reverse transcriptase. In the process they often cause tumors, presumably by damaging the host’s DNA.

Retroviruses are important in genetic engineering because they raised for the first time the possibility that RNA could be transcribed into DNA, rather than the reverse. In fact, some of them are currently being deliberately used by scientists to add new genes to mammalian cells.

Retroviruses are also important because AIDS is caused by a retrovirus: the human immunodeficiency virus (HIV). This is part of why AIDS is so difficult to treat. Most usual ways of killing viruses have no effect on retroviruses when they are latent in the DNA of the host cell.

From an evolutionary viewpoint, retroviruses are fascinating because they blur the very distinction between host and parasite. Their genome often contains genetic information derived from the host DNA. And once they are integrated into the DNA of the host cell, they may take a long time to reemerge. In fact, so-called endogenous retroviruses can be passed down from generation to generation, indistinguishable from any other cellular gene, and evolving along with their hosts, perhaps even from species to species! It has been estimated that up to 1% of the human genome consists of endogenous retroviruses! Furthermore, not every endogenous retrovirus causes a noticeable disease. Some may even help their hosts.

It gets even spookier when we notice that once an endogenous retrovirus lost the genes that code for its protein coat, it would become indistinguishable from a long terminal repeat (LTR) retrotransposon—one of the many kinds of “junk DNA” cluttering up our chromosomes. Just how much of us is made of retroviruses? It’s hard to be sure.

For my whole article, go here:

Subcellular life forms.

It’s about the mysterious subcellular entities that stand near the blurry border between the living and the non-living—like viruses, viroids, plasmids, satellites, transposons and prions. I need to update it, since a lot of new stuff is being discovered!

Jason Shepherd’s new paper has a few other authors:

• Elissa D. Pastuzyn, Cameron E. Day, Rachel B. Kearns, Madeleine Kyrke-Smith, Andrew V. Taibi, John McCormick, Nathan Yoder, David M. Belnap, Simon Erlendsson, Dustin R. Morado, John A.G. Briggs, Cédric Feschotte and Jason D. Shepherd, The neuronal gene Arc encodes a repurposed retrotransposon gag protein that mediates intercellular RNA transfer, Cell 172 (2018), 275–288.

16 Jan 11:54

Nonequilibrium limit cycle oscillators: fluctuations in hair bundle dynamics. (arXiv:1801.04514v2 [cond-mat.stat-mech] UPDATED)

by Janaki Sheth, Dolores Bozovic, Alex Levine

We develop a framework for the general interpretation of the stochastic dynamical system near a limit cycle. Such quasi-periodic dynamics are commonly found in a variety of nonequilibrium systems, including the spontaneous oscillations of hair cells in the inner ear. We demonstrate quite generally that in the presence of noise, the phase of the limit cycle oscillator will diffuse while deviations in the directions locally orthogonal to that limit cycle will display the Lorentzian power spectrum of a damped oscillator. We identify two mechanisms by which these stochastic dynamics can acquire a complex frequency dependence, and discuss the deformation of the mean limit cycle as a function of temperature. The theoretical ideas are applied to data obtained from spontaneously oscillating hair cells of the amphibian sacculus.
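
In the simplest caricature of such a system (a generic sketch, not the authors' specific model), the phase \phi along the cycle and a deviation x orthogonal to it obey

\dot{\phi} = \omega_0 + \eta_\phi(t), \qquad \dot{x} = -\lambda x + \eta_x(t),

with white noises \eta_\phi, \eta_x. The phase then diffuses, \langle (\phi(t)-\phi(0))^2 \rangle = 2 D_\phi t, while the orthogonal deviation relaxes back to the cycle at rate \lambda and shows the Lorentzian power spectrum S_x(\omega) \propto 2 D_x / (\lambda^2 + \omega^2).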

16 Jan 11:53

Social Advantage with Mixed Entangled States. (arXiv:1801.04403v2 [quant-ph] UPDATED)

by Aritra Das, Pratyusha Chowdhury

It has been extensively shown in past literature that Bayesian Game Theory and Quantum Non-locality have strong ties between them. Pure Entangled States have been used, in both common and conflict interest games, to gain advantageous payoffs, both at the individual and social level. In this paper we construct a game for a Mixed Entangled State such that this state gives higher payoffs than classically possible, both at the individual level and the social level. Also, we use the I-3322 inequality so that states that aren't helpful as advice for the Bell-CHSH inequality can also be used. Finally, the measurement setting we use is a Restricted Social Welfare Strategy (given this particular state).

15 Jan 01:14

Representation Learning of Logic Words by an RNN: From Word Sequences to Robot Actions.

by Yamada T, Murata S, Arie H, Ogata T

Front Neurorobot. 2017;11:70

Abstract
An important characteristic of human language is compositionality. We can efficiently express a wide variety of real-world situations, events, and behaviors by compositionally constructing the meaning of a complex expression from a finite number of elements. Previous studies have analyzed how machine-learning models, particularly neural networks, can learn from experience to represent compositional relationships between language and robot actions with the aim of understanding the symbol grounding structure and achieving intelligent communicative agents. Such studies have mainly dealt with the words (nouns, adjectives, and verbs) that directly refer to real-world matters. In addition to these words, the current study deals with logic words, such as "not," "and," and "or" simultaneously. These words are not directly referring to the real world, but are logical operators that contribute to the construction of meaning in sentences. In human-robot communication, these words may be used often. The current study builds a recurrent neural network model with long short-term memory units and trains it to learn to translate sentences including logic words into robot actions. We investigate what kind of compositional representations, which mediate sentences and robot actions, emerge as the network's internal states via the learning process. Analysis after learning shows that referential words are merged with visual information and the robot's own current state, and the logical words are represented by the model in accordance with their functions as logical operators. Words such as "true," "false," and "not" work as non-linear transformations to encode orthogonal phrases into the same area in a memory cell state space. The word "and," which required a robot to lift up both its hands, worked as if it was a universal quantifier. The word "or," which required action generation that looked apparently random, was represented as an unstable space of the network's dynamical system.

PMID: 29311891 [PubMed]

15 Jan 01:10

Amphetamine Reverses Escalated Cocaine Intake via Restoration of Dopamine Transporter Conformation

by Siciliano, C. A., Saha, K., Calipari, E. S., Fordahl, S. C., Chen, R., Khoshbouei, H., Jones, S. R.

Cocaine abuse disrupts dopamine system function, and reduces cocaine inhibition of the dopamine transporter (DAT), which results in tolerance. Although tolerance is a hallmark of cocaine addiction and a DSM-V criterion for substance abuse disorders, the molecular adaptations producing tolerance are unknown, and testing the impact of DAT changes on drug taking behaviors has proven difficult. In regard to treatment, amphetamine has shown efficacy in reducing cocaine intake; however, the mechanisms underlying these effects have not been explored. The goals of this study were twofold; we sought to (1) identify the molecular mechanisms by which cocaine exposure produces tolerance and (2) determine whether amphetamine-induced reductions in cocaine intake are connected to these mechanisms. Using cocaine self-administration and fast-scan cyclic voltammetry in male rats, we show that low-dose, continuous amphetamine treatment, during self-administration or abstinence, completely reversed cocaine tolerance. Amphetamine treatment also reversed escalated cocaine intake and decreased motivation to obtain cocaine as measured in a behavioral economics task, thereby linking tolerance to multiple facets of cocaine use. Finally, using fluorescence resonance energy transfer imaging, we found that cocaine tolerance is associated with the formation of DAT-DAT complexes, and that amphetamine disperses these complexes. In addition to extending our basic understanding of DATs and their role in cocaine reinforcement, we serendipitously identified a novel therapeutic target: DAT oligomer complexes. We show that dispersion of oligomers is concomitant with reduced cocaine intake, and propose that pharmacotherapeutics aimed at these complexes may have potential for cocaine addiction treatment.

SIGNIFICANCE STATEMENT Tolerance to cocaine's subjective effects is a cardinal symptom of cocaine addiction and a DSM-V criterion for substance abuse disorders. However, elucidating the molecular adaptations that produce tolerance and determining its behavioral impact have proven difficult. Using cocaine self-administration in rats, we link tolerance to cocaine effects at the dopamine transporter (DAT) with aberrant cocaine-taking behaviors. Further, tolerance was associated with multi-DAT complexes, which formed after cocaine exposure. Treatment with amphetamine deconstructed DAT complexes, reversed tolerance, and decreased cocaine seeking. These data describe the behavioral consequence of cocaine tolerance, provide a putative mechanism for its development, and suggest that compounds that disperse DAT complexes may be efficacious treatments for cocaine addiction.