Shared posts

12 Dec 20:17

Transplanted embryonic neurons integrate into adult neocortical circuits

by Susanne Falkner

Transplanted embryonic neurons integrate into adult neocortical circuits

Nature 539, 7628 (2016). doi:10.1038/nature20113

Authors: Susanne Falkner, Sofia Grade, Leda Dimou, Karl-Klaus Conzelmann, Tobias Bonhoeffer, Magdalena Götz & Mark Hübener

The ability of the adult mammalian brain to compensate for neuronal loss caused by injury or disease is very limited. Transplantation aims to replace lost neurons, but the extent to which new neurons can integrate into existing circuits is unknown. Here, using chronic in vivo

12 Dec 20:15

A cannabinoid link between mitochondria and memory

by Etienne Hebert-Chatelain

A cannabinoid link between mitochondria and memory

Nature 539, 7630 (2016). doi:10.1038/nature20127

Authors: Etienne Hebert-Chatelain, Tifany Desprez, Román Serrat, Luigi Bellocchio, Edgar Soria-Gomez, Arnau Busquets-Garcia, Antonio Christian Pagano Zottola, Anna Delamarre, Astrid Cannich, Peggy Vincent, Marjorie Varilh, Laurie M. Robin, Geoffrey Terral, M. Dolores García-Fernández, Michelangelo Colavita, Wilfrid Mazier, Filippo Drago, Nagore Puente, Leire Reguero, Izaskun Elezgarai, Jean-William Dupuy, Daniela Cota, Maria-Luz Lopez-Rodriguez, Gabriel Barreda-Gómez, Federico Massa, Pedro Grandes, Giovanni Bénard & Giovanni Marsicano

Cellular activity in the brain depends on the high energetic support provided by mitochondria, the cell organelles which use energy sources to generate ATP. Acute cannabinoid intoxication induces amnesia in humans and animals, and the activation of type-1 cannabinoid receptors present at brain mitochondria membranes (mtCB1) can directly alter mitochondrial energetic activity. Although the pathological impact of chronic mitochondrial dysfunctions in the brain is well established, the involvement of acute modulation of mitochondrial activity in high brain functions, including learning and memory, is unknown. Here, we show that acute cannabinoid-induced memory impairment in mice requires activation of hippocampal mtCB1 receptors. Genetic exclusion of CB1 receptors from hippocampal mitochondria prevents cannabinoid-induced reduction of mitochondrial mobility, synaptic transmission and memory formation. mtCB1 receptors signal through intra-mitochondrial Gαi protein activation and consequent inhibition of soluble-adenylyl cyclase (sAC). The resulting inhibition of protein kinase A (PKA)-dependent phosphorylation of specific subunits of the mitochondrial electron transport system eventually leads to decreased cellular respiration. Hippocampal inhibition of sAC activity or manipulation of intra-mitochondrial PKA signalling or phosphorylation of the Complex I subunit NDUFS2 inhibit bioenergetic and amnesic effects of cannabinoids. Thus, the G protein-coupled mtCB1 receptors regulate memory processes via modulation of mitochondrial energy metabolism. By directly linking mitochondrial activity to memory formation, these data reveal that bioenergetic processes are primary acute regulators of cognitive functions.

29 Nov 23:06

A paper trail that’s never checked might as well not exist

by Scott

Update and Action Item: Just since late this afternoon, the Jill Stein campaign has already raised more than $1 million toward requesting hand recounts in Pennsylvania, Michigan, and Wisconsin.  Their target is $6-7 million.  I just donated what I could; if you agree with this post, then please do the same.  It doesn’t matter at this point if you disagree with Stein, or even (like me) think she shouldn’t have run: the goal is just to get a recount to happen before the deadline expires.

Another Update (11/24): In an amazing demonstration of the power of online fundraising, the Stein campaign has already, in less than 24 hours, raised the $2.5 million needed to fund a recount in Wisconsin.  Now they’re working on Pennsylvania and Michigan.  Amusing that Stein seems finally to have found a winning cause: Hillary!  (“Fighting for Hillary even when Hillary won’t fight for herself.”)  Again: please donate here.

Third Update (11/25):  The recount is on in Wisconsin!  The Stein campaign hasn’t yet filed in Pennsylvania or Michigan, but will do so next.  So, all the commenters who came here to explain to me that this was a scam, no judge would allow it to go forward, etc.: please update your priors.  And next time, if you won’t listen to me, at least listen to Alex Halderman…


This will probably be my last election-related post.  After this (assuming, of course, that the effort I’m writing about fails…), I plan to encase myself in a bubble, stop reading news, and go back to thinking about quantum lower bounds, as if we still lived in a world where it made sense to do so.  But this is important.

As many of you have probably seen, several of the US’s top computer security experts, including my former MIT colleague Ron Rivest and my childhood friend Alex Halderman, have publicly urged that an audit of the US election take place.  But time is quickly running out.  If, for example, the Clinton campaign were to request a hand recount, the deadlines would be this Friday in Wisconsin, Monday in Pennsylvania, and next Wednesday in Michigan.  So far, alas, the Clinton campaign seems to have shown little interest, which would leave it to one of the third-party candidates to request a recount (they have the legal right too, if they can come up with the money for it).  In the meantime, I urge everyone to sign a petition demanding an audit.

For me, the key point is this: given the proven insecurity of electronic voting machines, an audit of paper ballots ought to be completely routine, even if there weren’t the slightest grounds for suspicion.  In this particular case, of course, we know for a fact (!!) that Russian intelligence was engaging in cyber-warfare to influence the US election.  We also know that Russia has both the will and the technological ability to tamper with foreign elections using vote-stealing malware—indeed, it nearly succeeded in doing so in Ukraine’s 2014 election.  Finally, we know that Trump, despite losing the popular vote, surprised just about everyone by outperforming his polls in three crucial swing states—and that within those states, Trump did systematically better in counties that relied on electronic voting machines than in counties that used scanners and paper ballots.

Nate Silver has tweeted that he sees no evidence of foul play, since the discrepancy disappears once you control for the education level of the counties (for more, see this FiveThirtyEight article).
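For readers curious what "controlling for" a variable means in practice, here is a minimal sketch on entirely synthetic, made-up county data (this is not the FiveThirtyEight analysis): a raw gap between machine types can vanish once a confounder such as education enters the regression.

```python
import numpy as np

# Synthetic, made-up county data -- not the real FiveThirtyEight analysis.
# The point is only to show how a raw gap between machine types can vanish
# once a confounder (here, education) is included in the regression.
rng = np.random.default_rng(0)
n = 500

education = rng.uniform(0.1, 0.6, n)            # share of county with a college degree
# Suppose less-educated counties are more likely to use paperless electronic machines...
uses_electronic = (rng.uniform(0, 1, n) < (0.8 - education)).astype(float)
# ...and suppose the vote share depends on education only, not on the machines.
trump_share = 0.75 - 0.8 * education + rng.normal(0, 0.03, n)

# Raw comparison: looks like a "machine effect".
raw_gap = (trump_share[uses_electronic == 1].mean()
           - trump_share[uses_electronic == 0].mean())

# Regression with education as a control: the machine coefficient shrinks toward zero.
X = np.column_stack([np.ones(n), uses_electronic, education])
coef, *_ = np.linalg.lstsq(X, trump_share, rcond=None)

print(f"raw gap by machine type:        {raw_gap:+.3f}")
print(f"machine coefficient w/ control: {coef[1]:+.3f}")
```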

But that’s the thing.  In a sane world, skeptics wouldn’t need to present statistical proof of foul play in order to trigger a hand count.  For if enemy actors know that, in practice, hand counts are never going to happen, then they’re free to be completely brazen in tampering with the childishly-insecure electronic voting machines themselves.  If no one ever looks at them, then the paper records might as well not exist.

Would anyone in the 1950s or 60s have believed that, a half-century hence, Russia actually would acquire the terrifying power over the US that the right-wing Cold Warriors once hyperventilated about—sometimes choosing to exercise that power, sometimes not—and that 2016’s conservatives would either shrug or welcome the development, while the only people who wanted to take reasonable precautions were a few rabble-rousing professors and activists?

Fate has decided that we should live in a branch of the wavefunction where the worst triumph by flaunting their terribleness and where nothing makes sense.  But however infinitesimal the chances anyone will listen, we should still insist that the sensible things be done—if nothing else, then simply as a way to maintain our own mental connection to the world of sense.

Happy Thanksgiving.

29 Nov 17:36

Box Cutter Stats

by Cathy O'Neil, mathbabe

Yesterday I heard a segment on WNYC on the effort to decriminalize box cutters in New York State. I guess it’s up to Governor Cuomo to sign it into law.

During the segment we hear a Latino man who tells his story: he was found by cops to be in possession of a box cutter and spent 10 days in Rikers. He works in construction and having a box cutter is literally a requirement for his job. His point was that the law made it too easy for people with box cutters to end up unfairly in jail.

It made me wonder, who actually gets arrested for possession of box cutters? I’d really like to know. I’m guessing it’s not a random selection of “people with box cutters.” Indeed I’m pretty sure this is almost never a primary reason to arrest a white person at all, man or woman. It likely only happens to people after being stopped and frisked for no particular good reason, and that’s much more likely to happen to minority men. I could be wrong but I’d like to see those stats.

It’s part of a larger statistical question that I think we should tackle: what is the racial discrepancy in arrest rates for other crimes, versus the population that actually commits those other crimes? I know for pot possession it’s extremely biased against blacks:

[charts: marijuana use rate by race by year; marijuana arrest rates by race by year]

On the other end of the spectrum, I’d guess murder arrests are pretty equally distributed by race relative to the murdering population. But there’s all kinds of crimes in between, and I’d like some idea of how racially biased the arrests all are. In the case of box cutters, I’m guessing the bias is even stronger than for pot possession.

If we had this data, a statistician could mock up a way to “account for” racial biases in police practices for a given crime record, like we do in polling or any other kind of statistical analysis.

Not that it’s easy to collect; this is honestly some of the most difficult “ground truth” data you can imagine, almost as hard as money in politics. Still, it’s interesting to think about.
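As a purely illustrative sketch of the kind of adjustment imagined above, with every number invented for the example: if two groups carry box cutters at the same rate but one group is arrested four times as often, arrests per offender is one crude summary of the enforcement gap.

```python
# A toy version of the adjustment imagined above: compare arrests per offender
# rather than arrests per capita. Every number here is invented purely to show
# the arithmetic; none of it is real data.
groups = {
    # group: (population, share who actually carry the item, arrests)
    "group_a": (1_000_000, 0.10, 4_000),
    "group_b": (1_000_000, 0.10, 1_000),
}

rates = {}
for name, (population, offense_rate, arrests) in groups.items():
    offenders = population * offense_rate
    rates[name] = arrests / offenders          # arrests per person who actually offends
    print(f"{name}: {rates[name]:.3f} arrests per offender")

# With equal offense rates but a 4x difference in arrests per offender,
# that ratio is one crude summary of enforcement bias for this particular charge.
print(f"ratio: {rates['group_a'] / rates['group_b']:.1f}x")
```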


28 Nov 22:37

Two-headed snake

by Minnesotastan

Found in Croatia.  Photo credit Vedran Glavan/Barcroft, via The Telegraph.

26 Nov 21:45

Who Ate The Pie?

by Blair

Who ate the pie

When I began this blog I agreed with the Sanskrit poet who said the greatest wonder of language is its ability to make public what is in our hearts. Work on this blog, however, has made me consider a second candidate for wonder: the transitive clause. For those who slept through that grammar lesson, a transitive clause combines two things into a single action: e.g., The lion stalked the zebra; I ate the pie; The arc of history bends toward justice. Some animals can voice emotions, but no other animal or mathematical communication system can use verbs to unite subjects with objects.

Equally remarkable is the fact that all natural languages allow for transitive clauses. There are languages that do not distinguish between male and female pronouns, and there are languages that pluralize trees in a different way than they pluralize books, but every language gives its speakers a way to say I ate the pie.

Yet languages do not require that transitive clauses have the same structure. I in English is called the subject (S). It could also be called the agent or doer. Meanwhile, the pie is called the object (O) or direct object or patient. Finally, there is a verb (V), ate. The standard English structure of a transitive clause is SVO for [I] [ate] [the pie].

Mathematically, there are 5 other ways the clause could be structured:

  • SOV – I the pie ate.
  • OVS – The pie ate I.
  • OSV – The pie I ate.
  • VSO – Ate I the pie.
  • VOS – Ate the pie I.

Are any of these structures more natural than others?

A recent paper by Irit Meir and 9 other authors investigates the issue of the structure of transitive clauses (“The Effect of Being Human and the Basis of Grammatical Word Order,” Cognition, abstract here). They report that out of a survey of 1,377 languages, 85% (or 1,186) have a dominant structure in their transitive clauses. The distribution of structures is:

  • SOV – I the pie ate; 565 (48%).
  • SVO – I ate the pie; 488 (41%).
  • VSO – Ate I the pie; 95 (8%).
  • OVS – The pie ate I.
  • OSV – The pie I ate.
  • VOS – Ate the pie I.
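A quick arithmetic check of the shares listed above, treating the 1,186 languages with a dominant order as the denominator:

```python
# Quick check of the shares reported above, using the 1,186 languages
# said to have a dominant order as the denominator.
counts = {"SOV": 565, "SVO": 488, "VSO": 95}
total_dominant = 1186

for order, n in counts.items():
    print(f"{order}: {n / total_dominant:.0%}")          # ~48%, ~41%, ~8%

subject_first = sum(counts.values())                     # SOV + SVO + VSO = 1148
print(f"subject before object: {subject_first / total_dominant:.0%}")        # ~97%
print(f"remaining OVS/OSV/VOS languages: {total_dominant - subject_first}")  # 38
```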

Every possible combination is found in some language or other, but 97% of those with a dominant pattern normally place the subject before the object. These observations have led many linguists to propose that it is natural to place the subject first. That finding gives extra support for the idea that grammatical rules, from the beginning, shape language form. Meir and the other authors have challenged this position by running experiments in which communicators invent sign or spoken languages to describe the actions observed in short video clips. They found that they get more varied structures when both the subject and object are humans (e.g., Harry cheated Donald). If the object is inanimate (e.g., pie) and the subject is human, clause structure is more fixed.

This finding seems to contradict common sense. Normally, we expect a human to do something to an inanimate object, so even if I say The pie ate I, you can guess that it was the pie that got eaten. Speakers would seem to have more rhetorical freedom to fiddle with grammatical relations about pie than if I said Nancy punched Jane. Without some kind of settled convention, the listener will have a hard time sorting puncher from punchee. Yet this contradiction is exactly what the authors found.

The authors argue, therefore, that investigators are mistaken when they look for grammatical explanations for the structure of transitive clauses. They conclude that “the main factor at work [in the experiments] is the conceptual salience of the participants” (p. 203). The what?

Conceptual salience refers to how likely something is to grab our attention. If you are looking at a scene, there is some sort of hierarchy of objects most likely to catch your attention; in most cases, the hierarchy is likely to be people first, then animals, then moving inanimate objects, then still objects.  Thus, if you are looking at a scene out west you might first notice the cowboys, then the cattle, then the tumbling tumbleweeds, then the rocks. So if you see Harry cheat Donald, there is no single participant that automatically grabs your attention. If Donald catches your eye first, you might report Donald was cheated by Harry, or Harry might grab your attention and you name him first. There will be variety in your choice of what to name first. But if you see Harry eat a pie, the human Harry is much more likely to dominate your attention and come first in your report on what happened.

Regulars on this blog will know why the study is so attractive. Language, it says here, is a product of attention. Words get their meaning by piloting attention. Syntax gets its structure by shifting attention. And now we have evidence that the speaker’s rhetorical choices also reflect a hierarchy of attention. Of course, I’m pleased. At the same time, I do not want to be carried away. Lately, psychology has been plagued by a series of unreproducible findings, so it would be nice to have other researchers confirm the basic observations of these authors. Also, it would be interesting to discover if there are predictable deviations in the language of people with different hierarchies of attention. For example, many people with autism pay more attention to inanimate things than to humans. Does their use of transitive clauses reflect this difference?

17 Nov 23:28

What is there to say?

by Scott

Update (Nov. 10): In the wake of the US’s authoritarian takeover, I will sadly understand if foreign students and postdocs no longer wish to study in the US, or if foreign researchers no longer wish to enter the US even for conferences and visits. After all, I wouldn’t feel safe in Erdogan’s Turkey or the Mullahs’ Iran. In any case, I predict that the US’s scientific influence will now start to wane, as top researchers from elsewhere find ways to route around us.

I make just one request: if you do come to the US (as I selfishly hope you will), please don’t avoid places like Austin just because they look on the map like they’re in a sea of red. To understand what’s going on, you need to look at the detailed county-by-county results, which show that even in “red” states, most cities went overwhelmingly for Clinton, while even in “blue” states like New York, most rural areas went for Trump. Here’s Texas, for example (Austin was 66% Clinton, 27% Trump).


I’m ashamed of my country and terrified about the future.  When Bush took power in 2000, I was depressed for weeks, but I didn’t feel like I do now, like a fourth-generation refugee in the United States—like someone who happens to have been born here and will presumably continue to live here, unless and until it starts to become unsafe for academics, or Jews, or people who publicly criticize Trump, at which time I guess we’ll pack up and go somewhere else (assuming there still is a somewhere else).

If I ever missed the danger and excitement that so many European scientists and mathematicians felt in the 1930s, that sense of trying to pursue the truth even in the shadow of an aggressive and unironic evil—OK, I can cross that off the list.  Since I was seven years old or so, I’ve been obsessed by the realization that there are no guardrails that prevent human beings from choosing the worst, that all the adults who soothingly reassure you that “everything always works out okay in the end” are full of it.  Now I get to live through it instead of just reading about it in history books and having nightmares.

If James Comey hadn’t cast what turned out to be utterly unfounded suspicion over Hillary during the height of early voting, maybe the outcome would’ve been different.  If young and poor and minority voters in Wisconsin and North Carolina and elsewhere hadn’t been effectively disenfranchised through huge lines and strategic voter ID laws and closures of polling places, maybe the outcome would’ve been different.  If Russia and WikiLeaks hadn’t interfered by hacking one side and not the other, maybe the outcome would’ve been different.  For that matter, if Russia or some other power hacked the trivially-hackable electronic voting machines that lack paper trails—machines that something like a third of American voters still used this election—there’s an excellent chance we’d never find out.

But in some sense, all of that is beside the point.  For take all of it away, and Trump still would’ve at least come within a few terrifying points of winning—and as Scott Alexander rightly stresses, whatever horrible things are true about the American electorate today, would still have been true had Hillary eked out a narrow win.  It’s just that now we all get to enjoy the consequences of ½±ε of the country’s horrible values.

There is no silver lining.  There’s nothing good about this.

My immediate problem is that, this afternoon, I’m supposed to give a major physics colloquium at UT.  The title?  “Quantum Supremacy.”  That term, which had given me so much comedic mileage through the long campaign season (“will I disavow support from quantum supremacists?  I’ll keep you in suspense about it…” ), now just seems dark and horrible, a weight around my neck.  Yet, distracted and sleep-deprived and humor-deprived though I am, I’ve decided to power through and give the talk.  Why?  Because Steven Weinberg says he still wants to hear it.

I see no particular reason to revise anything I’ve said on this blog about the election, except perhaps for my uncritical quoting of all the analyses and prediction markets that gave Trump a small (but still, I stressed, much too high) probability of winning.

I stand by my contempt for the Electoral College, and my advocacy for vote-swapping.  The fact that vote-swapping once again failed doesn’t mean it was a bad idea; on the contrary, it means that we didn’t do enough.

I stand by my criticism of some of the excesses of the social justice movement, which seem to me to have played some role in spawning the predictable backlash whose horrific results the world now sees.

Lastly, I stand by what I said about the centrality of Enlightenment norms and values, and of civil discourse even with those with whom we disagree, to my own rejection of Trumpism.

On the other hand, the Trump supporters who are leaving me anonymous taunting comments can go elsewhere.  On this day, I think a wholly appropriate Enlightenment response to them is “fuck you.”

14 Nov 03:31

Astrophysics

DEPARTMENT OF NEUROSCIENCE / Motto: "If I hear the phrase 'mirror neurons' I swear to God I will flip this table."

01 Nov 18:05

Solve for the equilibrium are British people really like this?

by Tyler Cowen

The first self-driving cars to be operated by ordinary British drivers will be left deliberately unmarked so that other drivers will not be tempted to “take them on”, a senior car industry executive has revealed.

One of the biggest fears of an ambitious project to lease the first autonomous vehicles to everyday motorists is that other road users might slam on their brakes or drive erratically in order to force the driverless cars into submission, he said.

This is why the first 100 self-driving 4×4 vehicles to be leased to motorists as part of a pilot scheme on busy main roads into London will look no different than other Volvos of the same model, said Erik Coelingh, senior technical leader at Volvo Cars. The scheme will start in 2018.

Americans wouldn’t talk this way:

One driver interviewed for the survey said: “I’ll be overtaking all the time because they’ll be sticking to the rules.”

Another said: “They are going to stop. So you’re going to mug them right off. They’re going to stop and you’re just going to nip around.”

Here is more, via Michelle Dawson.

Addendum: Via Anecdotal, here is an Australian perspective:

Well, I am here to tell you: that’s OK. We’ve all had it drummed into us from infancy that humans bullying cars = bad.

But we can’t let our bourgeois notions of propriety in auto-human interactions stop us from letting out our inner Johnny from Karate Kid.

We must, rather, get on with the vital and necessary work of bullying, haranguing and insulting these contraptions every chance we get. Because I cannot stress this enough: these cars must not be allowed to develop self-esteem.

From another corner of the world, I can tell you that Kiwis do not drive as politely as they talk.

The post Solve for the equilibrium are British people really like this? appeared first on Marginal REVOLUTION.

01 Nov 12:12

PCA meets RG. (arXiv:1610.09733v1 [physics.bio-ph])

by Serena Bradde, William Bialek

A system with many degrees of freedom can be characterized by a covariance matrix; principal components analysis (PCA) focuses on the eigenvalues of this matrix, hoping to find a lower dimensional description. But when the spectrum is nearly continuous, any distinction between components that we keep and those that we ignore becomes arbitrary; it then is natural to ask what happens as we vary this arbitrary cutoff. We argue that this problem is analogous to the momentum shell renormalization group (RG). Following this analogy, we can define relevant and irrelevant operators, where the role of dimensionality is played by properties of the eigenvalue density. These results also suggest an approach to the analysis of real data. As an example, we study neural activity in the vertebrate retina as it responds to naturalistic movies, and find evidence of behavior controlled by a nontrivial fixed point. Applied to financial data, our analysis separates modes dominated by sampling noise from a smaller but still macroscopic number of modes described by a non-Gaussian distribution.
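A minimal sketch of the situation the abstract describes (illustrative only, not the authors' renormalization-group construction): compute the covariance eigenvalue spectrum of data with no clean spectral gap and see how the retained variance depends on the arbitrary "keep the top k components" cutoff.

```python
import numpy as np

# Minimal sketch, not the paper's procedure: with a smoothly decaying
# eigenvalue spectrum there is no natural place to cut, so the retained
# variance changes gradually as the arbitrary cutoff is swept.
rng = np.random.default_rng(1)
n_samples, n_dims = 2000, 50

# Synthetic data whose covariance spectrum decays smoothly (no obvious gap).
scales = 1.0 / np.arange(1, n_dims + 1)
data = rng.normal(size=(n_samples, n_dims)) * scales

cov = np.cov(data, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]   # descending

cumulative = np.cumsum(eigvals) / eigvals.sum()
for k in (1, 5, 10, 20, 40):
    print(f"top {k:2d} components keep {cumulative[k - 1]:.1%} of the variance")
```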

22 Oct 22:50

[In Depth] Odd computer zips through knotty tasks

by Adrian Cho
A century-old theoretical model of magnetism is giving rise to a hybrid computer, part classical and part quantum, that may be capable of solving problems that overwhelm conventional computers. The so-called Ising machine, described in 100-bit and 2000-bit versions in two reports this week in Science, could tackle optimization problems that require finding the best solution among myriad possibilities, such as predicting how a protein will fold or allotting bandwidth in cellular communications networks. The machines take their name from the Ising model, which was developed in 1920 in an attempt to explain magnetism. Curiously, many optimization problems can be mapped onto the Ising model. Now, two overlapping groups at Stanford University in Palo Alto, California, and at NTT Basic Research Laboratories in Atsugi, Japan, have developed optical machines specifically designed to solve the model, at least approximately. Developers hope Ising machines may soon replace or aid conventional computers for some applications, although some researchers note it is not yet clear that the new machines can best ordinary computers.
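To make the mapping mentioned above concrete, here is a toy sketch (my own illustration, not the Stanford/NTT machines' method): max-cut on a small graph becomes a ground-state search for an Ising model, which is the kind of minimization an Ising machine approximates; here it is simply brute-forced.

```python
import itertools

# Toy version of the mapping mentioned above (an illustration, not the
# Stanford/NTT hardware): max-cut of a small graph becomes a ground-state
# search for an Ising model with coupling J_ij = +1 on every edge, since the
# energy E = sum over edges of s_i * s_j (spins s_i = +/-1) is lowest when as
# many edges as possible connect opposite spins. An Ising machine approximates
# this minimization; here we simply brute-force it.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # a 4-node example graph
n_nodes = 4

def ising_energy(spins):
    return sum(spins[i] * spins[j] for i, j in edges)

best = min(itertools.product([-1, +1], repeat=n_nodes), key=ising_energy)
cut_size = sum(1 for i, j in edges if best[i] != best[j])
print("ground-state spins:", best, "-> cut size:", cut_size)
```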

21 Oct 21:48

Will Maine Be the First State Where Voters May Rank Their Choices?

by Scott Shackford

What if you really didn't have to accept that there are only two valid choices for a particular race, and your third-party vote actually mattered more than as just a protest?

Maine voters may find out for themselves. On their ballot this November is Question 5, a ballot initiative that would institute ranked-choice voting for statewide positions like governor and for lawmakers on both the state and federal levels.

In a ranked vote system, voters are invited not just to check off the box for their favorite candidate; they're allowed to rank each candidate in order of preference. If no candidate gets a majority of the votes, there's an "instant runoff." The candidate with the fewest votes is dumped from the race and the votes are counted again. On the ballots of those who voted for the least-popular candidate, their second choice is now counted as their vote. If the leading candidate still doesn't have a majority of the votes, the cycle continues until the top-ranked candidate has not just the most votes but a majority.
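The counting procedure just described is easy to sketch in code. This is a minimal illustration with made-up ballots, not any jurisdiction's official tabulation rules (real rules also spell out tie-breaking and ballot-exhaustion details):

```python
from collections import Counter

def instant_runoff(ballots):
    """Minimal sketch of the counting procedure described above. Each ballot
    is a list of candidates in order of preference; a ballot whose ranked
    candidates have all been eliminated simply drops out of later rounds."""
    eliminated = set()
    while True:
        tally = Counter()
        for ballot in ballots:
            for choice in ballot:
                if choice not in eliminated:
                    tally[choice] += 1          # count only the top surviving choice
                    break
        active = sum(tally.values())
        leader, leader_votes = tally.most_common(1)[0]
        if leader_votes * 2 > active or len(tally) == 1:
            return leader                       # majority of still-active ballots
        eliminated.add(min(tally, key=tally.get))   # drop the last-place candidate

# Made-up ballots: A leads on first choices, but C's voters prefer B second,
# so B wins once C is eliminated.
ballots = 4 * [["A"]] + 3 * [["B"]] + 2 * [["C", "B"]]
print(instant_runoff(ballots))                  # -> B
```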

No US state currently has such a voting system, but some cities do, and it's how Australia elects its lawmakers. Australia's complicated, preference-based voting system has resulted in several lawmakers who are members of smaller parties, including libertarian David Leyonhjelm. That is partly the intent of this system: to make it more possible for third-party candidates to break through the electoral duopoly, but only in situations where the majority of voters reject what the establishment offers.

The editorial board of the Portland Press Herald endorsed Question 5 last week with the awareness that an increasing number of voters are refusing to identify as Democrats or Republicans:

Our current system took shape when there were two strong parties that dominated the political process. Parties won elections by assembling coalitions and selecting candidates who had broad appeal. It was hard for fringe elements to break through.

But even though Maine's political parties have been in decline for decades, they still have an outsized influence on the process. Nominees selected by the small number of committed partisans who show up to vote in June have enormous institutional advantages on Election Day in November.

That puts the largest group of voters, those who are not active as either Democrats or Republicans, in a bind.

They have no say in the selection of a party nominee, but they can't vote for a third-party candidate without risking a vote for a "spoiler" who fragments opposition and gives an extreme candidate a path to victory.

What if, for example, you could vote for Gary Johnson as your first choice, but thought that Hillary Clinton would be much less dangerous as president than Donald Trump (or vice-versa)? You could make Johnson your first choice and Clinton your second. Thus, you'd be shutting down any arguments (or even your own fears) that a vote for a third-party candidate was ultimately helping Trump (or Clinton) win.

Heck, given the unpopularity of Clinton and Trump and the way polls are going, it is likely that the winner in November will get a plurality of the votes, not a majority. A ranked system significantly favors third-party candidates in situations where voters are really unhappy with what the establishment has to offer. It's easy to imagine Johnson becoming the second choice for a good chunk of voters, and then imagine what could happen next if neither Clinton nor Trump gets a majority of the vote.

It shouldn't come as a surprise then that Johnson supporter and former Nirvana bassist Krist Novoselic is a big endorser of this kind of voting system. And he puts his activism where his mouth is: He's the chairman of the Board of FairVote, a nonprofit group pushing for more proportional voting systems such as Maine's proposed ranked-choice method.

The ranked-choice system comes with its own flaws. One study pointed out that even in a ranked-choice election, the winner may not actually have gotten a majority of the votes in the end. That's because voters don't have to rank all candidates. They can just rank the one (or ones) they legitimately want to vote for. If that candidate gets the fewest votes and is cut in the instant runoff, the ballot becomes meaningless. So in the presidential example, if we had a four-way race including Green Party candidate Jill Stein, it's likely Stein would get the fewest votes (based on current polls) and would be cut in the next round of vote tabulations. That means every ballot that ranked only Stein and no other candidates would get tossed and would no longer "count." The outcome may sometimes be that the winner still really only got a plurality of the votes, not a majority. But it may well be a different person than the one who got the plurality in a simple count.

But ultimately it's hard to explain why this is any different from the winner-takes-all system we have now. All votes for Stein are likely to be irrelevant in this election. Most votes for Johnson probably won't "matter" either, except maybe in a couple of pivotal states. At least in a ranked system, third-party voters could express their primary choice but also, if they wanted to, register a "lesser of two evils" vote for a Democrat and Republican and feel maybe slightly less disappointed in the election outcome.

Right now two polls show Question 5 receiving more support than opposition, but the most recent poll has a very high number of undecided voters (23 percent). This will be a ballot initiative to watch on Election Day.

Below, watch ReasonTV's interview with Novoselic in 2014 where he talks extensively about improving electoral representation:

18 Oct 15:16

The wisdom of Interfluidity

by Tyler Cowen
Nosimpler

Are they not paying attention? This has already happened.

It is tempting, among those of us who would be appalled by a Trump victory, to try to sway undecided voters by equating voting for Trump with racism full-stop. That’s a bad idea. If it becomes the mainstream view that Trump voters are simply racists, it leaves those who are already committed, those who are unwilling to abandon Trump or to stomach Clinton, little choice but to own what they’ve been accused of. Racist is the new queer. The same daring, transgressional psychology that, for gay people, converted an insult into a durable token of identity may persuade a mass of people who otherwise would not have challenged the social taboo surrounding racism to accept the epithet with defiant equanimity or even to embrace it. The assertion that Trump’s supporters are all racists has, I think, become partially self-fulfilling. In and of itself, that will make America’s already deeply ugly racial politics uglier. It will help justify the further pathologization of the emerging white underclass while doing nothing at all to help communities of color except, conveniently for some, to set the groups at one another’s throats so they cannot make common cause. It will become yet another excuse for beneficiaries of economic stratification to blame its victims. Things were bad before this election. They are worse now, and we should be very careful about how we carry this experience forward. These are frightening times.

Here is more, interesting throughout.

The post The wisdom of Interfluidity appeared first on Marginal REVOLUTION.

16 Oct 01:10

Piketty, Housing, and Capital Share

by Alex Tabarrok

Gianni La Cava has a very interesting article (based on a longer paper) on what accounts for the rising share of capital in the income accounts:

A key observation in Thomas Piketty’s Capital in the Twenty-First Century (Piketty 2014) is that the share of aggregate income accruing to capital in the US has been rising steadily in recent decades. The growing disparity between the income going to wage earners and capital owners has led to calls for government intervention. But for such interventions to be effective, it is important to ask who the capital owners are.

Recent research has shown that the long-run rise in the net capital income share is mainly due to the housing sector (e.g. Rognlie 2015, Torrini 2016 – see Figure 1). This phenomenon is not specific to the US but has been evident in almost every advanced economy. This suggests that it is not entrepreneurs and venture capitalists that are taking an increasing share of the economy, but land owners.

…The decomposition of the national accounts by type of housing indicates that the secular rise is mainly due to a rising share of imputed rent going to owner-occupiers. The owner-occupier share of aggregate income has risen from just under 2% in 1950 to close to 5% in 2014 (top panel of Figure 2). The share of income going to landlords (i.e. market rent) has also doubled in the post-war era. But, in aggregate, the effect of imputed rent is larger simply because there are nearly twice as many home owners as renters in the US economy. A similar phenomenon is observed in the personal consumption expenditure data (bottom panel of Figure 2). In other words, today’s landed gentry are predominantly home owners, not private landlords.

…The geographic decomposition reveals that the long-run rise in the housing capital income share is fully concentrated in states that face housing supply constraints. To see this, I divide the states into ‘elastic’ and ‘inelastic’ groups based on whether the state is above or below the median housing supply elasticity index (as measured by Saiz 2010). This index captures both geographical and regulatory constraints on home building across different US regions. For 50 years, the share of total housing capital income going to the supply-elastic states has been unchanged at about 3% of GDP (Figure 3). In contrast, the share going to the supply-inelastic states has risen from around 5% in the 1960s to 7% of GDP more recently. Notably, these divergent trends in housing capital income are not due to a few ‘outlier’ states where housing supply is particularly constrained, such as New York or California – instead, there is a clear negative correlation between the long-run growth in housing capital income and the extent to which housing supply is constrained across all states (Figure 4).

The post Piketty, Housing, and Capital Share appeared first on Marginal REVOLUTION.

13 Oct 13:02

What if dark matter is not a particle? The second wind of modified gravity.

by Sabine Hossenfelder
Another year has passed and Vera Rubin was not awarded the Nobel Prize. She’s 88 and the prize can’t be awarded posthumously, so I can’t shake the impression the Royal Academy is waiting for her to die while they work off a backlog of condensed-matter breakthroughs. Sure, nobody knows whether galaxies actually contain the weakly interacting and non-luminous particles we have come to call dark

25 Sep 21:02

Fashion Police and Grammar Police

* Mad about jorts

21 Sep 18:33

But

by noreply@blogger.com (Atrios)
Perhaps stop calling them self-driving then?
Uber provided ride-alongs to reporters on Tuesday. During a ride of about one hour, Reuters observed the Uber car safely - and for the most part smoothly - stop at red lights and accelerate at green lights, travel over a bridge, move around a mail truck and slow for a driver opening a car door on a busy street. All without a person touching the controls.

But the Uber driver and the engineer in the front two seats did intervene every few miles.

The technology is neat, but unless you get to 100% it just isn't more than neat, and it certainly won't do a damn thing for car services like Uber. Segways are neat, too.

20 Sep 22:47

The Lamellar Structure of the Brain Fiber Pathways

by Vitaly L. Galinsky
Neural Computation, Volume 28, Issue 11, Page 2533-2556, November 2016.

18 Sep 17:46

A solid gold toilet

by Minnesotastan

And fully functional.
New York’s Guggenheim museum unveiled its latest installation on Friday – a solid gold toilet titled America.

The toilet, which the Guardian can confirm is fully functioning, is the work of Italian artist Maurizio Cattelan...

Visitors to the museum are able to use the golden toilet in much the same way as they would use a normal toilet. It is located in a standard, pre-existing bathroom on the fourth floor of the museum, a small placard the only indication of its presence...

Nathan Otterson, senior conservator of objects at the Guggenheim, is responsible for maintaining the toilet. He said a cleaning crew will attend to the toilet every 15 minutes... “We would hope no one would try to remove part of the toilet,” Otterson said.
Image cropped for size from the original (credit William Edwards/AFP/Getty Images).

16 Sep 23:09

NYPD: We Don’t Know How Much Cash We Seize, And Our Computers Would Crash If We Tried To Find Out

by C.J. Ciaramella

NYPD brass testified before the New York City Council Thursday that it has no idea how much money it seizes from citizens each year using civil asset forfeiture, and an attempt to collect the data would crash its computer systems, The Village Voice reported.

Concerned by the lack of transparency surrounding the NYPD's civil forfeiture program, NYC councilmember Ritchie Torres introduced legislation this year that would require annual reports from the police department about how much money it seizes, but at Thursday's hearing, the NYPD said it has no technologically feasible way to track seized money that was ultimately not pursued through asset forfeiture. From The Village Voice:

"Attempts to perform the types of searches envisioned in the bill will lead to system crashes and significant delays during the intake and release process," said Assistant Deputy Commissioner Robert Messner, while testifying in front of the council's Public Safety Committee. "The only way the department could possibly comply with the bill would be a manual count of over half a million invoices each year."

When asked by councilmember Dan Garodnick whether the NYPD had come to the hearing with any sort of accounting for how much money it has seized from New Yorkers this past year, the NYPD higher-ups testifying simply answered "no."

According to the Voice, the NYPD "claimed that it only legally forfeited $11,653 in currency last year — that is, gone to court and actually made a case as to why the NYPD should be taking this money."

As I reported last month, Bronx Defenders, a legal aid group, is suing the NYPD for public records on its asset forfeiture program, which rakes in millions in seized cash and property from arrests every year. According to the scant records Bronx Defenders did manage to get back, the NYPD reported more than $6 million in revenue in 2013 from seized cash, forfeitures, and property sold at auction, and it had a balance of more than $68 million in seized currency in any given month of that year. Bronx Defenders say the records indicate that the vast majority of seized assets are simply forfeited by default when the property owner fails to go through the burdensome and Byzantine process of trying to retrieve them before the deadline passes.

The NYPD's "unclaimed cash and property" sales totaled $6.5 million in 2014 and more than $7 million in 2015, according to the Bronx Defender lawsuit.

Back to The Village Voice:

Testifying in front of committee members, lawyers and advocates from the Bronx Defenders, Brooklyn Defender Services, and Legal Aid Society relayed a litany of horrors experienced by clients who had become trapped by the NYPD's unjust civil forfeiture process. Anca Grigore, a staff attorney at Brooklyn Defender Services, told the story of one client who was arrested while a passenger in another person's car. The NYPD then went to the BDS client's home, told their brother that they needed to move the client's own car because it was blocking a driveway, and then seized the car. The BDS client eventually paid $500 to get the car back from the NYPD six months later. In another case, a BDS client had hundreds of dollars vouchered under the name of a co-defendant, whose criminal case was ongoing — the process took months to play out, and the client had to jump through several hoops just to prove the money was hers.

15 Sep 23:10

Suggestions

by noreply@blogger.com (Atrios)
Nosimpler

Hehe. "It's true The Kids Today don't major in anything useful, unlike the Greatest Generation and the Second Greatest Generation and the Pretty Good Generation After That who all had degrees in Physics or Engineering before going on to work in opinion journalism"

I spent the early afternoon consulting with the Evil League of Evil Labor Economists, and here were their suggestions for how to attract workers if you're having some trouble doing so.

a) Offer a higher wage. Yes, yes, wages are so 20th century, but The Kids Today have a strange affinity for them. Maybe it's nostalgia. They even want dollars, and not Bitcoins or Applebucks.

b) Good health insurance. Weirdly, even though President Kenyan Muslim Socialist nationalized the health care industry, some Kids Today just aren't happy with their death panels and want their companies to buy redundant policies for them. Silly Kids Today.

c) Retirement benefits. Yes, yes, they've been watching too many old movies. Silent movies, starring Steve Simels mostly. But The Kids Today have these romantic notions that come retirement they won't have to do a Logan's Run or face their prescribed destiny. They've been told their whole lives that Social Security just won't be there for them so they need an alternative. Blame whoever keeps telling them that (shhh!!!!).

d) On the job training. It's true The Kids Today don't major in anything useful, unlike the Greatest Generation and the Second Greatest Generation and the Pretty Good Generation After That who all had degrees in Physics or Engineering before going on to work in opinion journalism, but that's the labor force you have to deal with. Might have to show them a thing or two.

e) Job Security. The Kids Today would like some assurances that their jobs might be around a few months hence. Yes that goes against everything the Washington Post opinion page has been telling you. Tenure of any kind is anathema to them. The job turnover there is brutal. It seems that every 20 or 30 years or so there's a new columnist!

It's going to take a lot to lure them from their parents' basements. But, sadly, that pesky 13th Amendment was ratified, or so they claim, so if you want The Kids Today to come work for you, sacrifices must be made.

15 Sep 16:29

How to test for partially predictable chaos. (arXiv:1605.05616v4 [nlin.CD] UPDATED)

by Hendrik Wernecke, Bulcsú Sándor, Claudius Gros

For a chaotic system pairs of initially close-by trajectories become eventually fully uncorrelated on the attracting set. This process of decorrelation may split into an initial exponential decrease, characterized by the maximal Lyapunov exponent, and a subsequent diffusive process on the chaotic attractor causing the final loss of predictability. The time scales of both processes can be either of the same or of very different orders of magnitude. In the latter case the two trajectories linger within a finite but small distance (with respect to the overall extent of the attractor) for exceedingly long times and therefore remain partially predictable.

Tests for distinguishing chaos from laminar flow widely use the time evolution of inter-orbital correlations as an indicator. Standard tests, however, yield mostly ambiguous results when it comes to distinguishing partially predictable chaos from laminar flow, which are characterized respectively by attractors of fractally broadened braids and limit cycles. For a resolution we introduce a novel 0-1 indicator for chaos based on the cross-distance scaling of pairs of initially close trajectories, showing that this test robustly discriminates chaos, including partially predictable chaos, from laminar flow. Furthermore, one can use the finite-time cross-correlation of pairs of initially close trajectories to distinguish, for a complete classification, also between strong and partially predictable chaos. We are thus able to identify laminar flow as well as strong and partially predictable chaos in a 0-1 manner solely from the properties of pairs of trajectories.
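A minimal sketch of the basic ingredient described here, using the Lorenz system purely as a familiar stand-in (the paper's actual 0-1 indicator is built on how this cross-distance scales with the initial offset):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal sketch of the basic ingredient described above: follow a pair of
# initially close trajectories and track their cross-distance d(t). The Lorenz
# system is used here only as a familiar stand-in for a chaotic attractor.
def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_eval = np.linspace(0, 40, 4000)
x0 = np.array([1.0, 1.0, 1.0])
delta0 = 1e-8                                      # initial offset between the pair

sol_a = solve_ivp(lorenz, (0, 40), x0, t_eval=t_eval, rtol=1e-9, atol=1e-12)
sol_b = solve_ivp(lorenz, (0, 40), x0 + [delta0, 0, 0],
                  t_eval=t_eval, rtol=1e-9, atol=1e-12)

distance = np.linalg.norm(sol_a.y - sol_b.y, axis=0)
attractor_size = np.ptp(sol_a.y, axis=1).max()     # rough extent of the attractor

# Early times: roughly exponential separation (slope ~ maximal Lyapunov exponent).
# Late times: d(t) saturates; saturation near the attractor size means full
# decorrelation, saturation well below it would indicate partial predictability.
print("final d(t) / attractor size:", distance[-1] / attractor_size)
```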

13 Sep 23:22

Looking for symmetry: fixational eye movements are biased by image mirror symmetry

by Meso, A. I., Montagnini, A., Bell, J., Masson, G. S.

Humans are highly sensitive to symmetry. During scene exploration, the area of the retina with dense light receptor coverage acquires most information from relevant locations determined by gaze fixation. We characterized patterns of fixational eye movements made by observers staring at synthetic scenes either freely (i.e., free exploration) or during a symmetry orientation discrimination task (i.e., active exploration). Stimuli could be mirror-symmetric or not. Both free and active exploration generated more saccades parallel to the axis of symmetry than along other orientations. Most saccades were small (<2°), leaving the fovea within a 4° radius of fixation. Analysis of saccade dynamics showed that the observed parallel orientation selectivity emerged within 500 ms of stimulus onset and persisted throughout the trials under both viewing conditions. Symmetry strongly distorted existing anisotropies in gaze direction in a seemingly automatic process. We argue that this bias serves a functional role in which adjusted scene sampling enhances and maintains sustained sensitivity to local spatial correlations arising from symmetry.

13 Sep 21:23

Earth Temperature Timeline

[After setting your car on fire] Listen, your car's temperature has changed before.

11 Sep 17:10

Struggles with the Continuum (Part 1)

by John Baez
Nosimpler

Warning: I'm going to share 90% of the stuff John Baez posts.

Is spacetime really a continuum? That is, can points of spacetime really be described—at least locally—by lists of four real numbers (t,x,y,z)? Or is this description, though immensely successful so far, just an approximation that breaks down at short distances?

Rather than trying to answer this hard question, let’s look back at the struggles with the continuum that mathematicians and physicists have had so far.

The worries go back at least to Zeno. Among other things, he argued that an arrow can never reach its target:

That which is in locomotion must arrive at the half-way stage before it arrives at the goal. (Aristotle, summarizing Zeno)

and Achilles can never catch up with a tortoise:

In a race, the quickest runner can never overtake the slowest, since the pursuer must first reach the point whence the pursued started, so that the slower must always hold a lead. (Aristotle, summarizing Zeno)

These paradoxes can now be dismissed using our theory of real numbers. An interval of finite length can contain infinitely many points. In particular, a sum of infinitely many terms can still converge to a finite answer.
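The one-line calculation behind that resolution: the halves in Zeno's arrow form a geometric series that sums to the whole distance,

```latex
\sum_{n=1}^{\infty} \frac{1}{2^n}
  = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots
  = \lim_{N \to \infty}\left(1 - \frac{1}{2^{N}}\right)
  = 1.
```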

But the theory of real numbers is far from trivial. It became fully rigorous only considerably after the rise of Newtonian physics. At first, the practical tools of calculus seemed to require infinitesimals, which seemed logically suspect. Thanks to the work of Dedekind, Cauchy, Weierstrass, Cantor and others, a beautiful formalism was developed to handle real numbers, limits, and the concept of infinity in a precise axiomatic manner.

However, the logical problems are not gone. Gödel’s theorems hang like a dark cloud over the axioms of mathematics, assuring us that any consistent theory as strong as Peano arithmetic, or stronger, cannot prove itself consistent. Worse, it will leave some questions unsettled.

For example: how many real numbers are there? The continuum hypothesis proposes a conservative answer, but the usual axioms of set theory leave this question open: there could be vastly more real numbers than most people think. And the superficially plausible axiom of choice—which amounts to saying that the product of any collection of nonempty sets is nonempty—has scary consequences, like the existence of non-measurable subsets of the real line. This in turn leads to results like that of Banach and Tarski: one can partition a ball of unit radius into six disjoint subsets, and by rigid motions reassemble these subsets into two disjoint balls of unit radius. (Later it was shown that one can do the job with five, but no fewer.)

However, most mathematicians and physicists are inured to these logical problems. Few of us bother to learn about attempts to tackle them head-on, such as:

• nonstandard analysis and synthetic differential geometry, which let us work consistently with infinitesimals,

• constructivism, which avoids proof by contradiction: for example, one must ‘construct’ a mathematical object to prove that it exists,

• finitism (which avoids infinities altogether),

• ultrafinitism, which even denies the existence of very large numbers.

This sort of foundational work proceeds slowly, and is now deeply unfashionable. One reason is that it rarely seems to intrude in ‘real life’ (whatever that is). For example, it seems that no question about the experimental consequences of physical theories has an answer that depends on whether or not we assume the continuum hypothesis or the axiom of choice.

But even if we take a hard-headed practical attitude and leave logic to the logicians, our struggles with the continuum are not over. In fact, the infinitely divisible nature of the real line—the existence of arbitrarily small real numbers—is a serious challenge to almost all of the most widely used theories of physics.

Indeed, we have been unable to rigorously prove that most of these theories make sensible predictions in all circumstances, thanks to problems involving the continuum.

One might hope that a radical approach to the foundations of mathematics—such as those listed above—would allow us to avoid some of the problems I’ll be discussing. However, I know of no progress along these lines that would interest most physicists. Some of the ideas of constructivism have been embraced by topos theory, which also provides a foundation for calculus with infinitesimals using synthetic differential geometry. Topos theory and especially higher topos theory are becoming important in mathematical physics. They’re great! But as far as I know, they have not been used to solve the problems I want to discuss here.

Today I’ll talk about one of the first theories to use calculus: Newton’s theory of gravity.

Newtonian Gravity

In its simplest form, Newtonian gravity describes ideal point particles attracting each other with a force inversely proportional to the square of their distance. It is one of the early triumphs of modern physics. But what happens when these particles collide? Apparently the force between them becomes infinite. What does Newtonian gravity predict then?

Of course real planets are not points: when two planets come too close together, this idealization breaks down. Yet if we wish to study Newtonian gravity as a mathematical theory, we should consider this case. Part of working with a continuum is successfully dealing with such issues.

In fact, there is a well-defined ‘best way’ to continue the motion of two point masses through a collision. Their velocity becomes infinite at the moment of collision but is finite before and after. The total energy, momentum and angular momentum are unchanged by this event. So, a 2-body collision is not a serious problem. But what about a simultaneous collision of 3 or more bodies? This seems more difficult.
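Before moving on, a small numerical illustration of that two-body blow-up, using nothing but conservation of energy (the assumptions here are mine, for the example: two equal point masses falling head-on from rest, which gives the closed-form speed used below):

```python
import math

# Numerical illustration of the two-body blow-up, using only conservation of
# energy. Assumptions (for this example): two equal point masses m, falling
# head-on from rest at separation r0. The relative speed at separation r is
#     v(r) = sqrt(4 G m (1/r - 1/r0)),
# finite for every r > 0 but diverging as r -> 0.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
m = 5.0e24         # kg, roughly an Earth mass treated as a point particle
r0 = 1.0e9         # m, initial separation

for r in (1e8, 1e6, 1e3, 1.0, 1e-3):
    v = math.sqrt(4 * G * m * (1 / r - 1 / r0))
    print(f"separation {r:8.0e} m   relative speed {v:10.3e} m/s")
```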

Worse than that, Xia proved in 1992 that with 5 or more particles, there are solutions where particles shoot off to infinity in a finite amount of time!

This sounds crazy at first, but it works like this: a pair of heavy particles orbit each other, another pair of heavy particles orbit each other, and these pairs toss a lighter particle back and forth. Xia and Saari’s nice expository article has a picture of the setup:

[figure: Xia’s construction]

Each time the lighter particle gets thrown back and forth, the pairs move further apart from each other, while the two particles within each pair get closer together. And each time they toss the lighter particle back and forth, the two pairs move away from each other faster!

As the time t approaches a certain value t_0, the speed of these pairs approaches infinity, so they shoot off to infinity in opposite directions in a finite amount of time, and the lighter particle bounces back and forth an infinite number of times!

Of course this crazy behavior isn’t possible in the real world, but Newtonian physics has no ‘speed limit’, and we’re idealizing the particles as points. So, if two or more of them get arbitrarily close to each other, the potential energy they liberate can give some particles enough kinetic energy to zip off to infinity in a finite amount of time! After that time, the solution is undefined.

You can think of this as a modern reincarnation of Zeno’s paradox. Suppose you take a coin and put it heads up. Flip it over after 1/2 a second, then flip it over after 1/4 of a second, and so on. After one second, which side will be up? There is no well-defined answer. That may not bother us, since this is a contrived scenario that seems physically impossible. It’s a bit more bothersome that Newtonian gravity doesn’t tell us what happens to our particles when t = t_0.

You might argue that collisions and these more exotic ‘noncollision singularities’ occur with probability zero, because they require finely tuned initial conditions. If so, perhaps we can safely ignore them!

This is a nice fallback position. But to a mathematician, this argument demands proof.

A bit more precisely, we would like to prove that the set of initial conditions for which two or more particles come arbitrarily close to each other within a finite time has ‘measure zero’. This would mean that ‘almost all’ solutions are well-defined for all times, in a very precise sense.

In 1977, Saari proved that this is true for 4 or fewer particles. However, to the best of my knowledge, the problem remains open for 5 or more particles. Thanks to previous work by Saari, we know that the set of initial conditions that lead to collisions has measure zero, regardless of the number of particles. So, the remaining problem is to prove that noncollision singularities occur with probability zero.

It is remarkable that even Newtonian gravity, often considered a prime example of determinism in physics, has not been proved to make definite predictions, not even ‘almost always’! In 1814, Laplace wrote:

We ought to regard the present state of the universe as the effect of its antecedent state and as the cause of the state that is to follow. An intelligence knowing all the forces acting in nature at a given instant, as well as the momentary positions of all things in the universe, would be able to comprehend in one single formula the motions of the largest bodies as well as the lightest atoms in the world, provided that its intellect were sufficiently powerful to subject all data to analysis; to it nothing would be uncertain, the future as well as the past would be present to its eyes. The perfection that the human mind has been able to give to astronomy affords but a feeble outline of such an intelligence. (Laplace)

However, this dream has not yet been realized for Newtonian gravity.

I expect that noncollision singularities will be proved to occur with probability zero. If so, the remaining question would be why it takes so much work to prove this, and thus prove that Newtonian gravity makes definite predictions in almost all cases. Is this a weakness in the theory, or just the way things go? Clearly it has something to do with three idealizations:

• point particles whose distance can be arbitrarily small,

• potential energies that can be arbitrarily large and negative,

• velocities that can be arbitrarily large.

These are connected: as the distance between point particles approaches zero, their potential energy approaches -\infty, and conservation of energy dictates that some velocities approach +\infty.
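For two point particles the bookkeeping is explicit; this is just the standard conserved energy written out for concreteness:

\[
E = \tfrac{1}{2} m_1 |\mathbf{v}_1|^2 + \tfrac{1}{2} m_2 |\mathbf{v}_2|^2 - \frac{G m_1 m_2}{r}.
\]

Since E is constant along a solution, if the separation r shrinks to zero the potential term plunges to -\infty, so the kinetic terms must grow without bound to compensate, and at least one of the speeds tends to +\infty.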

Does the situation improve when we go to more sophisticated theories? For example, does the ‘speed limit’ imposed by special relativity help the situation? Or might quantum mechanics help, since it describes particles as ‘probability clouds’, and puts limits on how accurately we can simultaneously know both their position and momentum?

Next time I’ll talk about quantum mechanics, which indeed does help.


06 Sep 20:03

Is the brain prewired for letters?

by Stanislas Dehaene

Nature Neuroscience 19, 1192 (2016). doi:10.1038/nn.4369

Authors: Stanislas Dehaene & Ghislaine Dehaene-Lambertz

Even before a child learns to read, the future location of his or her letter-processing area can be predicted from its connections to the rest of the brain. Reading acquisition thus piggybacks on a pre-existing brain circuit.

03 Sep 23:59

Templeton News

by woit

Looking at my list of items to blog about, I see most of them have some relation to the Templeton Foundation, so this will be a blog post just about those. To get some idea of the scale of Templeton’s activities, at the end of 2014 they had about $3.2 billion in assets, and during 2014 had given away about $185 million. For comparison, the NSF budget for FY2014 for physics was $267 million and for mathematics $225 million.

One of the main goals of the foundation is to bring together science and religion. Among the many things they are funding to accomplish this is an $871,000 grant to Arizona State University to fund Think Write Publish Fellowships in Science and Religion. If you’re a hard-up writer, these people will give you the opportunity to get $10,000 to write “creative nonfiction stories about harmonies between science and religion” and help you get them published.

Over the next few years, as you see things like this make it into the media, realize that this is not evidence of an intellectual trend, but a reflection of Templeton money and their agenda. ASU’s Lawrence Krauss is, for good reason, not happy.

To give an idea of the range of Templeton’s influence, just at ASU they’re funding several other large grants, including $745,000 for Representations of God (this and this), and $544,000 for emergent gravity. When you notice conferences, seminars, public lectures, etc. about “emergent gravity” in coming years, realize that some of them are happening because of Templeton’s agenda (one of the PIs is a Templeton Prize winner).

One of Templeton’s largest recent grants has been $4.7 million to FQXI for research into “Physics of the Observer”. Among other things, this funded a recent conference at Banff.

A major interest of Templeton’s over the years has been “Genius”. Another of their large recent grants has been to the World Science Foundation for its Cultivating Genius Initiative.

Finally, there will be an interesting mathematics conference related to quantum field theory at Harvard October 8-10. I’ll likely be up in Boston visiting my brother and hope to maybe attend some of the talks. Funding for this is coming partially from the “Templeton Charity Foundation Switzerland”. I guess this is these people, some offshoot of the Templeton Foundation with exactly the same interests. They say they have made $85.2 million in grants; a list is here.

Update: I was thinking of commenting that Templeton at least seemed to have slowed down its efforts to promote multiverse mania. But then I noticed this. If you want to know why Ira Flatow on NPR keeps bringing up the multiverse, $150,000 in Templeton money might have something to do with it…

Update: I keep on finding out about more of these Templeton-funded things, they are endless. Templeton is funding an Institute for Cross-Disciplinary Engagement at Dartmouth. Themes to be investigated are “Can science alone explain the nature of reality?”, “Is there free will?” and “Is there purpose in the universe?”. Among their many activities will be an event featuring a dialogue between Sean Carroll and a Buddhist Scholar in San Francisco in February.

31 Aug 18:38

No thanks

by Minnesotastan

NEW YORK (Reuters) - A judge on Thursday rejected Citigroup Inc's bid for a preliminary injunction to stop AT&T Inc from using the phrase "AT&T thanks" on a customer loyalty program, which the bank called too similar to its trademarked "thankyou."

U.S. District Judge Katherine Forrest in Manhattan said Citigroup has not shown that customers would likely be confused, or that it would suffer irreparable harm, if AT&T kept saying "AT&T thanks" while the bank's lawsuit continued.

She also said AT&T provided solid evidence that forcing it to start saying something other than "AT&T thanks" would cause an "expensive and significant disruption."

Citigroup had no immediate comment. AT&T said in a statement it was pleased with the decision, and maintained that "the law does not allow one company to own the word 'thanks.'"

The fourth-largest U.S. bank by assets sued AT&T on June 9, one week after the Dallas-based phone company launched "AT&T thanks" in a dispute that threatened to damage a co-branding relationship dating to 1998.

Citigroup said AT&T went too far, having known the New York-based bank would object, since Citigroup had extensively used "thankyou" on its own customer loyalty and reward programs since 2004.

Offered without comment.

27 Aug 02:26

Baltimore Police Deploy Surveillance Tech Designed for the Iraq War

by Eric Boehm

If you've visited Baltimore at any point during 2016, there's a good chance your every movement was tracked by the city's newest high-tech surveillance program.

The city is filthy with police surveillance cameras, but until this week little was known about the Baltimore Police Department's latest and most controversial tool for watching residents and visitors: a small aircraft that circles the city on an almost continuous basis, recording the movements of cars, people and everything else.

That aerial surveillance technology is the subject of a fascinating feature story published by Bloomberg Businessweek on Tuesday. Among the most startling details in the piece: the same program now being used in Baltimore—and being pitched to other police departments in other major cities—was originally designed to catch insurgents planting roadside bombs during the occupation of Iraq.

This is how the War on Terror comes home to roost.

More than six months after the covert eye-in-the-sky program was launched, the city is still barely admitting that it exists. The Baltimore Police Department never asked the public for permission—or, heck, even told them it was happening.

The project is run by an Ohio-based company called Persistent Surveillance Systems and relies on a "sophisticated array of cameras" attached to the belly of a small Cessna aircraft, according to Bloomberg's Monte Reel. The cameras can capture an area of roughly 30 square miles (about one-third the size of Baltimore) at any given time and continuously transmit real-time images to a group of analysts on the ground. All the footage is saved on hard drives for an unknown length of time.

Police have used the cameras to track suspected criminals and investigate a wide variety of crimes—"from property thefts to shootings," they claim in the article—but the secret cameras have also been used to keep tabs on peaceful protestors, like those who stood outside the city's courthouse on June 23 when one of the police officers accused of killing Freddie Gray was acquitted on all charges.

"This is a big deal," says Jay Stanley, a senior policy analyst for the ACLU. "It continues to be stunning that American police forces feel that they can use deeply radical and controversial surveillance systems, which raise the most profound questions about our society and its values, without telling the public that will be subject to these technologies—the public they are supposed to be serving."

Even now that the program has been revealed by Bloomberg, the city still won't admit that it's happening. The police department refused to comment for the Bloomberg story and did not return calls from Reason.

Before Baltimore, Persistent Surveillance quietly conducted a nine-day trial with the Los Angeles County Sheriff's Department in 2012, flying surveillance planes over Compton. The people being watched—including Compton Mayor Aja Brown—weren't told about it until a year later, and they were rightfully outraged when they found out.

Residents of Baltimore should be equally outraged that this has happened without their consent or knowledge, though perhaps the city's long history of intrusive surveillance and abusive policing has numbed the response—being observed from the sky certainly beats getting kicked in the chest (literally), right?

Ross McNutt, founder of Persistent Surveillance, tells Bloomberg that he believes the aerial surveillance can help police departments reduce crime by as much as 20 percent, though he also admits that he has no actual data to support that claim. The usage of the planes in Baltimore seems like a trial run before the company tries to market the same program to other cities—McNutt says he's already approached police departments in 20 different jurisdictions.

This is actually the second time in less than a year that residents of Baltimore have found out they were subjected to secret aerial surveillance. In 2015, it was revealed that mysterious planes had circled the city for hours at a time during the violent protests that erupted across the city after Freddie Gray's death in police custody. The FBI later admitted that those planes were part of a fleet of surveillance planes used by federal law enforcement, but the AP reported that the FBI's fleet was not equipped for "bulk collection activities."

At the time, the ACLU noted the existence of Persistent Surveillance and wondered if it was that company's technology at work. It wasn't, but only a few months later, the Baltimore Police Department allowed Persistent Surveillance to start making similar flights—without telling anyone it was happening or asking permission from city officials and the general public.

As the Bloomberg story makes clear, these flights are very much about bulk collection. Videos are saved on "massive hard drives" and can be accessed at a later date.

There would seem to be some major constitutional issues with police surveillance on such a massive scale, but decades of court rulings have given wide deference to cops' ability to observe and track anyone and everything—with human eyes or cameras, it doesn't seem to matter.

If cities can put surveillance cameras on every street corner, if police can use license plate scanners to identify and track vehicles even when they are merely sitting in their owners' driveways and if cops can fly over a fenced-in backyard to see if someone might be growing marijuana—well, at that point what Persistent Surveillance is doing might be a difference of degree but not of kind. McNutt's company is just taking the big picture, literally.

Still, this is the first time police have had the ability to surveil an entire city in such a broad way.

In the Bloomberg piece (and in a presentation McNutt gave to the ACLU, in an attempt to gain their approval for his technology—or at least to head off a lawsuit from them), Persistent Surveillance argues that their technology is actually less intrusive than many forms of street-level surveillance because the resolution is too low to identify individual people. Rather, people are merely "pixelated dots."

Stanley says that's a hollow argument, since surveillance systems never stand in isolation. Those "pixelated dots" can be followed forward and backward in time as they move around the city. Used in conjunction with street-level surveillance, these aerial observations have the potential to track almost anyone in the city.

"Baltimore, like some other cities, has a lot of ground based cameras, but those cameras do not cover every square inch of the city, and their feeds are not stitched together with an artificial intelligence agent that is capable of using them in a coordinated fashion to follow individuals around anywhere within a 30-square-mile area," Stanley says.

Despite claims by McNutt that this surveillance technology will only be used to help police solve "major crimes," there are already signs that mission creep is happening. Persistent Surveillance has used their cameras to track Black Lives Matter protestors who were not accused of any crimes after Baltimore police expressed concern about possible "disruptions."

Indeed, the whole project is an example of mission creep. McNutt's company was originally contracted by the U.S. military to help find and catch insurgents making roadside bombs in Iraq, but now the technology is being deployed on the home front—just like so many other leftovers from a decade-plus of war in the Middle East.

24 Aug 00:44

The Only Way To Get A Good Resolution

by noreply@blogger.com (Atrios)

The company that makes EpiPens has decided to gouge the hell out of people. There's nothing magical about the drugs. They're cheap and off patent. It's the delivery device. No one's having any luck getting a quality competitor past the FDA.

Will this crisis be solved? It probably depends on how many members of Congress have family members who need them.

Because that's how things work. Oh, and the CEO of the company that makes them just happens to be the daughter of another US senator. Because that's how things work, too.