Shared posts

27 Apr 19:45

ADT Sues Amazon's Ring Over Use of Blue Octagon Logo

by msmash
ADT, a home security company in the United States with over 6 million customers, is suing Amazon's Ring, alleging that the DIY home security company is copying ADT's logo and profiting from customer trust associated with it. From a report: ADT has asked a federal judge in Florida to order Ring to stop using its blue, octagonal signs and to pay unspecified compensation to the security company. In the complaint, ADT said it asked Ring to stop copying its blue octagon logo in 2016, after which the Amazon-owned company removed the blue color from its sign, but kept the octagon shape. In late March, upon releasing a new outdoor siren, Ring added the blue back to its advertising materials. ADT also said in the complaint that it owns 12 trademarks for the shape, color and look of its blue, octagonal sign.

Read more of this story at Slashdot.

29 May 14:58

Facebook Announces Messenger Security Features that Don't Compromise Privacy

by Bruce Schneier

Note that this is "announced," so we don't know when it's actually going to be implemented.

Facebook today announced new features for Messenger that will alert you when messages appear to come from financial scammers or potential child abusers, displaying warnings in the Messenger app that provide tips and suggest you block the offenders. The feature, which Facebook started rolling out on Android in March and is now bringing to iOS, uses machine learning analysis of communications across Facebook Messenger's billion-plus users to identify shady behaviors. But crucially, Facebook says that the detection will occur only based on metadata -- not analysis of the content of messages -- so that it doesn't undermine the end-to-end encryption that Messenger offers in its Secret Conversations feature. Facebook has said it will eventually roll out that end-to-end encryption to all Messenger chats by default.

That default Messenger encryption will take years to implement.

More:

Facebook hasn't revealed many details about how its machine-learning abuse detection tricks will work. But a Facebook spokesperson tells WIRED the detection mechanisms are based on metadata alone: who is talking to whom, when they send messages, with what frequency, and other attributes of the relevant accounts -- essentially everything other than the content of communications, which Facebook's servers can't access when those messages are encrypted. "We can get pretty good signals that we can develop through machine learning models, which will obviously improve over time," a Facebook spokesperson told WIRED in a phone call. They declined to share more details in part because the company says it doesn't want to inadvertently help bad actors circumvent its safeguards.

The company's blog post offers the example of an adult sending messages or friend requests to a large number of minors as one case where its behavioral detection mechanisms can spot a likely abuser. In other cases, Facebook says, it will weigh a lack of connections between two people's social graphs -- a sign that they don't know each other -- or consider previous instances where users reported or blocked someone as a clue that they're up to something shady.
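
To make this concrete, here is a hypothetical sketch in Python of a metadata-only signal of the kind described above. The feature names and thresholds are my own illustration; Facebook has not disclosed its model, and the real system is a trained classifier rather than a hand-written rule.

    # Hypothetical metadata features -- no message content involved.
    from dataclasses import dataclass

    @dataclass
    class ContactMetadata:
        messages_last_day: int          # contact frequency
        mutual_friends: int             # social-graph overlap
        sender_account_age_days: int    # account attribute
        sender_prior_reports: int       # previous reports/blocks by others

    def looks_suspicious(m: ContactMetadata) -> bool:
        # Crude stand-in for the ML model: flag high-volume contact from
        # a new, unconnected, previously reported account.
        return (m.messages_last_day > 20
                and m.mutual_friends == 0
                and m.sender_account_age_days < 30
                and m.sender_prior_reports > 0)

    print(looks_suspicious(ContactMetadata(50, 0, 5, 2)))  # True -> show warning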

One screenshot from Facebook, for instance, shows an alert that asks if a message recipient knows a potential scammer. If they say no, the alert suggests blocking the sender, and offers tips about never sending money to a stranger. In another example, the app detects that someone is using a name and profile photo to impersonate the recipient's friend. An alert then shows the impersonator's and real friend's profiles side-by-side, suggesting that the user block the fraudster.

Details from Facebook

01 Jun 14:20

I Made Winslow Cuter

Hannelore has become even more powerful

This is a bonus comic I drew a couple months ago for my Patreon supporters! Thank you, Patreon supporters. Gonna run some guest comics next week while I get back to work after all the travel I've been doing.

25 Apr 20:08

Faking Domain Names with Unicode Characters

by Bruce Schneier

It's things like this that make phishing attacks easier.
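
As a concrete illustration (mine, not from the post), Python's built-in IDNA codec shows how a domain containing a Cyrillic look-alike character differs from the ASCII original once encoded for DNS:

    # The Cyrillic 'а' (U+0430) looks identical to the Latin 'a' in most
    # fonts, but it is a different code point and a different domain.
    ascii_domain = "apple.com"
    spoofed = "\u0430pple.com"  # homograph: Cyrillic 'а' + "pple.com"

    print(ascii_domain == spoofed)  # False: the strings differ
    print(spoofed.encode("idna"))   # b'xn--pple-43d.com' -- what DNS actually sees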

News article.

16 Nov 04:32

Experts Say Internet 'Mega' Attacks Are on the Rise

by msmash
An anonymous reader shares a Fortune report: The phenomenon of hackers knocking websites offline with massive floods of Internet traffic is nothing new. But the pattern of these so-called DDoS attacks (for "distributed denial of service") is changing, according to a new report from internet provider Akamai. The report, published on Tuesday, suggests the overall number of DDoS attacks has not risen significantly in 2016, but that the force of these attacks is increasing. Akamai says it confronted 19 "mega attacks" in the third quarter of this year, including the two biggest it has ever encountered. "It's interesting that while the overall number of attacks fell by 8% quarter over quarter, the number of large attacks, as well as the size of the biggest attacks, grew significantly," said the report.

Read more of this story at Slashdot.

26 Oct 16:05

Big Low and Big Waves Off of the Northwest Coast

by noreply@blogger.com (Cliff Mass)
A huge, deep low pressure system is right off our coast, as shown by the infrared satellite image from Monday evening. You can see the cloud bands spiraling into the low center....the more turns, the deeper the low. To paraphrase Donald Trump--it's HUGE.


The UW WRF forecast for 8 PM Monday night shows the pressure distribution: an extremely large system with an intense pressure gradient, and thus strong winds, over the eastern Pacific. Fortunately, we are just outside of the action.

The forecast wind gusts are substantial, with peaks over 70 mph and strong winds extending to just offshore of the Washington/Oregon coast. The lowest wind speed? In the center of the storm!


Big waves can be generated by three things: strong winds, long duration, and long fetch (the length over which the winds interact with the ocean). With this strong, large, and slow-moving system, these requirements were met in spades.

Here are the forecast wave heights for Tuesday evening from the NOAA/NWS WaveWatch III system. Up to 10 m (33 ft) waves off the WA coast! If you were planning a cruise offshore, you might pick another activity.

_____________________

I-732 is perhaps the most important ballot measure in decades; please support it.

I-732, the revenue-neutral carbon tax swap, will help reduce Washington State's greenhouse gas emissions, make our tax system less regressive, and potentially serve as a potent bipartisan model for the rest of the nation. More information here.

Some opponents of I-732 are spreading false information, suggesting that I-732 is not revenue neutral.   This claim can be easily disproven as discussed here.  

I strongly support I-732, as do many UW climate scientists. We have an unprecedented opportunity to lead the nation in reducing carbon emissions and to establish a model that could spread around the country.

And, if you are interested in learning more about the potential of nuclear power to help with global warming, you might want to attend the next "Climate on Tap" gathering on Nov 1. More information here: https://www.facebook.com/events/353541914987425/


08 Oct 06:03

Hatch The Plan

06 Jan 01:59

Ancient Egyptian Brewer's Tomb Found

by samzenpus
Rambo Tribble writes "Reminding us of beer's pivotal role in the civilization of humankind, the BBC comments on the discovery of an Ancient Egyptian tomb, belonging to the distinguished 'head of beer production' in the Pharaoh's court. From the article: 'Experts say the tomb's wall paintings are well preserved and depict daily life as well as religious rituals. Antiquities Minister Mohamed Ibrahim told the Egyptian al-Ahram newspaper that security had been tightened around the tomb until excavation works are complete.'"

Read more of this story at Slashdot.

02 Jan 16:55

To Hell With 2013: A Pep Talk for the New Year

good riddance to 2013 and a pep talk for a better 2014

This comic is dedicated to everyone who had terrible 2013s. Also I just wanted to draw a giant war tortoise crushing the skull of an anthropomorphized year.

25 Dec 06:07

Happy Holidays 2013

12 Dec 03:19

Switzerland Wants To Become the World's Data Vault

by samzenpus
wiredmikey writes "Business for Switzerland's 55 data centers is booming. They benefit from the Swiss reputation for security and stability, and some predict the nation already famous for its super-safe banks will soon also be known as the world's data vault. For example, housed in one of Switzerland's numerous deserted Cold War-era army barracks, one high-tech data center is hidden behind four-ton steel doors built to withstand a nuclear attack — plus biometric scanners and an armed guard. Such tight security is in growing demand in a world shaking from repeated leaks scandals and fears of spies lurking behind every byte."

Read more of this story at Slashdot.

18 Sep 18:06

Yochai Benkler on the NSA

by Bruce Schneier

Excellent essay:

We have learned that in pursuit of its bureaucratic mission to obtain signals intelligence in a pervasively networked world, the NSA has mounted a systematic campaign against the foundations of American power: constitutional checks and balances, technological leadership, and market entrepreneurship. The NSA scandal is no longer about privacy, or a particular violation of constitutional or legislative obligations. The American body politic is suffering a severe case of auto-immune disease: our defense system is attacking other critical systems of our body.

10 Sep 21:39

Three ways CFAR has changed my view of rationality

Submitted by Julia_Galef • 99 votes • 58 comments

The Center for Applied Rationality's perspective on rationality is quite similar to Less Wrong's. In particular, we share many of Less Wrong's differences from what's sometimes called "traditional" rationality, such as Less Wrong's inclusion of Bayesian probability theory and the science on heuristics and biases.

But after spending the last year and a half with CFAR as we've developed, tested, and attempted to teach hundreds of different versions of rationality techniques, I've noticed that my picture of what rationality looks like has shifted somewhat from what I perceive to be the most common picture of rationality on Less Wrong. Here are three ways I think CFAR has come to see the landscape of rationality differently than Less Wrong typically does – not disagreements per se, but differences in focus or approach. (Disclaimer: I'm not speaking for the rest of CFAR here; these are my own impressions.)

 

1. We think less in terms of epistemic versus instrumental rationality.

Formally, the methods of normative epistemic versus instrumental rationality are distinct: Bayesian inference and expected utility maximization. But methods like "use Bayes' Theorem" or "maximize expected utility" are usually too abstract and high-level to be helpful for a human being trying to take manageable steps towards improving her rationality. And when you zoom in from that high-level description of rationality down to the more concrete level of "What five-second mental habits should I be training?" the distinction between epistemic and instrumental rationality becomes less helpful.
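
For concreteness, here is a toy sketch (mine, not CFAR's) of those two formal methods at their most bare-bones, which mostly underlines how far the formalism sits from trainable five-second habits:

    # One Bayesian update and one expected-utility choice.
    def bayes_update(prior, p_e_given_h, p_e_given_not_h):
        # P(H|E) = P(E|H)P(H) / P(E), expanding P(E) over H and not-H.
        numerator = prior * p_e_given_h
        return numerator / (numerator + (1 - prior) * p_e_given_not_h)

    def expected_utility(outcomes):
        # outcomes: list of (probability, utility) pairs for one action.
        return sum(p * u for p, u in outcomes)

    # A 10% prior, updated on evidence 4x likelier under the hypothesis.
    print(bayes_update(0.10, 0.8, 0.2))              # ~0.308
    # The expected utility of a 50/50 gamble between +10 and -2.
    print(expected_utility([(0.5, 10), (0.5, -2)]))  # 4.0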

Here's an analogy: epistemic rationality is like physics, where the goal is to figure out what's true about the world, and instrumental rationality is like engineering, where the goal is to accomplish something you want as efficiently and effectively as possible. You need physics to do engineering; or I suppose you could say that doing engineering is doing physics, but with a practical goal. However, there's plenty of physics that's done for its own sake, and doesn't have obvious practical applications, at least not yet. (String theory, for example.) Similarly, you need a fair amount of epistemic rationality in order to be instrumentally rational, though there are parts of epistemic rationality that many of us practice for their own sake, and not as a means to an end. (For example, I appreciate clarifying my thinking about free will even though I don't expect it to change any of my behavior.)

In this analogy, many skills we focus on at CFAR are akin to essential math, like linear algebra or differential equations, which compose the fabric of both physics and engineering. It would be foolish to expect someone who wasn't comfortable with math to successfully calculate a planet's trajectory or design a bridge. And it would be similarly foolish to expect you to successfully update like a Bayesian or maximize your utility if you lacked certain underlying skills. Like, for instance: Noticing your emotional reactions, and being able to shift them if it would be useful. Doing thought experiments. Noticing and overcoming learned helplessness. Visualizing in concrete detail. Preventing yourself from flinching away from a thought. Rewarding yourself for mental habits you want to reinforce. 

These and other building blocks of rationality are essential both for reaching truer beliefs, and for getting what you value; they don't fall cleanly into either an "epistemic" or an "instrumental" category. Which is why, when I consider what pieces of rationality CFAR should be developing, I've been thinking less in terms of "How can we be more epistemically rational?" or "How can we be more instrumentally rational?" and instead using queries like, "How can we be more metacognitive?"

 

2. We think more in terms of a modular mind.

The human mind isn't one coordinated, unified agent, but rather a collection of different processes that often aren't working in sync, or even aware of what the others are up to. Less Wrong certainly knows this; see, for example, discussions of anticipations versus professions, aliefs, and metawanting. But in general we gloss over that fact, because it's so much simpler and more natural to talk about "what I believe" or "what I want," even if technically there is no single "I" doing the believing or wanting. And for many purposes that kind of approximation is fine.

But a rationality-for-humans usually can't rely on that shorthand. Any attempt to change what "I" believe, or optimize for what "I" want, forces a confrontation of the fact that there are multiple, contradictory things that could reasonably be called "beliefs," or "wants," coexisting in the same mind. So a large part of applied rationality turns out to be about noticing those contradictions and trying to achieve coherence, in some fashion, before you can even begin to update on evidence or plan an action.

Many of the techniques we're developing at CFAR fall roughly into the template of coordinating between your two systems of cognition: implicit-reasoning System 1 and explicit-reasoning System 2. For example, knowing when each system is more likely to be reliable. Or knowing how to get System 2 to convince System 1 of something ("We're not going to die if we go talk to that stranger"). Or knowing what kinds of questions System 2 should ask of System 1 to find out why it's uneasy about the conclusion at which System 2 has arrived.

This is all, of course, with the disclaimer that the anthropomorphizing of the systems of cognition, and imagining them talking to each other, is merely a useful metaphor. Even the classification of human cognition into Systems 1 and 2 is probably not strictly true, but it's true enough to be useful. And other metaphors prove useful as well – for example, some difficulties with what feels like akrasia become more tractable when you model your future selves as different entities, as we do in the current version of our "Delegating to yourself" class.

 

3. We're more focused on emotions.

There's relatively little discussion of emotions on Less Wrong, but they occupy a central place in CFAR's curriculum and organizational culture.

It used to frustrate me when people would say something that revealed they held a Straw Vulcan-esque belief that "rationalist = emotionless robot". But now when I encounter that misconception, it just makes me want to smile, because I'm thinking to myself: "If you had any idea how much time we spend at CFAR talking about our feelings…"

Being able to put yourself into particular emotional states seems to make a lot of pieces of rationality easier. For example, for most of us, it's instrumentally rational to explore a wider set of possible actions – different ways of studying, holding conversations, trying to be happy, and so on – beyond whatever our defaults happen to be. And for most of us, inertia and aversions get in the way of that exploration. But getting yourself into "playful" mode (one of the hypothesized primary emotional circuits common across mammals) can make it easier to branch out into a wider swath of Possible-Action Space. Similarly, being able to call up a feeling of curiosity or of "seeking" (another candidate for a primary emotional circuit) can help you conquer motivated cognition and learned blankness.  

And simply being able to notice your emotional state is rarer and more valuable than most people realize. For example, if you're in fight-or-flight mode, you're going to feel more compelled to reject arguments that feel like a challenge to your identity. Being attuned to the signs of sympathetic nervous system activation – that you're tensing up, or that your heart rate is increasing – means you get cues to double-check your reasoning, or to coax yourself into another emotional state.

We also use emotions as sources of data. You can learn to tap into feelings of surprise or confusion to get a sense of how probable you implicitly expect some event to be. Or practice simulating hypotheticals ("What if I knew that my novel would never sell well?") and observing your resultant emotions, to get a clearer picture of your utility function. 

And emotions-as-data can be a valuable check on your System 2's conclusions. One of our standard classes is "Goal Factoring," which entails finding some alternate set of actions through which you can purchase the goods you want more cheaply. So you might reason, "I'm doing martial arts for the exercise and self-defense benefits... but I could purchase both of those things for less time investment by jogging to work and carrying Mace." If you listened to your emotional reaction to that proposal, however, you might notice you still feel sad about giving up martial arts even if you were getting the same amount of exercise and self-defense benefits some other way.

Which probably means you've got other reasons for doing martial arts that you haven't yet explicitly acknowledged -- for example, maybe you just think it's cool. If so, that's important, and deserves a place in your decisionmaking. Listening for those emotional cues that your explicit reasoning has missed something is a crucial step, and to the extent that aspiring rationalists sometimes forget it, I suppose that's a Steel-Manned Straw Vulcan (Steel Vulcan?) that actually is worth worrying about.

Conclusion

I'll name one more trait that unites, rather than divides, CFAR and Less Wrong. We both diverge from "traditional" rationality in that we're concerned with determining which general methods systematically perform well, rather than defending some set of methods as "rational" on a priori criteria alone. So CFAR's picture of what rationality looks like, and how to become more rational, will and should change over the coming years as we learn more about the effects of our rationality training efforts. 

03 Sep 16:02

Our Newfound Fear of Risk

by Bruce Schneier

We're afraid of risk. It's a normal part of life, but we're increasingly unwilling to accept it at any level. So we turn to technology to protect us. The problem is that technological security measures aren't free. They cost money, of course, but they cost other things as well. They often don't provide the security they advertise, and -- paradoxically -- they often increase risk somewhere else. This problem is particularly stark when the risk involves another person: crime, terrorism, and so on. While technology has made us much safer against natural risks like accidents and disease, it works less well against man-made risks.

Three examples:

  1. We have allowed the police to turn themselves into a paramilitary organization. They deploy SWAT teams multiple times a day, almost always in nondangerous situations. They tase people at minimal provocation, often when it's not warranted. Unprovoked shootings are on the rise. One result of these measures is that honest mistakes -- a wrong address on a warrant, a misunderstanding -- result in the terrorizing of innocent people, and more death in what were once nonviolent confrontations with police.

  2. We accept zero-tolerance policies in schools. This results in ridiculous situations, where young children are suspended for pointing gun-shaped fingers at other students or drawing pictures of guns with crayons, and high-school students are disciplined for giving each other over-the-counter pain relievers. The cost of these policies is enormous, both in dollars to implement and in their long-lasting effects on students.

  3. We have spent over one trillion dollars and thousands of lives fighting terrorism in the past decade -- including the wars in Iraq and Afghanistan -- money that could have been better used in all sorts of ways. We now know that the NSA has turned into a massive domestic surveillance organization, and that its data is also used by other government organizations, which then lie about it. Our foreign policy has changed for the worse: we spy on everyone, we trample human rights abroad, our drones kill indiscriminately, and our diplomatic outposts have either closed down or become fortresses. In the months after 9/11, so many people chose to drive instead of fly that the resulting deaths dwarfed the deaths from the terrorist attack itself, because cars are much more dangerous than airplanes.

There are lots more examples, but the general point is that we tend to fixate on a particular risk and then do everything we can to mitigate it, including giving up our freedoms and liberties.

There's a subtle psychological explanation. Risk tolerance is both cultural and dependent on the environment around us. As we have advanced technologically as a society, we have reduced many of the risks that have been with us for millennia. Fatal childhood diseases are things of the past, many adult diseases are curable, accidents are rarer and more survivable, buildings collapse less often, death by violence has declined considerably, and so on. All over the world -- among the wealthier of us who live in peaceful Western countries -- our lives have become safer.

Our notions of risk are not absolute; they're based more on how far they are from whatever we think of as "normal." So as our perception of what is normal gets safer, the remaining risks stand out more. When your population is dying of the plague, protecting yourself from the occasional thief or murderer is a luxury. When everyone is healthy, it becomes a necessity.

Some of this fear results from imperfect risk perception. We're bad at accurately assessing risk; we tend to exaggerate spectacular, strange, and rare events, and downplay ordinary, familiar, and common ones. This leads us to believe that violence against police, school shootings, and terrorist attacks are more common and more deadly than they actually are -- and that the costs, dangers, and risks of a militarized police, a school system without flexibility, and a surveillance state without privacy are less than they really are.

Some of this fear stems from the fact that we put people in charge of just one aspect of the risk equation. No one wants to be the senior officer who didn't approve the SWAT team for the one subpoena delivery that resulted in an officer being shot. No one wants to be the school principal who didn't discipline -- no matter how benign the infraction -- the one student who became a shooter. No one wants to be the president who rolled back counterterrorism measures, just in time to have a plot succeed. Those in charge will be naturally risk averse, since they personally shoulder so much of the burden.

We also expect that science and technology should be able to mitigate these risks, as they mitigate so many others. There's a fundamental problem at the intersection of these security measures with science and technology; it has to do with the types of risk they're arrayed against. Most of the risks we face in life are against nature: disease, accident, weather, random chance. As our science has improved -- medicine is the big one, but other sciences as well -- we become better at mitigating and recovering from those sorts of risks.

Security measures combat a very different sort of risk: a risk stemming from another person. People are intelligent, and they can adapt to new security measures in ways nature cannot. An earthquake isn't able to figure out how to topple structures constructed under some new and safer building code, and an automobile won't invent a new form of accident that undermines medical advances that have made existing accidents more survivable. But a terrorist will change his tactics and targets in response to new security measures. An otherwise innocent person will change his behavior in response to a police force that compels compliance at the threat of a Taser. We will all change, living in a surveillance state.

When you implement measures to mitigate the effects of the random risks of the world, you're safer as a result. When you implement measures to reduce the risks from your fellow human beings, the human beings adapt and you get less risk reduction than you'd expect -- and you also get more side effects, because we all adapt.

We need to relearn how to recognize the trade-offs that come from risk management, especially risk from our fellow human beings. We need to relearn how to accept risk, and even embrace it, as essential to human progress and our free society. The more we expect technology to protect us from people in the same way it protects us from nature, the more we will sacrifice the very values of our society in futile attempts to achieve this security.

This essay previously appeared on Forbes.com.

27 Aug 21:47

Just Thinking About Science Triggers Moral Behavior

by Soulskill
ananyo writes "The association between science and morality is so ingrained that merely thinking about it can trigger more moral behavior, according to a study by researchers at the University of California Santa Barbara. The researchers hypothesized that there is a deep-seated perception of science as a moral pursuit — its emphasis on truth-seeking, impartiality and rationality privileges collective well-being above all else. The researchers conducted four separate studies to test this. In the first, participants read a vignette of a date-rape and were asked to rate the 'wrongness' of the offense before answering a questionnaire measuring their belief in science. Those reporting greater belief in science condemned the act more harshly. In the other three, participants primed with science-related words were more altruistic."

Read more of this story at Slashdot.

15 Aug 16:35

Drawing Down: How To Roll Back Police Militarization In America

by Radley Balko
When the FBI finally located Whitey Bulger in 2011 after searching for 16 years, the reputed mobster was suspected of involvement in 19 murders in the 1970s and '80s, and was thought to be armed with a massive arsenal of weapons. He was also 81 at the time, in poor physical health, and looking at spending the rest of his life in prison. Of all the people who might meet the criteria for arrest by a SWAT team, one might think that Bulger would top the list.

Yet instead of sending in a tactical team to tear down Bulger’s door in the middle of the night, the FBI took a different approach. After some investigating, FBI officials cut the lock on a storage locker Bulger used in the apartment complex where he was staying. They then had the property manager call Bulger to tell him someone may have broken into his locker. When Bulger went to investigate, he was arrested without incident. There was no battering ram, there were no flash grenades, there was no midnight assault on his home.

That peaceful apprehension of a known violent fugitive, found guilty this week of participating in 11 murders and a raft of other crimes, stands in stark contrast to the way tens of thousands of Americans are confronted each year by SWAT teams battering down their doors to serve warrants for nonviolent crimes, mostly involving drugs.

On the night of Jan. 5, 2011, for example, police in Framingham, Mass., raided a Fountain Street apartment that was home to Eurie Stamps and his wife, Norma Bushfan-Stamps. An undercover officer had allegedly purchased drugs from Norma's 20-year-old son, Joseph Bushfan, and another man, Dwayne Barrett, earlier that evening, and now the police wanted to arrest them. They took a battering ram to the door, set off a flash grenade, and forced their way inside.

As the SWAT team moved through the apartment, screaming at everyone to get on the floor, Officer Paul Duncan approached Eurie Stamps. The 68-year-old, not suspected of any crime, was watching a basketball game in his pajamas when the police came in.

By the time Duncan got to him in a hallway, Stamps was face-down on the floor with his arms over his head, as police had instructed him. As Duncan moved to pull Stamps' arms behind him, he says he fell backwards, somehow causing his gun to discharge, shooting Stamps. The grandfather of 12 was killed in his own home, while complying with police orders during a raid for crimes in which he had no involvement.

The Obama administration has begun talking about reforming the criminal justice system, notably this week, when Attorney General Eric Holder announced changes to how federal prosecutors will consider mandatory minimum sentences. If government leaders are looking for another issue to tackle, they might consider the astonishing evolution of America’s police forces over the last 30 years.

Today in America, SWAT teams are deployed about 100 to 150 times per day, or about 50,000 times per year -- a dramatic increase from the 3,000 or so annual deployments in the early 1980s, or the few hundred in the 1970s. The vast majority of today's deployments are to serve search warrants for drug crimes. But the use of SWAT tactics to enforce regulatory law also appears to be rising. This month, for example, a SWAT team raided the Garden of Eden, a sustainable growth farm in Arlington, Texas, supposedly to look for marijuana. The police found no pot, however, and the real intent of the raid appears to have been for code enforcement, as the officers came armed with an inspection notice for nuisance abatement.

Where these teams were once used only in emergency situations, they're used today mostly as an investigative tool against people merely suspected of crimes. In many police agencies, paramilitary tactics have become the first option, where they once were the last.

“It’s really about a lack of imagination and a lack of creativity,” says Norm Stamper, a retired cop who served as police chief of Seattle from 1994 to 2000. “When your answer to every problem is more force, it shows that you haven’t been taught and trained to consider other options."

Why can’t drug suspects be arrested the way Bulger was -- with as little violence and confrontation as possible? One big reason is a lack of resources. Many police agencies serve several drug warrants per week. Some serve several per day. They simply don't have the time or personnel to come up with a Bulger-like plan for each one. It's quicker and easier for the police to use overwhelming force.

"There are just too many of these cases," says Joe Key, a longtime cop who served in the Baltimore police department from 1971 to 1995 and started the department's SWAT team.

Key adds that another reason police don't want to set up a perimeter and allow drug suspects to surrender peacefully is that it would give them an opportunity to destroy evidence. That, of course, means that, perversely, genuinely violent suspects are treated less harshly than people suspected of nonviolent crimes.

"Someone might say that's an indication that we need to reconsider these drug laws," Key says. "But that's a whole different argument."

Add to all of this a Pentagon program that gives surplus military equipment to local police agencies, a Department of Homeland Security program that cuts checks to police departments to buy yet more military gear, and federal grants specifically tied to drug policing and asset forfeiture policies, both of which reward police officials who send their SWAT teams on drug raids, and it isn't difficult to see how we reached the point where SWAT teams are deployed so frequently.

The question is, how could the U.S. roll all of this back? I interviewed numerous former police chiefs, police officers and federal officials, all of whom were concerned about the militarization of America's police forces. Here are some of their suggestions for reform:

End The Drug War

Holder’s announcement this week at least acknowledges the drug war’s role in mass incarceration. But the damage inflicted by the country’s 40-year drug fight goes well beyond prisons. It’s also been the driving force behind America’s mass police militarization since at least the early 1980s, and the best way to rein in the trend would be to simply end prohibition altogether.

Complete legalization is, of course, never going to happen. But even something short of legalization, like decriminalization, would take away many of the incentives to fight the drug war as if it were an actual war. The federal government could also leave it to the states to determine drug policy, and with what priority and level of force it should be enforced.

Your average small town SWAT team would probably continue to exist, at least in the short term. But these teams are expensive to maintain, and without federal funding, it seems likely that many would eventually disband.

End Anti-Drug Byrne Grants

Just ending the federal incentives for mass police militarization would help. The Edward Byrne Memorial Justice Assistance Grant Program, for example, distributes grants to agencies for a variety of criminal justice programs, many of them positive. But the grants can also go to police departments solely for drug policing. Even more destructive are grants that create multi-jurisdictional drug task forces, basically roving squads of narcotics cops that serve multiple jurisdictions, and often lack any real accountability. Even back in 2000, former FBI Director William Webster told NBC News that the federal government had become “too enamored with SWAT teams, draining money away from conventional law enforcement.”

End The High Intensity Drug Trafficking Areas Program

The federal HIDTA program is another inducement for more aggressive enforcement of drug laws. Once a police department reaches a threshold of drug arrests, the agency becomes eligible for yet more federal funding as a region with high illicit drug activity. This then becomes an incentive for police departments to desire the high drug trafficking label, which means they'll devote even more resources to drug policing, which means more raids.

End The "Equitable Sharing" Civil Asset Forfeiture Program

Under civil asset forfeiture, police agencies can seize any piece of property -- cash, cars, homes -- that they can reasonably connect to criminal activity. In most places, the proceeds of the seizure go to the police department. Since civil asset forfeiture is used overwhelmingly in drug investigations, this has created a strong incentive for police to send their SWAT teams to serve routine drug warrants.

In some states, however, lawmakers have recognized the perverse incentives at play, and have attempted to get rid of them by requiring any forfeiture proceeds to go to a state general fund, or toward public schools. Under the federal government's equitable sharing program, however, a local police agency can merely call up a federal law enforcement agency like the Drug Enforcement Administration to request assistance in an investigation. The entire operation is then governed by federal law. The DEA takes 10 percent to 20 percent of the seized assets, then gives the rest back to the local police agency. The effect is to restore the perverse incentives, and to thwart the will of state legislatures.

End The 1033 Program

The so-called 1033 program, passed in 1997, formalized a Pentagon policy of giving away surplus military equipment to domestic police agencies, which had been going on since the Reagan years. The new law also set up a well-funded, well-staffed office to facilitate the donations. Millions of pieces of equipment have since been given away -- $500 million worth in 2011 alone. Once they get the gear -- tanks, armored personnel vehicles, guns, helicopters, bayonets, you name it -- police agencies in tiny towns have used it to start SWAT teams. Even seemingly innocuous items like camouflage uniforms can reinforce a militaristic culture and mindset. One longtime cop (whose father was also a longtime cop and former police chief) wrote to me in an email, "One of the problems we both saw in the early 90's were departments leaving the formal police uniforms with leather belts and holsters in favor of the dark blue fatigues with nylon mesh belts and holsters. This put police in a more fighting posture."

Some law enforcement officials have been warning of the problem for years. One former Washington, D.C., police sergeant wrote in a 1997 letter to The Washington Post, "One tends to throw caution to the wind when wearing ‘commando-chic’ regalia, a bulletproof vest with the word ‘POLICE’ emblazoned on both sides, and when one is armed with high tech weaponry ... We have not yet seen a situation like [the British police occupation of] Belfast. But some police chiefs are determined to move in that direction." A Connecticut police chief told The New York Times in 2000 that switching to military-like garb "feeds a mindset that you’re not a police officer serving a community, you’re a soldier at war. I had some tough-guy cops in my department pushing for bigger and more hardware."

It isn't difficult to see how giving cops the weapons, uniforms, and vehicles of war might encourage them to take a more warlike approach to their jobs. That won't end simply by shuttering the 1033 program. But it would certainly be a start.

Reform Department Of Homeland Security Grants

Since Sept. 11, 2001, the federal government has handed out some $34 billion in grants to police departments across the country, many for the purchase of battle-grade vehicles and weapons. This program has created a cottage industry of companies to take DHS checks in exchange for guns, tanks and armored vehicles. In effect, it has given rise to a police industrial complex.

There's also little oversight. DHS can't even produce a comprehensive list of police departments that have received grants and how they've used them. Though ostensibly for anti-terrorism efforts, the grants are going to places like Fargo, N.D., where they're inevitably used for routine policing. (Or in the case of Fargo, an armored personnel vehicle with rotating turret has been used mostly for "appearances at the annual city picnic, where it’s been parked near the children’s bounce house.")

The federal government has a legitimate interest in protecting the country from terrorist attacks. So at least in theory, anti-terror grants to domestic police agencies make sense. But it seems unlikely that the grants in their current form are doing much to prevent terrorism.

End Federal Medical Marijuana Raids

In the late 1990s, the Clinton administration set a dangerous precedent when it began sending federal SWAT teams to raid medical marijuana businesses in states that had legalized the drug for that purpose. By then, the use of SWAT teams to serve drug warrants was common, and the explosion in the number and use of SWAT-like teams had already happened. But until then, police agencies at least made the claim that the use of such force was in response to a genuine threat -- that drug dealers were heavily armed, dangerous, and had no qualms about killing police.

The medical marijuana raids couldn't be justified that way. These were licensed businesses, operating openly and in compliance with state laws. The show of force wasn't about officer safety or community safety. It was about sending a message. These people were openly flouting federal law, and they were to be made into examples. This isn't the sort of government action we commonly associate with free societies. And of course, these raids have continued ever since.

Return SWAT To Its Original Purpose

Legislatures or city councils could pass laws restricting the use of SWAT teams to those rare emergencies in which there’s an imminent threat to public safety. They could limit the use of no-knock raids or even forced entry to serve warrants only on people suspected of violent crimes. One policy might be to allow the deployment of a tactical team only when police have good reason to believe that a violent crime is in the process of being committed, or is likely to be committed imminently without police intervention.

Key, who started the Baltimore SWAT team, suggests a broader rule, but one that would still impose limitations. "I think you could limit SWAT teams and the dynamic entry tactics to those cases where police can obtain a no-knock warrant," he says. "The courts impose more restrictions for no-knock warrants. You have to show evidence that a suspect may attempt to arm himself and attack police, or may destroy evidence if there's an announcement."

At the very least, lawmakers should demand an end to SWAT mission creep. It's beyond comprehension that such violent tactics would be used to enforce regulatory law. SWAT teams also shouldn’t be raiding poker games, bars where police suspect there's underage drinking or the offices of doctors suspected of over-prescribing painkillers. They shouldn't be performing license inspections on barbershops, or swarming Amish farms suspected of selling unpasteurized cheese. Like the medical marijuana raids, these sorts of raids are straight-up abuse -- for the sake of sending a political message.

Mandate Transparency

In 2008, the Maryland legislature passed a bill requiring all police agencies in the state to issue twice-yearly reports on how often they use their SWAT teams, for what purpose, what the searches found, and whether any shots were fired. It's a simple bill that puts no restrictions on the use of SWAT teams, yet was opposed by every police agency in the state.

Other states could pass similar laws. And they could go further. Police departments could track warrants from the time they’re obtained to the time they’re executed, in a database that’s accessible to civilian review boards, defense attorneys, judges, and, in some cases, the media (acknowledging that the identities of confidential informants need not be revealed). Botched and bungled raids should be documented. These include warrants served on the wrong address, warrants based on bad tips from informants and warrants that resulted in the death or injury of an officer, suspect or bystander.

Police departments should also keep running tabs of how many warrants are executed with no-knock entry versus knock-and-announce entry, how many required a forced entry, how many required the deployment of a SWAT team or other paramilitary unit, and how many used diversionary devices like flash grenades. They should also make records of what these raids turned up. If these tactics are going to be used against the public, the public at the very least deserves to know how often they’re used, why they’re used, how often things go wrong, and what sort of results the tactics are getting.

It's clear that there has been a huge increase in the number of SWAT teams and the frequency of their use. But we can't have a real debate about police militarization without better data on its pervasiveness.

There are other policies that would make police departments more transparent. The remarkable advances in and democratization of smartphone technology have enabled a large and growing number of citizens to record the actions of on-duty police officers. Rather than fighting the trend, police officials and policymakers ought to embrace it. Legislatures could pass laws that clearly establish a citizen’s right to record on-duty cops, and provide an enforcement mechanism so that citizens wrongly and illegally arrested for doing so have a course of action. As many police officials have pointed out, such policies not only expose police misconduct, leading to improvements, but can also provide exonerating evidence in cases where police officers have been wrongly accused.

All forced-entry police raids could be recorded in a tamper-proof format, and the videos made available to the public through a simple open records request. This could be done efficiently and inexpensively. Even better, it wouldn't be difficult to equip the officers participating in a raid with cameras mounted on their helmets, jackets, or guns. Not only would recording all raids help clear up disputes about how long police waited after knocking, whether police knocked at all, or who fired first, but the knowledge that every raid would be recorded would also encourage best practices among the SWAT teams. Additionally, recordings of raids would provide an accurate portrayal of how drug laws are actually enforced. It’s likely that many Americans aren’t fully aware how violent these tactics can be. Perhaps many would still support tactical raids for drug warrants even after being exposed to videos of drug raids. But if the drug war is being waged to protect the public, the public should be able to see exactly how the war is being waged.

Local police departments that receive federal funding should also be required to keep records on and report incidents of officer shootings and use of excessive force to an independent federal agency such as the National Institute of Justice or the Office of the Inspector General. Those that don't comply should lose federal funding. Currently, while all police agencies are required to keep such data, that requirement isn't enforced.

We also need easy-to-find, publicly accessible records of judges and search warrants (and where applicable, prosecutors). The public deserves to know if all the narcotics cops in a given area are going to the same judge or magistrate with their narcotics warrants, or if a given judge hasn’t declined a single warrant in, say, 20 years. As more courts use computer software to process warrants, it will get easier to compile this sort of information and make it available to the public.

Change Police Culture

All of these policies have infused too many police agencies with a culture of militarism. Neill Franklin is a former narcotics cop in Maryland, who also oversaw training at the state's police academies in the early 2000s. “I think there are two critical components to policing that cops today have forgotten," he says. "Number one, you’ve signed on to a dangerous job. That means that you’ve agreed to a certain amount of risk. You don’t get to start stepping on others’ rights to minimize that risk you agreed to take on. And number two, your first priority is not to protect yourself, it’s to protect those you’ve sworn to protect. But I don’t know how you get police officers today to value those principles again. The ‘us and everybody else’ sentiment is strong today. It’s very, very difficult to change a culture.”

But there are some practical policy changes that may work. Police today are given too little training in counseling and dispute resolution, and what little training they do get in the academy is quickly blotted out by what they learn on the street in the first few months on the job. When you’re given abundant training in the use of force, but little in using psychology, body language, and other non-coercive means of resolving a conflict, you’ll naturally gravitate toward using force. “I think about the notion of command presence,” Stamper, the former Seattle police chief, says. “When you as a police officer show up at a chaotic or threatening or dangerous situation, you need to demonstrate your command presence -- that you are the person in command of this situation. You do this with your bearing, your body language, and your voice. What I see today is that this well-disciplined notion of command presence has been shattered. Cops today think you show command presence by yelling and screaming. In my day, if you screamed, if you went to a screaming, out-of-control presence, you had failed in that situation as a cop. You’d be pulled aside by a senior cop or sergeant and made to understand in no uncertain terms that you were out of line. The very best cops I ever worked around were quiet. Which isn’t to say they were withdrawn or passive, but they were quiet. They understood the value of silence, the powerful effect of a pause."

Stamper adds that these things aren’t emphasized anymore. “Verbal persuasion is the first tool a police officer has. The more effective he or she is as a communicator, the less likely it is he or she is going to get impulsive -- or need to.”

Franklin suggests that deteriorating physical fitness at some police departments may also lead to unnecessary escalations of force -- another argument in favor of foot patrols over car patrols. “When I was commander of training in Baltimore, one of the first things I did was evaluate the physical condition of the police officers themselves,” Franklin says. “The overweight guys were the guys who knew very little about arrest control and defensive strategy. Being a police officer is a physically demanding job. You can’t be so out of shape. When you are, you’re less confident about less lethal force. It can get so that the only use of force you’re capable of using is a firearm. You also fear physical confrontation, so you’re more likely to reach for your firearm earlier. Getting cops in shape is a confidence builder, and it gets people away from relying too much on the weapons they have on their belt.”

Police should also be required to learn and understand the effect that power can have on their own psyche. They should be taught the Stanford prison experiment, the Milgram experiment and similar studies. Having complete power over another person can be immensely corrupting. But simply being aware of its corrosive effects is an important step toward guarding against them.

Police departments and policymakers should also embrace real community policing. That means taking cops out of patrol cars to walk beats and become a part of the communities they serve. It means ditching statistics-driven policing, which encourages the sorts of petty arrests of low-level offenders and use of informants that foment anger and distrust. Community policing makes cops part of the neighborhoods they serve, gives them a stake in those neighborhoods, and can be the antidote to the antagonistic us-versus-them relationship too many cops have with the citizens on their beats. (And the mentality usually goes both ways.)

More generally, politicians should be called out and held accountable when they use war rhetoric to discuss crime and illicit drugs. Words and language from policymakers have an impact on the way police officers approach their jobs, and the way they view the people with whom they interact while on patrol. If we want to dissuade them from seeing their fellow citizens as the enemy, political leaders -- the people who set the policies and appropriate the budgets for those officers -- need to stop referring to them that way.

Ultimately, we're unlikely to see any real efforts to reform or roll back police militarization until politicians are convinced there is a problem and pay a political price for not addressing it. Today, domestic police officers drive tanks and armored personnel carriers on American streets, break into homes and kill pets over pot, and batter down doors to raid poker games. They’re now subjecting homes and businesses to commando raids for white-collar and even regulatory offenses. And while there has recently been some action from state legislators, there’s been barely any opposition or concern from anyone in Congress, any governor, or any mayor of a sizable city.

Until that happens, expect more tanks, more and bigger guns, more RoboCop responses to protest, and more, and increasingly violent, raids for less and less serious crimes and infractions.

Radley Balko is a senior writer and investigative reporter for The Huffington Post. This essay is adapted from his new book, Rise of the Warrior Cop: The Militarization of America's Police Forces.

26 Jul 17:44

Gains from trade: Slug versus Galaxy - how much would I give up to control you?

Submitted by Stuart_Armstrong • 33 votes • 67 comments

Edit: Moved to main at ThrustVectoring's suggestion.

A suggestion as to how to split the gains from trade in some situations.

The problem of Power

A year or so ago, people at the FHI embarked on a grand project: to try to find out if there was a single way of resolving negotiations, or a single way of merging competing moral theories. This project made a lot of progress in finding out how hard the problem was, but very little in terms of solving it. It seemed evident that the correct solution was to weight the different utility functions and then have everyone maximise the weighted sum, but all ways of weighting had their problems (the weighting with the best properties was a very silly one: the "min-max" weighting, which sets your maximal attainable utility to 1 and your minimal to 0).
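
For illustration, here is a minimal sketch (code and numbers mine) of that min-max weighting: rescale each utility function so its minimal attainable value is 0 and its maximal is 1, then have everyone maximise the sum:

    def minmax_normalise(utilities):
        # utilities: dict mapping outcome -> raw utility for one agent.
        lo, hi = min(utilities.values()), max(utilities.values())
        return {o: (u - lo) / (hi - lo) for o, u in utilities.items()}

    agent_a = {"war": -10.0, "trade": 5.0, "peace": 2.0}
    agent_b = {"war": -1.0, "trade": 0.5, "peace": 3.0}

    norm_a = minmax_normalise(agent_a)
    norm_b = minmax_normalise(agent_b)
    # Both agents now maximise the sum of the normalised utilities.
    print(max(norm_a, key=lambda o: norm_a[o] + norm_b[o]))  # "peace"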

One thing that we didn't get close to addressing is the concept of power. If two partners in the negotiation have very different levels of power, then abstractly comparing their utilities seems the wrong solution (more to the point: it wouldn't be accepted by the powerful party).

The New Republic spans the Galaxy, with Jedi knights, battle fleets, armies, general coolness, and the manufacturing and human resources of countless systems at its command. The dull slug, ARthUrpHilIpDenu, moves very slowly around a plant, and possibly owns one leaf (or not - he can't produce the paperwork). Both these entities have preferences, but if they meet up, and their utilities are normalised abstractly, then ARthUrpHilIpDenu's preferences will weigh in far too much: a sizeable fraction of the galaxy's production will go towards satisfying the slug. Even if you think this is "fair", consider that the New Republic is the merging of countless individual preferences, so it doesn't make any sense that the two utilities get weighted equally.

The default point

After looking at various blackmail situations, it seems to me that it's the concept of default, or status quo, that most clearly differentiates between a threat and an offer. I wouldn't want you to make a credible threat, because this worsens the status quo; I would want you to make a credible offer, because this improves it. How this default is established is another matter - there may be some super-UDT approach that solves it from first principles. Maybe there is some deep way of distinguishing between threats and promises in some other way, and the default is simply the point between them.

In any case, without going any further into its meaning or derivation, I'm going to assume that the problem we're working on has a definitive default/disagreement/threat point. I'll use the default point terminology, as that is closer to the concept I'm considering.

Simple trade problems often have a very clear default point. These are my goods, those are your goods, the default is we go home with what we started with. This is what I could build, that's what you could build, the default is that we both build purely for ourselves.

If we imagine ARthUrpHilIpDenu and the New Republic were at opposite ends of a regulated wormhole, and they could only trade in safe and simple goods, then we've got a pretty clear default point.

Having a default point opens up a whole host of new bargaining equilibria, such as the Nash Bargaining Solution (NBS) and the Kalai-Smorodinsky Bargaining Solution (KSBS). But neither of these is really quite what we'd want: the KSBS is all about fairness (which generally reduces expected outcomes), while the NBS uses a product of utility values, something that makes no intrinsic sense at all (the NBS has some nice properties, like independence of irrelevant alternatives, but this only matters if the default point is reached through a process that has the same properties - and it can't be).

What am I really offering you in trade?

When two agents meet, especially if they are likely to meet more in the future (and most especially if they don't know the number of times and the circumstances in which they will meet), they should merge their utility functions: fix a common scale for their utility functions, add them together, and then both proceed to maximise the sum.
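In symbols (my notation, not the post's): if the two players fix positive weights w_A and w_B to set the common scale, then both thereafter act to maximise the merged function

\[
U(o) = w_A \, u_A(o) + w_B \, u_B(o)
\]

over outcomes o, and the whole bargaining problem reduces to the choice of w_A and w_B.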

This explains what's really being offered in a trade. Not a few widgets or stars, but the possibility of copying your utility function into mine. But why would you want that? Because that will change my decisions, into a direction you find more pleasing. So what I'm actually offering you, is access to my decision points.

What is actually on offer in a trade, is access by one player's utility function to the other player's decision points.

This gives a novel way of normalising utility functions. How much, precisely, is access to my decision points worth to you? If the default point gives a natural zero, then complete control over the other player's decision points is a natural one. "Power" is a nebulous concept, and different players may disagree as to how much power they each have. But power can only be articulated through making decisions (if you can't change any of your decisions, you have no power), and this normalisation allows each player to specify exactly how much they value the power/decision points of the other. Outcomes that involve one player controlling the other player's decision points can be designated the "utopia" point for that first player. These are what would happen if everything went exactly according to what they wanted.
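A minimal formalisation of this normalisation, assuming the default point d and each utopia point t_i are well-defined: rescale each player's utility to

\[
\hat u_i(o) = \frac{u_i(o) - u_i(d)}{u_i(t_i) - u_i(d)},
\]

so that \hat u_i(d) = 0 and \hat u_i(t_i) = 1, where t_i is the outcome in which player i controls both players' decision points. These rescaled utilities supply the weights w_A and w_B above.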

What does this mean for ARthUrpHilIpDenu and the New Republic? Well, the New Republic stands to gain a leaf (maybe). From its perspective, the difference between default (all the resources of the galaxy and no leaf) and utopia (all the resources of the galaxy plus one leaf) is tiny. And yet that tiny difference will get normalised to one: the New Republic's utility function will get multiplied by a huge amount. It will weigh heavily in any sum.

What about ARthUrpHilIpDenu? It stands to gain the resources of a galaxy. The difference between default (a leaf) and utopia (all the resources of a galaxy dedicated to making leaves) is unimaginably humongous. And yet that huge difference will get normalised to one: ARthUrpHilIpDenu's utility function will get divided by a huge amount. It will weigh very little in any sum.

Thus if we add the two normalised utility functions, we get one that is nearly totally dominated by the New Republic. Which is what we'd expect, given the power differential between the two. So this bargaining system reflects the relative power of the players. Another way of thinking of this is that each player's utility is normalised taking into account how much they would give up to control the other. I'm calling it the "Mutual Worth Bargaining Solution" (MWBS), as it's the worth to players of the other player's decision points that are key. Also because I couldn't think of a better title.

Properties of the Mutual Worth Bargaining Solution

How does the MWBS compare with the NBS and the KSBS? The NBS is quite different, because it has no concept of relative power, normalising purely by the players' preferences. Indeed, one player could have no control at all, no decision points, and the NBS would still be unchanged.

The KSBS is more similar to the MWBS: the utopia points of the KSBS are the same as those of the MWBS. If we set the default point to (0,0) and the utopia points to (1,-) and (-,1), then the KSBS is given by the highest h such that (h,h) is a possible outcome, whereas the MWBS is given by the possible outcome (x,y) with the highest sum x+y.

Which is preferable? Obviously, if they knew exactly what the outcomes and utilities were on offer, then each player would have preferences as to which system to use (the one that gives them more). But if they didn't, if they had uncertainties as to what players and what preferences they would face in the future, then MWBS generally comes out on top (in expectation).

How so? Well, if a player doesn't know what other players they'll meet, they don't know in what way their decision points will be relevant to the other, and vice versa. They don't know what pieces of their utility will be relevant to the other, and vice versa. So they can expect to face a wide variety of normalised situations. To a first approximation, it isn't too bad an idea to assume that one is equally likely to face a certain situation as its symmetric complement. Using the KSBS, you'd expect to get a utility of h (the same in both cases); under the MWBS, a utility of (x+y)/2 (x in one case, y in the other). Since x+y ≥ h+h = 2h by the definition of the MWBS, it comes out ahead in expectation.
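Spelled out: the KSBS point (h,h) is itself a feasible outcome, so the sum-maximising MWBS outcome (x,y) satisfies

\[
x + y \ge h + h = 2h,
\]

and hence, under the symmetric-ignorance assumption,

\[
\mathbb{E}[\text{MWBS}] = \tfrac{1}{2}(x + y) \ge h = \mathbb{E}[\text{KSBS}].
\]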

Another important distinction is that while the KSBS and the NBS only allow Pareto improvements from the default point, the MWBS does allow for some situations where one player will lose from the deal. It is possible, for instance, that (1/2,-1/4) is a possible outcome (summed utility 1/4), and there are no better options possible. Doesn't this go against the spirit of the default point? Why would someone go into a deal that leaves them poorer than before?

First of all, that situation will be rare. All MWBS outcomes must lie in the triangle bounded by x ≤ 1, y ≤ 1 and x+y ≥ 0. The first two bounds are definitional: one cannot get more expected utility than one's utopia point. The last bound comes from the fact that the default point is itself an option, with summed utility 0+0=0, so all summed utilities must be at least zero. Sprinkle a few random outcome points into that triangle, and it is very likely that the one with the highest summed utility will be a Pareto improvement over (0,0).

But the other reason to accept the risk of losing is the opportunity of gain. One could modify the MWBS to only allow Pareto improvements over the default: but in expectation, this would perform worse. The player would be immune from losing 1/4 utility from (1/2,-1/4), but unable to gain 1/2 from (-1/4,1/2): the argument is the same as above. In ignorance of the other player's preferences, accepting the possibility of loss improves the expected outcome.

It should be noted that the maximum that a player could theoretically lose by using the MWBS is equal to the maximum they could theoretically win. So the New Republic could lose at most a leaf, meaning that even powerful players would not be reluctant to trade. For less powerful players, the potential losses are higher, but so are the potential rewards.

Directions of research

The MWBS is somewhat underdeveloped, and the explanation here isn't as clear as I'd have liked. However, Miriam and I are about to have a baby, so I'm not expecting to have any free time soon, and I'm pushing the idea out unpolished.

Some possible routes for further research: what are the other properties of the MWBS? Are they properties that make the MWBS feel more or less plausible or acceptable? The NBS is characterised by certain axiomatic properties: what properties are necessary and sufficient for the MWBS (and can they suggest better bargaining solutions)? Can we replace the default point? Maybe we can get a zero by imagining what would happen if the second player's decision nodes were under the control of an anti-agent (an agent that's the opposite of the first player), or a randomly selected agent?

The most important research route is to analyse what happens if several players come together at different times and repeatedly normalise their utilities using the MWBS: does the order in which they meet matter? I strongly feel that exploring this avenue is what will reach "the ultimate" bargaining solution, if such a thing is to be found: some solution that is stable under large numbers of agents, who don't know each other or how many they are, coming together in an order they can't predict.

67 comments
26 Jul 17:30

The Robots, AI, and Unemployment Anti-FAQ

Submitted by Eliezer_Yudkowsky • 42 votes • 258 comments

Q.  Are the current high levels of unemployment being caused by advances in Artificial Intelligence automating away human jobs?

A.  Conventional economic theory says this shouldn't happen.  Suppose it costs 2 units of labor to produce a hot dog and 1 unit of labor to produce a bun, and that 30 units of labor are producing 10 hot dogs in 10 buns.  If automation makes it possible to produce a hot dog using 1 unit of labor instead, conventional economics says that some people should shift from making hot dogs to buns, and the new equilibrium should be 15 hot dogs in 15 buns.  On standard economic theory, improved productivity - including from automating away some jobs - should produce increased standards of living, not long-term unemployment.
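Checking the parable's arithmetic: at 2 units of labor per hot dog and 1 per bun,

\[
10 \times 2 + 10 \times 1 = 30 \text{ units of labor.}
\]

After automation drops hot dogs to 1 unit each, splitting the same 30 units between equal numbers n of hot dogs and buns gives

\[
n \times 1 + n \times 1 = 30 \;\Rightarrow\; n = 15.
\]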

Q.  Sounds like a lovely theory.  As the proverb goes, the tragedy of science is a beautiful theory slain by an ugly fact.  Experiment trumps theory and in reality, unemployment is rising.

A.  Sure.  Except that the happy equilibrium with 15 hot dogs in buns is exactly what happened over the last four centuries, as we went from 95% of the population being farmers to 2% of the population being farmers (in agriculturally self-sufficient developed countries).  We don't live in a world where 93% of the people are unemployed because 93% of the jobs went away.  The first thought - that automation removes a job, and thus the economy has one fewer job - has not been the way the world has worked since the Industrial Revolution.  The parable of the hot dog in the bun is how economies really, actually worked in real life for centuries.  Automation followed by re-employment went on for literally centuries in exactly the way that the standard lovely economic model said it should.

Q.  But now people aren't being reemployed.  The jobs that went away in the Great Recession aren't coming back, even as the stock market and corporate profits rise again.

A.  Yes.  And that's a new problem.  We didn't get that when the Model T automobile mechanized the entire horse-and-buggy industry out of existence.  The difficulty with supposing that automation is producing unemployment is that automation isn't new, so how can you use it to explain this new phenomenon of increasing long-term unemployment?

Baxter robot

Q.  Maybe we've finally reached the point where there's no work left to be done, or where all the jobs that people can easily be retrained into can be even more easily automated.

A.  You talked about jobs going away in the Great Recession and then not coming back.  Well, the Great Recession wasn't produced by a sudden increase in productivity, it was produced by... I don't want to use fancy terms like "aggregate demand shock" so let's just call it problems in the financial system.  The point is, in previous recessions the jobs came back strongly once NGDP rose again.  (Nominal Gross Domestic Product - roughly the total amount of money being spent in face-value dollars.)  Now there's been a recession and the jobs aren't coming back (in the US and EU), even though NGDP has risen back to its previous level (at least in the US).  If the problem is automation, and we didn't experience any sudden leap in automation in 2008, then why can't people get back at least the jobs they used to have, as they did in previous recessions?  Something has gone wrong with the engine of reemployment.

Q.  And you don't think that what's gone wrong with the engine of reemployment is that it's easier to automate the lost jobs than to hire someone new?

A.  No.  That's something you could say just as easily about the 'lost' jobs from hand-weaving when mechanical looms came along.  Some new obstacle is preventing jobs lost in the 2008 recession from coming back.  Which may indeed mean that jobs eliminated by automation are also not coming back.  And new high school and college graduates entering the labor market, likewise usually a good thing for an economy, will just end up being sad and unemployed.   But this must mean something new and awful is happening to the processes of employment - it's not because the kind of automation that's happening today is different from automation in the 1990s, 1980s, 1920s, or 1870s; there were skilled jobs lost then, too.  It should also be noted that automation has been a comparatively small force this decade next to shifts in global trade - which have also been going on for centuries and have also previously been a hugely positive economic force.  But if something is generally wrong with reemployment, then it might be possible for increased trade with China to result in permanently lost jobs within the US, in direct contrast to the way it's worked over all previous economic history.  But just like new college graduates ending up unemployed, something else must be going very wrong - that wasn't going wrong in 1960 - for anything so unusual to happen!

Q.  What if what's changed is that we're out of new jobs to create?  What if we've already got enough hot dog buns, for every kind of hot dog bun there is in the labor market, and now AI is automating away the last jobs and the last of the demand for labor?

A.  This does not square with our being unable to recover the jobs that existed before the Great Recession.  Or with lots of the world living in poverty.  If we imagine the situation being much more extreme than it actually is, there was a time when professionals usually had personal cooks and maids - as Agatha Christie said, "When I was young I never expected to be so poor that I could not afford a servant, or so rich that I could afford a motor car." 

Many people would hire personal cooks or maids if they could afford them, which is the sort of new service that ought to come into existence if other jobs were eliminated - the reason maids became less common is that they were offered better jobs, not because demand for that form of human labor stopped existing.  Or to be less extreme, there are lots of businesses who'd take nearly-free employees at various occupations, if those employees could be hired literally at minimum wage and legal liability wasn't an issue.  Right now we haven't run out of want or use for human labor, so how could "The End of Demand" be producing unemployment right now?  The fundamental fact that's driven employment over the course of previous human history is that it is a very strange state of affairs for somebody sitting around doing nothing to have nothing better to do.  We do not literally have nothing better for unemployed workers to do.  Our civilization is not that advanced.  So we must be doing something wrong (which we weren't doing wrong in 1950).

Q.  So what is wrong with "reemployment", then?

A.  I know less about macroeconomics than I know about AI, but even I can see all sorts of changed circumstances which are much more plausible sources of novel employment dysfunction than the relatively steady progress of automation.  In terms of developed countries that seem to be doing okay on reemployment, Australia hasn't had any drops in employment and their monetary policy has kept nominal GDP growth on a much steadier keel - using their central bank to regularize the number of face-value Australian dollars being spent - which an increasing number of influential econbloggers think the US and even more so the EU have been getting catastrophically wrong.  Though that's a long story.[1]  Germany saw unemployment drop from 11% to 5% from 2006-2012 after implementing a series of labor market reforms, though there were other things going on during that time.  (Germany has twice the number of robots per capita as the US, which probably isn't significant to their larger macroeconomic trends, but would be a strange fact if robots were the leading cause of unemployment.)  Labor markets and monetary policy are both major, obvious, widely-discussed candidates for what could've changed between now and the 1950s that might make reemployment harder.  And though I'm not a leading econblogger, some other obvious-seeming thoughts that occur to me are:

* Many industries that would otherwise be accessible to relatively less skilled labor, have much higher barriers to entry now than in 1950.  Taxi medallions, governments saving us from the terror of unlicensed haircuts, fees and regulatory burdens associated with new businesses - all things that could've plausibly changed between now and the previous four centuries.  This doesn't apply only to unskilled labor, either; in 1900 it was a lot easier, legally speaking, to set up shop as a doctor.  (Yes, the average doctor was substantially worse back then.  But ask yourself whether some simple, repetitive medical surgery should really, truly require 11 years of medical school and residency, rather than a 2-year vocational training program for someone with high dexterity and good focus.)  These sorts of barriers to entry allow people who are currently employed in that field to extract value from people trying to get jobs in that field (and from the general population too, of course).  In any one sector this wouldn't hurt the whole economy too much, but if it happens everywhere at once, that could be the problem.

* True effective marginal tax rates on low-income families have gone up today compared to the 1960s, after all phasing-out benefits are taken into account, counting federal and state taxes, city sales taxes, and so on.  I've seen figures tossed around like 70% and worse, and this seems like the sort of thing that could easily trash reemployment.[2]

* Perhaps companies are, for some reason, less willing to hire previously unskilled people and train them on the job.  Empirically this seems to be something that is more true today than in the 1950s.  If I were to guess at why, I would say that employees moving more from job to job, and fewer life-long jobs, makes it less rewarding for employers to invest in training an employee; and also college is more universal now than then.  Which means that employers might try to rely on colleges to train employees, and this is a function colleges can't actually handle because:

* The US educational system is either getting worse at training people to handle new jobs, or getting so much more expensive that people can't afford retraining, for various other reasons.  (Plus, we are really stunningly stupid about matching educational supply to labor demand.  How completely ridiculous is it to ask high school students to decide what they want to do with the rest of their lives and give them nearly no support in doing so?  Support like, say, spending a day apiece watching twenty different jobs and then another week at their top three choices, with salary charts and projections and probabilities of graduating that subject given their test scores?  The more so considering this is a central allocation question for the entire economy?  But I have no particular reason to believe this part has gotten worse since 1960.)

* The financial system is staring much more at the inside of its eyelids now than in the 1980s.  This could be making it harder for expanding businesses to get loans at terms they would find acceptable, or making it harder for expanding businesses to access capital markets at acceptable terms, or interfering with central banks' attempts to regularize nominal demand, or acting as a brake on the system in some other fashion.

* Hiring a new employee now exposes an employer to more downside risk of being sued, or risk of being unable to fire the new employee if it turns out to be a bad decision.  Human beings, including employers, are very averse to downside risk, so this could plausibly be a major obstacle to reemployment.  Such risks are a plausible major factor in making the decision to hire someone hedonically unpleasant for the person who has to make that decision, which could've changed between now and 1950.  (If your sympathies are with employees rather than employers, please consider that, nonetheless, if you pass any protective measure that makes the decision to hire somebody less pleasant for the hirer, fewer people will be hired and this is not good for people seeking employment.  Many labor market regulations transfer wealth or job security to the already-employed at the expense of the unemployed, and these have been increasing over time.)

* Tyler Cowen's Zero Marginal Product Workers hypothesis:  Anyone long-term-unemployed has now been swept into a group of people who have less than zero average marginal productivity, due to some of the people in this pool being negative-marginal-product workers who will destroy value, and employers not being able to tell the difference.  We need some new factor to explain why this wasn't true in 1950, and obvious candidates would be (1) legal liability making past-employer references unreliable and (2) expanded use of college credentialing sweeping up more of the positive-product workers so that the average product of the uncredentialed workers drops.

* There's a thesis (whose most notable proponent I know is Peter Thiel, though this is not exactly how Thiel phrases it) that real, material technological change has been dying.  If you can build a feature-app and flip it to Google for $20M in an acqui-hire, why bother trying to invent the next Model T?  Maybe working on hard technology problems using math and science until you can build a liquid fluoride thorium reactor, has been made to seem less attractive to brilliant young kids than flipping a $20M company to Google or becoming a hedge-fund trader (and this is truer today relative to 1950).[3]

* Closely related to the above:  Maybe change in atoms instead of bits has been regulated out of existence.  The expected biotech revolution never happened because the FDA is just too much of a roadblock (it adds a great deal of expense, significant risk, and most of all, delays the returns beyond venture capital time horizons).  It's plausible we'll never see a city with a high-speed all-robotic all-electric car fleet because the government, after lobbying from various industries, will require human attendants on every car - for safety reasons, of course!  If cars were invented nowadays, the horse-and-saddle industry would surely try to arrange for them to be regulated out of existence, or sued out of existence, or limited to the same speed as horses to ensure existing buggies remained safe.  Patents are also an increasing drag on innovation in its most fragile stages, and may shortly bring an end to the remaining life in software startups as well.  (But note that this thesis, like the one above, seems hard-pressed to account for jobs not coming back after the Great Recession.  It is not conventional macroeconomics that re-employment after a recession requires macro sector shifts or new kinds of technology jobs.   The above is more of a Great Stagnation thesis of "What happened to productivity growth?" than a Great Recession thesis of "Why aren't the jobs coming back?"[4])

Q.  Some of those ideas sounded more plausible than others, I have to say.

A.  Well, it's not like they could all be true simultaneously.  There's only a fixed effect size of unemployment to be explained, so the more likely it is that any one of these factors played a big role, the less we need to suppose that all the other factors were important; and perhaps what's Really Going On is something else entirely.  Furthermore, the 'real cause' isn't always the factor you want to fix.  If the European Union's unemployment problems were 'originally caused' by labor market regulation, there's no rule saying that those problems couldn't be mostly fixed by instituting an NGDP level targeting regime.  This might or might not work, but the point is that there's no law saying that to fix a problem you have to fix its original historical cause.

Q.  Regardless, if the engine of re-employment is broken for whatever reason, then AI really is killing jobs - a marginal job automated away by advances in AI algorithms won't come back.

A.  Then it's odd to see so many news articles talking about AI killing jobs, when plain old non-AI computer programming and the Internet have affected many more jobs than that.  The buyer ordering books over the Internet, the spreadsheet replacing the accountant - these processes are not strongly relying on the sort of algorithms that we would usually call 'AI' or 'machine learning' or 'robotics'.  The main role I can think of for actual AI algorithms being involved, is in computer vision enabling more automation.  And many manufacturing jobs were already automated by robotic arms even before robotic vision came along.  Most computer programming is not AI programming, and most automation is not AI-driven.  And then on near-term scales, like changes over the last five years, trade shifts and financial shocks and new labor market entrants are more powerful economic forces than the slow continuing march of computer programming.  (Automation is a weak economic force in any given year, but cumulative and directional over decades.  Trade shifts and financial shocks are stronger forces in any single year, but might go in the opposite direction the next decade.  Thus, even generalized automation via computer programming is still an unlikely culprit for any sudden drop in employment as occurred in the Great Recession.)

Q.  Okay, you've persuaded me that it's ridiculous to point to AI while talking about modern-day unemployment.  What about future unemployment?

A.  Like after the next ten years?  We might or might not see robot-driven cars, which would be genuinely based in improved AI algorithms, and would automate away another bite of human labor.  Even then, the total number of people driving cars for money would just be a small part of the total global economy; most humans are not paid to drive cars most of the time.  Also again: for AI or productivity growth or increased trade or immigration or graduating students to increase unemployment, instead of resulting in more hot dogs and buns for everyone, you must be doing something terribly wrong that you weren't doing wrong in 1950.

Q.  How about timescales longer than ten years?  There was one class of laborers permanently unemployed by the automobile revolution, namely horses.  There are a lot fewer horses nowadays because there is literally nothing left for horses to do that machines can't do better; horses' marginal labor productivity dropped below their cost of living.  Could that happen to humans too, if AI advanced far enough that it could do all the labor?

A.  If we imagine that in future decades machine intelligence is slowly going past the equivalent of IQ 70, 80, 90, eating up more and more jobs along the way... then I defer to Robin Hanson's analysis in Economic Growth Given Machine Intelligence, in which, as the abstract says, "Machines complement human labor when [humans] become more productive at the jobs they perform, but machines also substitute for human labor by taking over human jobs. At first, complementary effects dominate, and human wages rise with computer productivity. But eventually substitution can dominate, making wages fall as fast as computer prices now do."

Q.  Could we already be in this substitution regime -

A.  No, no, a dozen times no, for the dozen reasons already mentioned.  That sentence in Hanson's paper has nothing to do with what is going on right now.  The future cannot be a cause of the past.  Future scenarios, even if they seem to associate the concept of AI with the concept of unemployment, cannot rationally increase the probability that current AI is responsible for current unemployment.

Q.  But AI will inevitably become a problem later?

A.  Not necessarily.  We only get the Hansonian scenario if AI is broadly, steadily going past IQ 70, 80, 90, etc., making an increasingly large portion of the population fully obsolete in the sense that there is literally no job anywhere on Earth for them to do instead of nothing, because for every task they could do there is an AI algorithm or robot which does it more cheaply.  That scenario isn't the only possibility.

Q.  What other possibilities are there?

A.  Lots, since what Hanson is talking about is a new unprecedented phenomenon extrapolated over new future circumstances which have never been seen before and there are all kinds of things which could potentially go differently within that.  Hanson's paper may be the first obvious extrapolation from conventional macroeconomics and steady AI trendlines, but that's hardly a sure bet.  Accurate prediction is hard, especially about the future, and I'm pretty sure Hanson would agree with that.

Q.  I see.  Yeah, when you put it that way, there are other possibilities.  Like, Ray Kurzweil would predict that brain-computer interfaces would let humans keep up with computers, and then we wouldn't get mass unemployment.

A.  The future would be more uncertain than that, even granting Kurzweil's hypotheses - it's not as simple as picking one futurist and assuming that their favorite assumptions correspond to their favorite outcome.  You might get mass unemployment anyway if humans with brain-computer interfaces are more expensive or less effective than pure automated systems.  With today's technology we could design robotic rigs to amplify a horse's muscle power - maybe, we're still working on that tech for humans - but it took around an extra century after the Model T to get to that point, and a plain old car is much cheaper.

Q.  Bah, anyone can nod wisely and say "Uncertain, the future is."  Stick your neck out, Yoda, and state your opinion clearly enough that you can later be proven wrong.  Do you think we will eventually get to the point where AI produces mass unemployment?

A.  My own guess is a moderately strong 'No', but for reasons that would sound like a complete subject change relative to all the macroeconomic phenomena we've been discussing so far.  In particular I refer you to "Intelligence Explosion Microeconomics: Returns on cognitive reinvestment", a paper recently referenced on Scott Sumner's blog as relevant to this issue.

Q.  Hold on, let me read the abstract and... what the heck is this?

A.  It's an argument that you don't get the Hansonian scenario or the Kurzweilian scenario, because if you look at the historical course of hominid evolution and try to assess the inputs of marginally increased cumulative evolutionary selection pressure versus the cognitive outputs of hominid brains, and infer the corresponding curve of returns, then ask about a reinvestment scenario -

Q.  English.

A.  Arguably, what you get is I. J. Good's scenario where once an AI goes over some threshold of sufficient intelligence, it can self-improve and increase in intelligence far past the human level.  This scenario is formally termed an 'intelligence explosion', informally 'hard takeoff' or 'AI-go-FOOM'.  The resulting predictions are strongly distinct from traditional economic models of accelerating technological growth (we're not talking about Moore's Law here).  Since it should take advanced general AI to automate away most or all humanly possible labor, my guess is that AI will intelligence-explode to superhuman intelligence before there's time for moderately-advanced AIs to crowd humans out of the global economy.  (See also section 3.10 of the aforementioned paper.)  Widespread economic adoption of a technology comes with a delay factor that wouldn't slow down an AI rewriting its own source code.  This means we don't see the scenario of human programmers gradually improving broad AI technology past the 90, 100, 110-IQ threshold.  An explosion of AI self-improvement utterly derails that scenario, and sends us onto a completely different track which confronts us with wholly dissimilar questions.

Q.  Okay.  What effect do you think a superhumanly intelligent self-improving AI would have on unemployment, especially the bottom 25% who are already struggling now?  Should we really be trying to create this technological wonder of self-improving AI, if the end result is to make the world's poor even poorer?  How is someone with a high-school education supposed to compete with a machine superintelligence for jobs?

A.  I think you're asking an overly narrow question there.

Q.  How so?

A.  You might be thinking about 'intelligence' in terms of the contrast between a human college professor and a human janitor, rather than the contrast between a human and a chimpanzee.  Human intelligence more or less created the entire modern world, including our invention of money; twenty thousand years ago we were just running around with bow and arrows.  And yet on a biological level, human intelligence has stayed roughly the same since the invention of agriculture.  Going past human-level intelligence is change on a scale much larger than the Industrial Revolution, or even the Agricultural Revolution, which both took place at a constant level of intelligence; human nature didn't change.  As Vinge observed, building something smarter than you implies a future that is fundamentally different in a way that you wouldn't get from better medicine or interplanetary travel.

Q.  But what does happen to people who were already economically disadvantaged, who don't have investments in the stock market and who aren't sharing in the profits of the corporations that own these superintelligences?

A.  Um... we appear to be using substantially different background assumptions.  The notion of a 'superintelligence' is not that it sits around in Goldman Sachs's basement trading stocks for its corporate masters.  The concrete illustration I often use is that a superintelligence asks itself what the fastest possible route is to increasing its real-world power, and then, rather than bothering with the digital counters that humans call money, the superintelligence solves the protein structure prediction problem, emails some DNA sequences to online peptide synthesis labs, and gets back a batch of proteins which it can mix together to create an acoustically controlled equivalent of an artificial ribosome which it can use to make second-stage nanotechnology which manufactures third-stage nanotechnology which manufactures diamondoid molecular nanotechnology and then... well, it doesn't really matter from our perspective what comes after that, because from a human perspective any technology more advanced than molecular nanotech is just overkill.  A superintelligence with molecular nanotech does not wait for you to buy things from it in order for it to acquire money.  It just moves atoms around into whatever molecular structures or large-scale structures it wants.

Q.  How would it get the energy to move those atoms, if not by buying electricity from existing power plants?  Solar power?

A.  Indeed, one popular speculation is that optimal use of a star system's resources is to disassemble local gas giants (Jupiter in our case) for the raw materials to build a Dyson Sphere, an enclosure that captures all of a star's energy output.  This does not involve buying solar panels from human manufacturers, rather it involves self-replicating machinery which builds copies of itself on a rapid exponential curve -

Q.  Yeah, I think I'm starting to get a picture of your background assumptions.  So let me expand the question.  If we grant that scenario rather than the Hansonian scenario or the Kurzweilian scenario, what sort of effect does that have on humans?

A.  That depends on the exact initial design of the first AI which undergoes an intelligence explosion.  Imagine a vast space containing all possible mind designs.  Now imagine that humans, who all have a brain with a cerebellum, thalamus, a cerebral cortex organized into roughly the same areas, neurons firing at a top speed of 200 spikes per second, and so on, are one tiny little dot within this space of all possible minds.  Different kinds of AIs can be vastly more different from each other than you are different from a chimpanzee.  What happens after AI, depends on what kind of AI you build - the exact selected point in mind design space.  If you can solve the technical problems and wisdom problems associated with building an AI that is nice to humans, or nice to sentient beings in general, then we all live happily ever afterward.  If you build the AI incorrectly... well, the AI is unlikely to end up with a specific hate for humans.  But such an AI won't attach a positive value to us either.  "The AI does not hate you, nor does it love you, but you are made of atoms which it can use for something else."  The human species would end up disassembled for spare atoms, after which human unemployment would be zero.  In neither alternative do we end up with poverty-stricken unemployed humans hanging around being sad because they can't get jobs as janitors now that star-striding nanotech-wielding superintelligences are taking all the janitorial jobs.  And so I conclude that advanced AI causing mass human unemployment is, all things considered, unlikely.

Q.  Some of the background assumptions you used to arrive at that conclusion strike me as requiring additional support beyond the arguments you listed here.

A.  I recommend Intelligence Explosion: Evidence and Import for an overview of the general issues and literature, Artificial Intelligence as a positive and negative factor in global risk for a summary of some of the issues around building AI correctly or incorrectly, and the aforementioned Intelligence Explosion Microeconomics for some ideas about analyzing the scenario of an AI investing cognitive labor in improving its own cognition.  The last in particular is an important open problem in economics if you're a smart young economist reading this, although since the fate of the entire human species could well depend on the answer, you would be foolish to expect there'd be as many papers published about that as squirrel migration patterns.  Nonetheless, bright young economists who want to say something important about AI should consider analyzing the microeconomics of returns on cognitive (re)investments, rather than post-AI macroeconomics which may not actually exist depending on the answer to the first question.  Oh, and Nick Bostrom at the Oxford Future of Humanity Institute is supposed to have a forthcoming book on the intelligence explosion; that book isn't out yet so I can't link to it, but Bostrom personally and FHI generally have published some excellent academic papers already.

Q.  But to sum up, you think that AI is definitely not the issue we should be talking about with respect to unemployment.

A.  Right.  From an economic perspective, AI is a completely odd place to focus your concern about modern-day unemployment.  From an AI perspective, modern-day unemployment trends are a moderately odd reason to be worried about AI.  Still, it is scarily true that increased automation, like increased global trade or new graduates or anything else that ought properly to produce a stream of employable labor to the benefit of all, might perversely operate to increase unemployment if the broken reemployment engine is not fixed.

Q.  And with respect to future AI... what is it you think, exactly?

A.  I think that with respect to moderately more advanced AI, we probably won't see intrinsic unavoidable mass unemployment in the economic world as we know it.  If re-employment stays broken and new college graduates continue to have trouble finding jobs, then there are plausible stories where future AI advances far enough (but not too far) to be a significant part of what's freeing up new employable labor which bizarrely cannot be employed.  I wouldn't consider this my main-line, average-case guess; I wouldn't expect to see it in the next 15 years or as the result of just robotic cars; and if it did happen, I wouldn't call AI the 'problem' while central banks still hadn't adopted NGDP level targeting.  And then with respect to very advanced AI, the sort that might be produced by AI self-improving and going FOOM, asking about the effect of machine superintelligence on the conventional human labor market is like asking how US-Chinese trade patterns would be affected by the Moon crashing into the Earth.  There would indeed be effects, but you'd be missing the point.

Q.  Thanks for clearing that up.

A.  No problem.


ADDED 8/30/13:  Tyler Cowen's reply to this was one I hadn't listed:

Think of the machines of the industrial revolution as getting underway sometime in the 1770s or 1780s.  The big wage gains for British workers don’t really come until the 1840s.  Depending on your exact starting point, that is over fifty years of labor market problems from automation.

See here for the rest of Tyler's reply.

Taken at face value this might suggest that if we wait 50 years everything will be all right.  Kevin Drum replies that in 50 years there might be no human jobs left, which is possible but wouldn't be an effect we've seen already, rather a prediction of novel things yet to come.

Though Tyler also says, "A second point is that now we have a much more extensive network of government benefits and also regulations which increase the fixed cost of hiring labor" and this of course was already on my list of things that could be trashing modern reemployment unlike-in-the-1840s.

'Brett' in MR's comments section also counter-claims:

The spread of steam-powered machinery and industrialization from textiles/mining/steel to all manner of British industries didn’t really get going until the 1830s and 1840s. Before that, it was mostly piece-meal, with some areas picking up the technology faster than others, while the overall economy didn’t change that drastically (hence the minimal changes in overall wages).


[1]  The core idea in market monetarism is very roughly something like this:  A central bank can control the total amount of money and thereby control any single economic variable measured in money, i.e., control one nominal variable.  A central bank can't directly control how many people are employed, because that's a real variable.  You could, however, try to control Nominal Gross Domestic Income (NGDI) or the total amount that people have available to spend (as measured in your currency).  If the central bank commits to an NGDI level target then any shortfalls are made up the next year - if your NGDI growth target is 5% and you only get 4% in one year then you try for 6% the year after that.  NGDI level targeting would mean that all the companies would know that, collectively, all the customers in the country would have 5% more money (measured in dollars) to spend in the next year than the previous year.  This is usually called "NGDP level targeting" for historical reasons (NGDP is the other side of the equation, what the earned dollars are being spent on) but the most advanced modern form of the idea is probably "Level-targeting a market forecast of per-capita NGDI".  Why this is the best nominal variable for central banks to control is a longer story and for that you'll have to read up on market monetarism.  I will note that if you were worried about hyperinflation back when the Federal Reserve started dropping US interest rates to almost zero and buying government bonds by printing money... well, you really should note that (a) most economists said this wouldn't happen, (b) the market spreads on inflation-protected Treasuries said that the market was anticipating very low inflation, and that (c) we then actually got inflation below the Fed's 2% target.  You can argue with economists.  You can even argue with the market forecast, though in this case you ought to bet money on your beliefs.  But when your fears of hyperinflation are disagreed with by economists, the market forecast and observed reality, it's time to give up on the theory that generated the false prediction.  In this case, market monetarists would have told you not to expect hyperinflation because NGDP/NGDI was collapsing and this constituted (overly) tight money regardless of what interest rates or the monetary base looked like.
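To make "shortfalls are made up the next year" concrete, here is a made-up worked example with a 5% level target and a base of 100:

\[
N_t^{\text{target}} = 100 \times 1.05^t, \quad\text{so } N_1^{\text{target}} = 105,\; N_2^{\text{target}} = 110.25.
\]

If actual year-one NGDI comes in at 104 (4% growth instead of 5%), the year-two target remains the original path level of 110.25, which from 104 requires

\[
110.25 / 104 - 1 \approx 6\%
\]

growth - the "try for 6% the year after that" in the text.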

[2]  Call me a wacky utopian idealist, but I wonder if it might be genuinely politically feasible to reduce marginal taxes on the bottom 20%, if economists on both sides of the usual political divide got together behind the idea that income taxes (including payroll taxes) on the bottom 20% are (a) immoral and (b) do economic harm far out of proportion to government revenue generated.  This would also require some amount of decreased taxes on the next quintile in order to avoid high marginal tax rates, i.e., if you suddenly start paying $2000/year in taxes as soon as your income goes from $19,000/year to $20,000/year then that was a 200% tax rate on that particular extra $1000 earned.  The lost tax revenue must be made up somewhere else.  In the current political environment this probably requires higher income taxes on higher wealth brackets rather than anything more creative.  But if we allow ourselves to discuss economic dreamworlds, then income taxes, corporate income taxes, and capital-gains taxes are all very inefficient compared to consumption taxes, land taxes, and basically anything but income and corporate taxes.  This is true even from the perspective of equality; a rich person who earns lots of money, but invests it all instead of spending it, is benefiting the economy rather than themselves and should not be taxed until they try to spend the money on a yacht, at which point you charge a consumption tax or luxury tax (even if that yacht is listed as a business expense, which should make no difference; consumption is not more moral when done by businesses instead of individuals).  If I were given unlimited powers to try to fix the unemployment thing, I'd be reforming the entire tax code from scratch to present the minimum possible obstacles to exchanging one's labor for money, and as a second priority minimize obstacles to compound reinvestment of wealth.  But trying to change anything on this scale is probably not politically feasible relative to a simpler, more understandable crusade to "Stop taxing the bottom 20%, it harms our economy because they're customers of all those other companies and it's immoral because they get a raw enough deal already."

[3]  Two possible forces for significant technological change in the 21st century would be robotic cars and electric cars.  Imagine a city with an all-robotic all-electric car fleet, dispatching light cars with only the battery sizes needed for the journey, traveling at much higher speeds with no crash risk and much lower fuel costs... and lowering rents by greatly extending the effective area of a city, i.e., extending the physical distance you can live from the center of the action while still getting to work on time because your average speed is 75mph.  What comes to mind when you think of robotic cars?  Google's prototype robotic cars.  What comes to mind when you think of electric cars?  Tesla.  In both cases we're talking about ascended, post-exit Silicon Valley moguls trying to create industrial progress out of the goodness of their hearts, using money they earned from Internet startups.  Can you sustain a whole economy based on what Elon Musk and Larry Page decide are cool?

[4]   Currently the conversation among economists is more like "Why has total factor productivity growth slowed down in developed countries?" than "Is productivity growing so fast due to automation that we'll run out of jobs?"  Ask them the latter question and they will, with justice, give you very strange looks.  Productivity isn't growing at high rates, and if it were that ought to cause employment rather than unemployment.  This is why the Great Stagnation in productivity is one possible explanatory factor in unemployment, albeit (as mentioned) not a very good explanation for why we can't get back the jobs lost in the Great Recession.  The idea would have to be that some natural rate of productivity growth and sectoral shift is necessary for re-employment to happen after recessions, and we've lost that natural rate; but so far as I know this is not conventional macroeconomics.

258 comments
04 Jul 22:41

This Is Mirror's Edge In Real Life. It Is Terrifying

by Luke Plunkett

I can’t tell if these guys are imitating Mirror’s Edge, or if this video is just a testament to how well the game captured what parkour feels like. Either way, this first-person parkour video is so much like Mirror’s Edge that it’s actually a little bit terrifying.

There are even moments that feel straight out of the game — like the first time you slide down a rooftop, or hurdle a fence — where this real-life video seems to imitate the game’s animations. Or is it the other way around?

I can’t decide. Either way, this video needs more red, otherwise the poor bastards won’t know what direction to run.

You have to watch this video. It’s the greatest thing I’ve seen all day. Easily.



21 Jun 16:23

The Japanese Response to Terrorism

by Bruce Schneier

Lessons from Japan's response to Aum Shinrikyo:

Yet what's as remarkable as Aum's potential for mayhem is how little of it, on balance, they actually caused. Don't misunderstand me: Aum's crimes were horrific, not merely the terrible subway gassing but their long history of murder, intimidation, extortion, fraud, and exploitation. What they did was unforgivable, and the human cost, devastating. But at no point did Aum Shinrikyo represent an existential threat to Japan or its people. The death toll of Aum was several dozen; again, a terrible human cost, but not an existential threat. At no time was the territorial integrity of Japan threatened. At no time was the operational integrity of the Japanese government threatened. At no time was the day-to-day operation of the Japanese economy meaningfully threatened. The threat to the average Japanese citizen was effectively nil.

Just as important was what the Japanese government and people did not do. They didn't panic. They didn't make sweeping changes to their way of life. They didn't implement a vast system of domestic surveillance. They didn't suspend basic civil rights. They didn't begin to capture, torture, and kill without due process. They didn't, in other words, allow themselves to be terrorized. Instead, they addressed the threat. They investigated and arrested the cult's leadership. They tried them in civilian courts and earned convictions through due process. They buried their dead. They mourned. And they moved on. In every sense, it was a rational, adult, mature response to a terrible terrorist act, one that remained largely in keeping with liberal democratic ideals.

05 Jun 16:39

A Couple of Use Cases for Calc()

by Chris Coyier

calc() is a native CSS way to do simple math right in CSS as a replacement for any length value (or pretty much any number value). It has four simple math operators: add (+), subtract (-), multiply (*), and divide (/). Being able to do math in code is nice and a welcome addition to a language that is fairly number heavy.

But is it useful? I've strained my brain in the past trying to think of obviously useful cases. There definitely are some though.

Can't Preprocessors Do Our Math?

All CSS Preprocessors do have math functions and they are pretty useful. But they aren't quite as powerful as native math. The most useful feature of calc() is its ability to mix units, like percentages and pixels. No Preprocessor will ever be able to do that: it is something that has to happen at render time.

Syntax

.thing {
  width: 90%; /* fallback if needed */
  width: calc(100% - 3em);
}

There must be spaces surrounding the + and - operators (spaces are optional around * and /, but consistent spacing is easier to read). You can also nest calc() expressions.
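For instance, a minimal sketch of nesting (the .sidebar selector is made up for illustration - the two declarations below are equivalent):

.sidebar {
  /* half of whatever space is left after a 2em gutter */
  width: calc((100% - 2em) / 2);
  /* an explicit inner calc() also works */
  width: calc(calc(100% - 2em) / 2);
}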

Browser Support

It is surprisingly good. Can I use... is always great for checking out the details there. On desktop, the concerns are that it's IE 9+ and Safari 6+, and that it won't be in Opera until Opera moves to Blink (version 15+). On mobile, Android and Opera Mini don't support it at all yet, and iOS only as of 6.0.

You'll have to make the call there. I've been able to actually use it in production in certain scenarios already.

Use Case #1: (All The Height - Header)

A block-level child element with height: 100% will be as tall as its block-level parent element. It can be nice to make a colored module as tall as the parent element in some cases.

But now let's say the parent element becomes too small to contain all the content in the module. You want the content to scroll, but you want just the content to scroll, not the entire module. Just set overflow-y: auto; right? Not quite, because overflow-y is only useful if the content element itself has a set height that can be overflowed. We can't make the content element 100% high because, with the header there, that would be too high. We need 100% minus the height of the header. If we know the header's height, it's doable!

* {
  /* So 100% means 100% */
  box-sizing: border-box;
}
html, body {
  /* Make the body as tall as the browser window */
  height: 100%;
  background: #ccc;
  padding: 20px;
}
body {
  padding: 20px;
  background: white;  
}
.area-one {
  /* With the body as tall as the browser window
     this will be too */
  height: 100%;
}
.area-one h2 {
  height: 50px;
  line-height: 50px;
}
.content {
  /* Subtract the header size */
  height: calc(100% - 50px);
  overflow: auto;
}

You might gripe that the header shouldn't have a fixed size. Might be cool someday if calc() could subtract measured sizes of elements, but that's not possible yet. You could set the header to overflow with ellipsis.

Check out this Pen!

Use Case #2: X Pixels From Bottom Right Corner

We can position background-image X pixels from the top-left corner easily.

background-image: url(dog.png);
background-position: 50px 20px;

That would put the dog 50px from the left and 20px from the top of the elements box. But what if you want it 50px from the right and 20px from the bottom? Not possible with just straight length values. But calc() makes it possible!

background-image: url(dog.png);
background-position: calc(100% - 50px) calc(100% - 20px);
Check out this Pen!

Use Case #3: Fixed Gutters Without Parents

Let's say you want two columns next to each other. The first 40% wide, the second 60%, but with a fixed 1em gap between the columns. Don't Overthink It Grids have fixed gutters, but they aren't true gutters in a sense. The columns themselves bump right into each other and the columns are made by internal padding inside those columns.

Using calc(), we can make the first column 40% wide with a right margin of 1em, then make the second column 60% wide minus that 1em.

.area-one {
  width: 40%;
  float: left;
  margin-right: 1em;
}

.area-two {
  width: calc(60% - 1em);
  float: right;
}

You could remove half the gutter from both if you wanted to keep the proportion more accurate. Now you have two true columns separated by fixed space without needing parent elements or using the internal padding.

Check out this Pen!
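For instance, a hypothetical half-gutter version of the example above, taking 0.5em from each column so the 40/60 proportion stays more accurate:

.area-one {
  width: calc(40% - 0.5em);
  float: left;
  margin-right: 1em;
}

.area-two {
  width: calc(60% - 0.5em);
  float: right;
}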

Use Case #4: Showing Math Is Easier To Understand

Speaking of columns, sometimes division math gets messy. Let's say you wanted a 7-column grid, you might have classes like:

.column-1-7 {
   width: 14.2857%;
}
.column-2-7 {
   width: 28.5714%;
}
.column-3-7 {
   width: 42.8571%;
}

Not exactly magic numbers, but difficult to understand at a glance.

.column-1-7 {
   width: calc(100% / 7);
}
.column-2-7 {
   width: calc(100% / 7 * 2);
}
.column-3-7 {
   width: calc(100% / 7 * 3);
}
Check out this Pen!

Use Case #5: Kinda Crappy box-sizing Replacement

I'm a fan of universal box-sizing: border-box; because it means you don't have to do much math to figure out how big an element actually is, or adjust that math when things like border and padding change.

If you want to replicate what box-sizing does, you could use calc() to subtract the values as needed.

.module {
  padding: 10px;
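  /* 10px of padding on each side = 20px total horizontal padding */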

  /* Same as box-sizing: padding-box */
  width: calc(40% - 20px);

  border: 2px solid black;
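  /* 2px of border on each side = 4px total horizontal border */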

  /* Same as box-sizing: border-box */
  width: calc(40% - 20px - 4px);
}

box-sizing has far better browser support than calc(), though, so this would rarely be used.

The Future?

I think it will be interesting when we can use the attr() function in places other than the content property. With that, we could yank values from HTML attributes, run calculations on them, and use the results to do design-y things, like colorizing inputs based on the numbers they contain.

Perhaps we could even use it to do fancy things with the <progress> element, like turning it into a speedometer like the one on this page. Perhaps something like:

/* Not real */
progress::progress-bar {
  transform: rotate(calc(!parent(attr(value)) * 18deg));
}




A Couple of Use Cases for Calc() is a post from CSS-Tricks

04 Jun 23:51

The Problems with CALEA-II

by Bruce Schneier

The FBI wants a new law that will make it easier to wiretap the Internet. Although its claim is that the new law will only maintain the status quo, it's really much worse than that. This law will result in less-secure Internet products and create a foreign industry in more-secure alternatives. It will impose costly burdens on affected companies. It will assist totalitarian governments in spying on their own citizens. And it won't do much to hinder actual criminals and terrorists.

As the FBI sees it, the problem is that people are moving away from traditional communication systems like telephones onto computer systems like Skype. Eavesdropping on telephones used to be easy. The FBI would call the phone company, which would bring agents into a switching room and allow them to literally tap the wires with a pair of alligator clips and a tape recorder. In the 1990s, the government forced phone companies to provide an analogous capability on digital switches; but today, more and more communication happens over the Internet.

What the FBI wants is the ability to eavesdrop on everything. Depending on the system, this ranges from easy to impossible. E-mail systems like Gmail are easy. The mail resides in Google's servers, and the company has an office full of people who respond to requests for lawful access to individual accounts from governments all over the world. Encrypted voice systems like Silent Circle are impossible to eavesdrop on—the calls are encrypted from one computer to the other, and there's no central node to eavesdrop from. In those cases, the only way to make the system eavesdroppable is to add a backdoor to the user software. This is precisely the FBI's proposal. Companies that refuse to comply would be fined $25,000 a day.

The FBI believes it can have it both ways: that it can open systems to its eavesdropping, but keep them secure from anyone else's eavesdropping. That's just not possible. It's impossible to build a communications system that allows the FBI surreptitious access but doesn't allow similar access by others. When it comes to security, we have two options: We can build our systems to be as secure as possible from eavesdropping, or we can deliberately weaken their security. We have to choose one or the other.

This is an old debate, and one we've been through many times. The NSA even has a name for it: the equities issue. In the 1980s, the equities debate was about export control of cryptography. The government deliberately weakened U.S. cryptography products because it didn't want foreign groups to have access to secure systems. Two things resulted: fewer Internet products with cryptography, to the insecurity of everybody, and a vibrant foreign security industry based on the unofficial slogan "Don't buy the U.S. stuff -- it's lousy."

In 1993, the debate was about the Clipper Chip. This was another deliberately weakened security product, an encrypted telephone. The FBI convinced AT&T to add a backdoor that allowed for surreptitious wiretapping. The product was a complete failure. Again, why would anyone buy a deliberately weakened security system?

In 1994, the Communications Assistance for Law Enforcement Act mandated that U.S. companies build eavesdropping capabilities into phone switches. These were sold internationally; some countries liked having the ability to spy on their citizens. Of course, so did criminals, and there were public scandals in Greece (2005) and Italy (2006) as a result.

In 2012, we learned that every phone switch sold to the Department of Defense had security vulnerabilities in its surveillance system. And just this May, we learned that Chinese hackers breached Google's system for providing surveillance data for the FBI.

The new FBI proposal will fail in all these ways and more. The bad guys will be able to get around the eavesdropping capability, either by building their own security systems -- not very difficult -- or buying the more-secure foreign products that will inevitably be made available. Most of the good guys, who don't understand the risks or the technology, will not know enough to bother and will be less secure. The eavesdropping functions will 1) result in more obscure -- and less secure -- product designs, and 2) be vulnerable to exploitation by criminals, spies, and everyone else. U.S. companies will be forced to compete at a disadvantage; smart customers won't buy the substandard stuff when there are more-secure foreign alternatives. Even worse, there are lots of foreign governments who want to use these sorts of systems to spy on their own citizens. Do we really want to be exporting surveillance technology to the likes of China, Syria, and Saudi Arabia?

The FBI's short-sighted agenda also works against the parts of the government that are still working to secure the Internet for everyone. Initiatives within the NSA, the DOD, and DHS to do everything from securing computer operating systems to enabling anonymous web browsing will all be harmed by this.

What to do, then? The FBI claims that the Internet is "going dark," and that it's simply trying to maintain the status quo of being able to eavesdrop. This characterization is disingenuous at best. We are entering a golden age of surveillance; there are more electronic communications available for eavesdropping than ever before, including whole new classes of information: location tracking, financial tracking, and vast databases of historical communications such as e-mails and text messages. The FBI's surveillance department has it better than ever. With regard to voice communications, yes, software phone calls will be harder to eavesdrop upon. (Although there are questions about Skype's security.) That's just part of the evolution of technology, and one that on balance is a positive thing.

Think of it this way: We don't hand the government copies of our house keys and safe combinations. If agents want access, they get a warrant and then pick the locks or bust open the doors, just as a criminal would do. A similar system would work on computers. The FBI, with its increasingly non-transparent procedures and systems, has failed to make the case that this isn't good enough.

Finally there's a general principle at work that's worth explicitly stating. All tools can be used by the good guys and the bad guys. Cars have enormous societal value, even though bank robbers can use them as getaway cars. Cash is no different. Both good guys and bad guys send e-mails, use Skype, and eat at all-night restaurants. But because society consists overwhelmingly of good guys, the good uses of these dual-use technologies greatly outweigh the bad uses. Strong Internet security makes us all safer, even though it helps the bad guys as well. And it makes no sense to harm all of us in an attempt to harm a small subset of us.

This essay originally appeared in Foreign Policy.

04 Jun 17:49

Dothraki Weddings vs. Westerosi Weddings (GOT SEASON 3 SPOILERS!)

Contains SPOILERS and observations on weddings in Game of Thrones.


Haha! I totally (sort of) called it!

Of course, GRRM published the contents of last night's episode 13 years ago in a spoiler that used to be known as a book series.

03 Jun 18:15

Grids with text-align: justify

by Chris Coyier

Patrick Kunka demos how fluid grids can be created with percentage widths and justified text. I like it because you don't need to think about gutter calculations, which is what really complicates grids. If you need specific control over gutters, Don't Overthink It Grids might help.





Grids with text-align: justify is a post from CSS-Tricks

30 May 20:22

Beaconian tells tale of two robberies

by Wendi Dunlap

Image by Anders Sandberg via Creative Commons/Flickr.

The Stranger has an article this week by Tobias Coughlin-Bogue: “Mugged at Gunpoint on Beacon Hill.” In the article, Coughlin-Bogue describes being robbed twice on North Beacon Hill, and his thoughts on revenge and gun ownership.

“About two blocks into our trek, a cop rolled by, and we flagged him down. He made a very unconvincing effort to track our assailants, giving the distinct impression that he believed our cause lost from the get-go. He also made sure to let us know, “If they tried to rob me, I would have pulled out my gun and asked them, ‘How badly do you want my stuff?’” I wasn’t in the mood to point out how incredibly foolish that statement was, so I just let it slide, borrowed his cell phone to get ahold of our roommate so we could get into the house, and let him drive us home.”

Have you been mugged on the Hill? Do you feel the police response was adequate? And what do you do to stay safe when walking around?

25 May 13:14

Training Baggage Screeners

by Bruce Schneier

The research in G. Giguère and B.C. Love, "Limits in decision making arise from limits in memory retrieval," Proceedings of the National Academy of Sciences, v. 110 no. 19 (2013), has applications in training airport baggage screeners.

Abstract: Some decisions, such as predicting the winner of a baseball game, are challenging in part because outcomes are probabilistic. When making such decisions, one view is that humans stochastically and selectively retrieve a small set of relevant memories that provides evidence for competing options. We show that optimal performance at test is impossible when retrieving information in this fashion, no matter how extensive training is, because limited retrieval introduces noise into the decision process that cannot be overcome. One implication is that people should be more accurate in predicting future events when trained on idealized rather than on the actual distributions of items. In other words, we predict the best way to convey information to people is to present it in a distorted, idealized form. Idealization of training distributions is predicted to reduce the harmful noise induced by immutable bottlenecks in people’s memory retrieval processes. In contrast, machine learning systems that selectively weight (i.e., retrieve) all training examples at test should not benefit from idealization. These conjectures are strongly supported by several studies and supporting analyses. Unlike machine systems, people’s test performance on a target distribution is higher when they are trained on an idealized version of the distribution rather than on the actual target distribution. Optimal machine classifiers modified to selectively and stochastically sample from memory match the pattern of human performance. These results suggest firm limits on human rationality and have broad implications for how to train humans tasked with important classification decisions, such as radiologists, baggage screeners, intelligence analysts, and gamblers.