Shared posts

18 Nov 12:47

Air Pollution Reduces IQ, a Lot

by Alex Tabarrok

The number and quality of studies showing that air pollution has very substantial effects on health continues to increase. Patrick Collison reviews some of the most recent studies on air pollution and cognition. I’m going to post the whole thing so everything that follows is Patrick’s.

—-

Air pollution is a very big deal. Its adverse effects on numerous health outcomes and general mortality are widely documented. However, our understanding of its cognitive costs is more recent and those costs are almost certainly still significantly under-emphasized. For example, cognitive effects are not mentioned in most EPA materials.

World Bank data indicate that 3.7 billion people, about half the world’s population, are exposed to more than 50 µg/m³ of PM2.5 on an annual basis, 5x the unit of measure for most of the findings below.

  • Substantial declines in short-term cognitive performance after short-term exposure to moderate (median 27.0 µg/m³) PM2.5 pollution: “The results from the MMSE test showed a statistically robust decline in cognitive function after exposure to both the candle burning and outdoor commuting compared to ambient indoor conditions. The similarity in the results between the two experiments suggests that PM exposure is the cause of the short-term cognitive decline observed in both.” […] “The mean average [test scores] for pre and post exposure to the candle burning were 48 ± 16 and 40 ± 17, respectively.” – Shehab & Pope 2019.
  • Chess players make more mistakes on polluted days: “We find that an increase of 10 µg/m³ raises the probability of making an error by 1.5 percentage points, and increases the magnitude of the errors by 9.4%. The impact of pollution is exacerbated by time pressure. When players approach the time control of games, an increase of 10 µg/m³, corresponding to about one standard deviation, increases the probability of making a meaningful error by 3.2 percentage points, and errors being 17.3% larger.” – Künn et al 2019.
  • A 3.26x (albeit with very wide CI) increase in Alzheimer’s incidence for each 10 µg/m³ increase in long-term PM2.5 exposure? “Short- and long-term PM2.5 exposure was associated with increased risks of stroke (short-term odds ratio 1.01 [per µg/m³ increase in PM2.5 concentrations], 95% CI 1.01-1.02; long-term 1.14, 95% CI 1.08-1.21) and mortality (short-term 1.02, 95% CI 1.01-1.04; long-term 1.15, 95% CI 1.07-1.24) of stroke. Long-term PM2.5 exposure was associated with increased risks of dementia (1.16, 95% CI 1.07-1.26), Alzheimer’s disease (3.26, 95% CI 0.84-12.74), ASD (1.68, 95% CI 1.20-2.34), and Parkinson’s disease (1.34, 95% CI 1.04-1.73).” – Fu et al 2019. Similar effects are seen in Bishop et al 2018: “We find that a 1 µg/m³ increase in decadal PM2.5 increases the probability of a dementia diagnosis by 1.68 percentage points.”
  • A study of 20,000 elderly women concluded that “the effect of a 10 µg/m³ increment in long-term [PM2.5 and PM10] exposure is cognitively equivalent to aging by approximately 2 years”. – Weuve et al 2013.
  • “Utilizing variations in transitory and cumulative air pollution exposures for the same individuals over time in China, we provide evidence that polluted air may impede cognitive ability as people become older, especially for less educated men. Cutting annual mean concentration of particulate matter smaller than 10 µm (PM10) in China to the Environmental Protection Agency’s standard (50 µg/m³) would move people from the median to the 63rd percentile (verbal test scores) and the 58th percentile (math test scores), respectively.” – Zhang et al 2018.
  • “Exposure to CO2 and VOCs at levels found in conventional office buildings was associated with lower cognitive scores than those associated with levels of these compounds found in a Green building.” – Allen et al 2016. The effect seems to kick in at around 1,000 ppm of CO2.

Alex again. Here’s one more. Heissel et al. (2019):

“We compare within-student achievement for students transitioning between schools near highways, where one school has had greater levels of pollution because it is downwind of a highway. Students who move from an elementary/middle school that feeds into a “downwind” middle/high school in the same zip code experience decreases in test scores, more behavioral incidents, and more absences, relative to when they transition to an upwind school”

Relatively poor countries with extensive air pollution–such as India–are not simply choosing to trade higher GDP for worse health; air pollution is so bad that countries with even moderate air pollution are getting lower GDP and worse health.

Addendum: Patrick has added a few more.

The post Air Pollution Reduces IQ, a Lot appeared first on Marginal REVOLUTION.

20 Oct 17:45

Marx vs Mises: Rap Battle

by Alex Tabarrok

From Emergent Order, the studio that brought you Keynes v. Hayek (and round two), comes Marx v. Mises. All hail to anyone who can rap, “Now here comes the bomb via Von Bohm-Bawerk.” I was also pleased to see Marginal Revolution makes an appearance, as does the socialist calculation debate. Useful background notes here.

The post Marx vs Mises: Rap Battle appeared first on Marginal REVOLUTION.

13 Aug 20:40

How Monopolies Broke the Federal Reserve

01 Jun 21:23

The dangerous folly of “Software as a Service”

by Eric Raymond

Comes the word that Salesforce.com has announced a ban on its customers selling “military-style rifles”.

The reason this ban has teeth is that the company provides “software as a service”; that is, the software you run is a client for servers that the provider owns and operates. If the provider decides it doesn’t want your business, you probably have no real recourse. OK, you could sue for tortious interference in business relationships, but that’s chancy and anyway you didn’t want to be in a lawsuit, you wanted to conduct your business.

This is why “software as a service” is dangerous folly, even worse than old-fashioned proprietary software at saddling you with a strategic business risk. You don’t own the software, the software owns you.

It’s 2019 and I feel like I shouldn’t have to restate the obvious, but if you want to keep control of your business the software you rely on needs to be open-source. All of it. All of it. And you can’t afford it to be tethered to a service provider even if the software itself is nominally open source.

Otherwise, how do you know some political fanatic isn’t going to decide your product is unclean and chop you off at the knees? It’s rifles today, it’ll be anything that can be tagged “hateful” tomorrow – and you won’t be at the table when the victim-studies majors are defining “hate”. Even if you think you’re their ally, you can’t count on escaping the next turn of the purity spiral.

And that’s disregarding all the more mundane risks that come from the fact that your vendor’s business objectives aren’t the same as yours. This is ground I covered twenty years ago, do I really have to put on the Mr. Famous Guy cape and do the rubber-chicken circuit again? Sigh…

Business leaders should learn to fear every piece of proprietary software and “service” as the dangerous addictions they are. If Salesforce.com’s arrogant diktat teaches that lesson, it will have been a service indeed.

30 May 14:44

An Exercise Program for the Fat Web

by Jeff Atwood

When I wrote about App-pocalypse Now in 2014, I implied the future still belonged to the web. And it does. But it's also true that the web has changed a lot in the last 10 years, to say nothing of the last 20 or 30.

[Image: fat city]

Websites have gotten a lot … fatter.

While I think it's irrational to pine for the bad old days of HTML 1.0 websites, there are some legitimate concerns here. The best summary is Maciej Cegłowski's The Website Obesity Crisis.

To channel a famous motivational speaker, I could go out there tonight, with the materials you’ve got, and rewrite the sites I showed you at the start of this talk to make them load in under a second. In two hours.

Can you? Can you?

Of course you can! It’s not hard! We knew how to make small websites in 2002. It’s not like the secret has been lost to history, like Greek fire or Damascus steel.

But we face pressure to make these sites bloated.

I bet if you went to a client and presented a 200 kilobyte site template, you’d be fired. Even if it looked great and somehow included all the tracking and ads and social media crap they insisted on putting in. It’s just so far out of the realm of the imaginable at this point.

The whole article is essential; you should stop what you're doing and read it now if you haven't already. But if you don't have time, here's the key point:

This is a screenshot from an NPR article discussing the rising use of ad blockers. The page is 12 megabytes in size in a stock web browser. The same article with basic ad blocking turned on is 1 megabyte.

That's right, through the simple act of running an ad blocker, you've cut that website's payload by a factor of twelve. Twelve! That's like the most effective exercise program ever!

Even the traditional advice to keep websites lean and mean for mobile no longer applies because new mobile devices, at least on the Apple side, are faster than most existing desktops and laptops.

Despite claims to the contrary, the bad guy isn't web bloat, per se. The bad guy is advertising. Unlimited, unfettered ad "tech" has crept into everything and subsumed the web.

Personally I don't even want to run ad blockers, and I didn't for a long time – but it's increasingly difficult to avoid running an ad blocker unless you want a clunky, substandard web experience. There's a reason the most popular browser plugins are inevitably ad blockers, isn't there? Just ask Google:

[Image: chrome-best-extensions-google-search]

So it's all the more surprising to learn that Google is suddenly clamping down hard on adblockers in Chrome. Here's what the author of uBlock Origin, an ad blocking plugin for Chrome, has to say about today's announcement:

In order for Google Chrome to reach its current user base, it had to support content blockers — these are the top most popular extensions for any browser. Google strategy has been to find the optimal point between the two goals of growing the user base of Google Chrome and preventing content blockers from harming its business.

The blocking ability of the webRequest API caused Google to yield control of content blocking to content blockers. Now that Google Chrome is the dominant browser, it is in a better position to shift the optimal point between the two goals which benefits Google's primary business.

The deprecation of the blocking ability of the webRequest API is to gain back this control, and to further instrument and report how web pages are filtered, since the exact filters which are applied to web pages are useful information which will be collectable by Google Chrome.

The ad blockers themselves are arguably just as complicit. Eye/o GmbH owns AdBlock and uBlock, employs 150 people, and in 2016 they had 50 million euros in revenue, of which about 50% was profit. Google's paid "Acceptable Ads" program is a way to funnel money into adblockers to, uh, encourage them to display certain ads. With money. Lots … and lots … of money. 🤑

We simultaneously have a very real web obesity crisis, and a looming crackdown on ad blockers, seemingly the only viable weight loss program for websites. What's a poor web citizen to do? Well, there is one thing you can do to escape the need for browser-based adblockers, at least on your home network. Install and configure Pi-Hole.

[Image: pi-hole-screenshot]

I've talked about the amazing Raspberry Pi before in the context of classic game emulation, but this is another brilliant use for a Pi.

Here's why it's so cool. If you disable the DHCP server on your router, and let the Pi-Hole become your primary DHCP server, you get automatic DNS based blocking of ads for every single device on your network. It's kind of scary how powerful DNS can be, isn't it?
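If you want to convince yourself the blocking really happens at the DNS layer, here is a minimal sketch you can run from any machine whose DNS now points at the Pi-Hole. It assumes the Pi-Hole's default behavior of answering blocked names with the 0.0.0.0 sinkhole address, and the two domains are only illustrative examples; whether a particular ad domain is blocked depends on your blocklists.

```python
import socket

# Resolve a couple of names through whatever DNS server this machine is using
# (after the DHCP change described below, that should be the Pi-Hole). Blocked
# names come back as 0.0.0.0 on a default Pi-Hole configuration; ordinary
# names resolve normally.

def resolve(name):
    try:
        return socket.gethostbyname(name)
    except socket.gaierror as err:
        return "lookup failed: {}".format(err)

for name in ("codinghorror.com", "doubleclick.net"):
    addr = resolve(name)
    flag = "(blocked)" if addr == "0.0.0.0" else ""
    print("{:<22} -> {:<16} {}".format(name, addr, flag))
```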

[Image: pi-hole-action-shot]

My Pi-Hole took me about 1 hour to set up, start to finish. All you need is a Raspberry Pi.

I do recommend the 3b+ because it has native gigabit ethernet and a bit more muscle. But literally any Raspberry Pi you can find laying around will work, though I'd strongly advise you to pick one with a wired ethernet port since it'll be your DNS server.

I'm not going to write a whole Pi-Hole installation guide, because there are lots of great ones out there already. It's not difficult, and there's a slick web GUI waiting for you once you complete initial setup. For your initial testing, pick any IP address you like on your network that won't conflict with anything active. Once you're happy with the basic setup and web interface:

  • Turn OFF your router's DHCP server – existing leases will continue to work, so nothing will be immediately broken.
  • Turn ON the pi-hole DHCP server, in the web GUI.

[Image: pi-hole-dhcp-server]

Once you do this, all your network devices will start to grab their DHCP leases from your Pi-Hole, which will also tell them to route all their DNS requests through the Pi-Hole, and that's when the ✨ magic ✨ happens!

[Image: pi-hole-blacklists]

All those DNS requests from all the devices on your network will be checked against the ad blacklists; anything matching is quickly and silently discarded before it ever reaches your browser.

[Image: pi-hole-dashboard-stats]

(The Pi-Hole also acts as a caching DNS server, so repeated DNS requests will be serviced rapidly from your local network, too.)
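If you're curious whether the cache is actually doing anything, a rough way to see it is to time two back-to-back lookups of a name nobody on your network has queried recently. This sketch assumes the client has no caching resolver of its own (systemd-resolved, nscd, and the like would hide the effect), and the domain is just an example; the first lookup has to be forwarded upstream, while the second should come straight from the Pi-Hole's cache.

```python
import socket
import time

# Time two consecutive lookups of the same name. The first answer is forwarded
# upstream by the Pi-Hole; the second should be served from its cache and
# return noticeably faster, assuming no local OS-level DNS cache is in the way.

def timed_lookup(name):
    start = time.perf_counter()
    socket.gethostbyname(name)
    return (time.perf_counter() - start) * 1000.0  # milliseconds

name = "example.org"
print("first lookup : {:6.1f} ms".format(timed_lookup(name)))
print("second lookup: {:6.1f} ms".format(timed_lookup(name)))
```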

If you're worried about stability or reliability, you can easily add a cheap battery backed USB plug, or even a second backup Pi-Hole as your secondary DNS provider if you prefer belt and suspenders protection. Switching back to plain boring old vanilla DNS is as easy as unplugging the Pi and flicking the DHCP server setting in your router back on.

At this point if you're interested (and you should be!), just give it a try. If you're looking for more information, the project has an excellent forum full of FAQs and roadmaps.

[Image: pi-hole-forums]

You can even vote for your favorite upcoming features!

I avoided the Pi-Hole project for a while because I didn't need it, and I'd honestly rather jump in later when things are more mature.

[Image: pi-hole-pin]

With the latest Chrome crackdown on ad blockers, now is the time, and I'm impressed how simple and easy Pi-Hole is to run. Just find a quiet place to plug it in, spend an hour configuring it, and promptly proceed to forget about it forever as you enjoy a lifetime subscription to a glorious web ad instant weight loss program across every single device on your network with (almost) zero effort!

Finally, an exercise program I can believe in.

14 Mar 12:18

Fed vs. Narrow Banks

by noreply@blogger.com (John H. Cochrane)
Suppose an entrepreneur came up with a plan for a financial institution that is completely safe -- it can never fail, it can never suffer a run, it offers depositors perfect safety with no need for deposit insurance, asset risk regulation, capital requirements, or the rest, and it pays depositors more interest than they can get elsewhere.

Narrow banks are such institutions.  They take deposits and invest the proceeds in interest-bearing reserves at the Fed. They pay depositors that interest, less a small profit margin. Pure and simple. Economists have been calling for narrow banks since at least the 1930s.

You would think that the Fed would welcome narrow banks with open arms.

You would be wrong.

The latest chapter in the Fed's determined effort to quash The Narrow Bank (TNB) and at least one other effort to start a narrow bank is unfolding. (Previous posts here and here.)

Last year, TNB sued the Fed for refusing to allow TNB an account at the Fed at all. The Fed has just now filed a motion to dismiss the suit. The Fed has also issued an advance notice of proposed rule making, basically announcing that it would, on a discretionary basis, refuse to pay interest on reserves to any narrow bank. In case anyone gets a bright idea to take a small bank that already has a master account and turn it into a narrow bank, thereby avoiding TNB's legal imbroglio, take note: the Fed will pull the rug out from under you.

I find both documents outrageous. The Fed is acting as a classic captured regulator, defending the oligopoly position of big banks against unwelcome competition, its ability to thereby coerce banks to do its bidding, and to run a grand regulatory bureaucracy, against competitive upstarts that will provide better products for the economy, threaten the systemically dangerous big bank oligopoly, and reduce the need for a large staff of Fed regulators.

I state that carefully, "acting as." It is my firm practice never to allege motives, a habit I find particularly annoying among a few other economics bloggers. Everyone I know at the Fed is a thoughtful and devoted public servant and I have never witnessed a whiff of such overt motives among them. Yet institutions can act in ways that people in them do not perceive. And certainly if one had such an impression of the Fed, which a wide swath of observers from the Elizabeth Warren left to  Cato Institute anti-crony capitalism libertarians do, nothing in these documents will dissuade them from such a malign view of the institution's motives, and much will reinforce it.  

On the outrage scale, the first paragraph of the Fed's motion to dismiss takes the cake:


Plaintiff TNB USA Inc. (“TNB”) asks the Court for a declaratory judgment and an injunction compelling Defendant Federal Reserve Bank of New York... to accept deposits from TNB so that it can arbitrage a critical interest rate the Federal Reserve uses to fulfill its statutory mandate to set and execute United States monetary policy. ... TNB seeks to open a deposit account at the New York Fed not so that it can engage in the typical business of banking, but solely so that TNB can park the funds of its wealthy, institutional depositors in the account and pass TNB’s IOER earnings on to them, after taking a cut for itself.
(Emphasis mine.) The perfectly normal business of taking deposits and investing them is now maligned as "arbitrage." Moreover, this is precisely what a large swath of the banking and financial system is doing right now. Government agencies cannot invest in reserves, so they are depositing money with banks at rates you and I can't get, with those funds going straight to reserves. Many of the interest-paying reserves are flowing through foreign banks this way. The Fed allows money market funds to invest in reserves through its reverse repo program. Apparently money market funds investing in reserves is fine, but a bank doing exactly the same thing is a disparaged "arbitrage."

But savor the last sentence. Wealthy? The Fed is now in the income-redistribution business, dog-whistling inequality, and "wealthy" investors' legal rights are to be disparaged? Is no one "wealthy" or earning profits among the management or customer base of, say, Chase? (Actually, there is a reason that TNB set up only to take institutional money: not love of the wealthy, but to avoid the regulatory costs imposed by the Fed if one takes retail deposits.)

"The typical business of banking" reveals a Fed view reinforced elsewhere in the documents. The Fed  apparently does view its habit of paying large interest on reserves as a subsidy to banks who receive them, and it expects banks to use that money to cross subsidize other activities according to the Fed's wishes. That narrow banks might undercut such cross-subsidies is clearly its major concern.

Much of the rest concerns the legal question whether TNB can sue the Fed. Most of this is not my expertise or relevant to the policy questions. Some is strained to the point of hilarity -- if you're not TNB.
TNB does not have standing to bring this action. TNB’s application for an account at the New York Fed is still under consideration. TNB thus has suffered no injury in fact,
The Fed, like many regulators faced with uncomfortable decisions, has chosen the path of endless delay -- precisely why TNB is suing them. If it delays long enough, then TNB will run out of money and give up. This is a larger issue in all regulatory agencies, and a good reason for a shot clock regulatory reform.

It goes on to the idea that the Fed's discretion to pay interest extends to discretion to treat individual banks differently based on the Fed's unlimited discretion to reward business models it likes and punish business models it doesn't like. I hope the court stamps that one out forcefully. And finally it reiterates the incoherent policy arguments of the proposed rule.

The advance notice of proposed rule making is an even more revealing document. In an era of supposedly science-based policy, it contains nothing more than speculation -- might, could, may -- about vague possible problems, and offers no argument or evidence, just assertion of the Fed's "beliefs" against standard arguments for narrow banks. At least it acknowledges the latter exist.

It starts with subtle denigration. The notice calls them
narrowly focused depository institutions (Pass-Through Investment Entities or PTIEs) 
refusing to use the word "bank," which is what they are. They are state-chartered banks, with every legal right to have accounts at the Fed.

Obtaining a master account would
enable PTIEs to earn interest on their balances at a Reserve Bank at the IORR and IOER rate, yet at the same time avoid the costs borne by other eligible institutions, such as the costs of capital requirements and the other elements of federal regulation and supervision, because of the limited scope of their product offerings and asset types.
You get the sense right away. Wait, that's unfair competition! They don't have to pay big regulatory costs! Indeed they don't, because the regulations are somewhat sensible and recognize that narrow banks pose absolutely zero systemic danger. That they can avoid regulatory costs is a plus, not a minus! You can see how a reader infers the Fed wants to protect big banks from "unfair" competition.

The Fed's ostensible arguments are different,
The Board is concerned that PTIEs, ...have the potential to complicate the implementation of monetary policy.... the Board is concerned that PTIEs could disrupt financial...intermediation in ways that are hard to anticipate, and could also have a negative effect on financial stability,....
Concerned. Have the potential to. Could disrupt. Hard to anticipate. Could have. On that basis we write rules denying a financial innovation that has been advocated for nearly a century, and has the potential to end financial crises forever?

Let's look at the arguments.

1. "Complicate" monetary policy
some market participants have argued that the presence of PTIEs could help the implementation of monetary policy. ... the activities of PTIEs could narrow the spread between short-term rates and the IOER rate, potentially strengthening the ability of the Federal Reserve to manage the level of short-term interest rates.
Count me in! Right now, interest on excess reserves is not spreading uniformly throughout the financial system. Banks are remarkably uncompetitive. For example, when the Fed was paying 0.25% on reserves, my bank, Chase, paid 0.01% on my checking account. Now that Chase is getting 2.4% on its abundant reserves, and all market rates have risen accordingly, Chase is paying....


the same lousy 0.01% on checking accounts. Well, obviously there is not much competition for deposits. Competition from narrow banks would force interest rates closer to the Fed's IOER. The same is true for the rather puzzling spreads between IOER, treasury rates, and various overnight rates for institutions that aren't banks. Arbitrage indeed. Arbitrage is good -- it forces rates together.
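To put rough numbers on that spread, here is a back-of-the-envelope sketch. Only the 2.4% IOER and the 0.01% checking rate come from the figures quoted above; the operating margin and deposit size assumed for a hypothetical narrow bank are purely illustrative.

```python
# Back-of-the-envelope version of the arbitrage argument. Only the IOER and
# checking rates come from the post; the narrow bank's margin and the deposit
# size are made-up numbers for illustration.

ioer = 0.0240                # interest the Fed pays on excess reserves
big_bank_checking = 0.0001   # what Chase pays on checking, per the post
narrow_bank_margin = 0.0025  # assumed spread a narrow bank keeps for itself

deposit = 100_000
narrow_bank_rate = ioer - narrow_bank_margin

print("big bank pays:    ${:,.2f} per year".format(deposit * big_bank_checking))
print("narrow bank pays: ${:,.2f} per year".format(deposit * narrow_bank_rate))
print("spread competition can chip away: {:.2f} percentage points".format(
    100 * (narrow_bank_rate - big_bank_checking)))
```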

The Fed says nothing to counter this argument. Instead, it offers a falsehood and an easily contradicted worry about balance sheets.
The viability of the PTIE business model relies on the IOER rate being slightly above the level of certain other key overnight money market rates. 
First, this is false. Anything more than the 0.01% Chase is paying will make a narrow bank go. TNB, the obvious target of all this, was indeed trying to offer money market funds a bit more than they can get on their deposits at the Fed. But the newer narrow banks are going after large commercial and retail deposits that don't care about that small spread.

But just why is the Fed holding IOER above market rates, and above the rate it pays money market funds, contrary to the clear statement of the law? If this is the problem, the Fed can just offer money market funds IOER and PTIEs would disappear. It is the Fed's desire to pay big banks a bit more than everyone else that is causing the problem, if there is one, in the first place.
The ability of PTIEs to attract a very large amount of deposits...could affect the FOMC’s plans to reduce its balance sheet to the smallest level consistent with efficient and effective implementation of monetary policy. ... In order to maintain the desired stance of monetary policy, the Federal Reserve would likely need to accommodate this demand by expanding its balance sheet and the supply of reserves. 
This just makes no sense at all. The Fed controls the supply of reserves, period. If the Fed refuses to buy assets, reserves are what they are, and other interest rates adjust. To bring money to a narrow bank, a depositor must tell his or her current bank to transfer reserves to the narrow bank. The current bank may have to sell assets, driving up interest rates until market rates equal the rate on reserves. When quantities do not adjust, prices do. You can tell the confusion by the bureaucratese. What is the "stance of monetary policy" here if not the interest rate on reserves and the balance sheet?

Anyway, why is the size of the balance sheet important? The Fed is doing a very curious dance, trying to set a price (the interest rate) and a quantity at the same time. If the Fed wants to set interest rates at, say, 2.4%, it should say "we trade short-term treasuries for reserves at 2.4%. Buy and sell, come and get them." It should say that to anyone. Why is a large balance sheet consisting only of short-term treasuries a problem?

Continuing,
PTIEs could be an attractive investment for lenders in short-term funding markets such as the federal funds market. If the current lenders in the federal funds market shifted much of their overnight investment to deposits at PTIEs, the federal funds rate could become volatile. Such a development could require the FOMC to change its policy target on relatively short notice. Moreover, a marked change in the volatility of the federal funds rate could have spillover effects in many other markets that are linked to the federal funds rate such as federal funds futures, overnight index swaps, and floating-rate bank loans.
Could. If. Could. Could. Could. Is there a single fact here, other than rampant speculation, solid enough to justify denying people the right to start a perfectly legal business? Moreover, the analysis is airy speculation. Federal funds is where banks lend money overnight to other banks. The federal funds market is already essentially dead, because banks are holding a huge supply of reserves. That's a good thing. Lenders in the federal funds market are already banks, and they can access reserves! What fed funds lender is going to give money to another bank to invest in reserves rather than do so directly! Contrariwise, arbitrage makes prices more equal, not less. Why in the world would more access to IOER make any other rate more volatile? And if fed funds become more volatile, let indexes move to Libor, general collateral, or other rates. Do we forbid promising businesses to start because current indexing contracts are written in stone somewhere?

2. Financial intermediation (and protecting the banks)
Deposits at PTIEs, as noted above, could become attractive investments for many lenders in overnight funding markets. Lenders in the overnight general collateral (“GC”) repo market could find PTIE deposits more attractive than continued activity in the overnight GC repo market. 
That's the whole point! Has the Fed forgotten October 2008 when the repo market froze and we had a little financial crisis? Money invested in repos leads to financial crises. Money invested in narrow banks cannot spark financial crises. The Fed should be cheering a demise of the repo market in favor of narrow banks!

What does the Fed have to say? 
If the rise of PTIEs were to reduce demand for GC repo lending, securities dealers could find it more costly to finance their inventories of Treasury securities. 
Aw, gee. Isn't this the same Fed that was railing at "wealthy" investors making "arbitrage" profits? 
PTIEs could also diminish the availability of funding for commercial banks generally. To the extent that deposits at PTIEs are seen as a more attractive investment for cash investors that currently hold bank deposits, these investors could shift some of their investments from deposits issued by banks to deposits with PTIEs. This shift in investment, in turn, could raise bank funding costs and ultimately raise the cost of credit provided by banks to households and businesses.
Now we're getting somewhere. Here it is boldface: The Fed is subsidizing commercial banks by paying interest on reserves, allowing the banks to pay horrible rates on deposits, because the Fed thinks out of banks'  generosity -- or regulatory pressure -- banks will turn around and cross-subsidize lending to households and businesses rather than just pocket the spread themselves. 

Regulators forever have stifled competition to try to create cross-subsidies. Airline regulators thought upstart airlines would skim the cream of New York to Chicago flights and undermine cross-subsidies to smaller cities. Telephone regulators thought competitive long-distance would undermine cross-subsidies to residential landlines. 

The long-learned lesson elsewhere is that regulation should not try to enforce cross-subsidies, especially by banning competition. You and I should not be forced to earn low deposit rates, and innovative businesses stopped from serving us, if the Fed wants to subsidize lending. 

And needless to say, buttressing big bank profits, stopping competition, and then hoping the big banks turn around to pass on the cross subsidy to lenders rather than take it as profits, is nowhere in the Fed's charter. 

If deposits flee to narrow banks, then let banks raise money with long-term debt and equity. The transition to a run free financial system will happen on its own, and we will never have financial crises again. If the US government wishes to subsidize lending, let it do so by writing checks to borrowers, not through financial repression of depositors. 
Some have argued that the presence of PTIEs could play an important role in raising deposit rates offered by banks to their retail depositors.
Yes, me! See above snapshot. 
The potential for rates offered by PTIEs to have a meaningful impact on retail deposit rates, however, seems very low. ...retail deposit accounts have long paid rates of interest far below those offered on money market investments, reflecting factors such as bank costs in managing such retail accounts and the willingness of retail customers to forgo some interest on deposits for the perceived convenience or safety of maintaining balances at a bank rather than in a money market investment. 
That makes no sense at all. In 2012, Chase paid 0.01%, and IOER was 0.25%. In 2019, Chase is paying 0.01% and IOER is 2.4%. The costs of managing retail accounts have not risen 2.15 percentage points. Retail deposit rates are very slow moving because banking is very uncompetitive. Banking is very uncompetitive because the Fed has placed huge regulatory barriers in the face of competition. And it is in the process of doing so again. 

3. Financial stability. The big issue.
Some have argued that deposits at PTIEs could improve financial stability because deposits at PTIEs, which would be viewed as virtually free of credit and liquidity risk, would help satisfy investors’ demand for safe money-like instruments. According to this line of argument, the growth of PTIEs could reduce the creation of private money-like assets that have proven to be highly vulnerable to runs and to pose serious risks to financial stability. Some might also argue that PTIE deposits could reduce the systemic footprint of large banks by reducing the relative attractiveness to cash investors of deposits placed at these large banks.
Yes, yes, a thousand times yes! By just allowing narrow banks, we will move to an equity-financed, run-free financial system. Economists have been calling for this since the Chicago Plan of the 1930s. What does the Fed have to offer?
The Board believes, however, that the emergence of PTIEs likely would have negative financial stability effects on net. Deposits at PTIEs could significantly reduce financial stability by providing a nearly unlimited supply of very attractive safe-haven assets during periods of financial market stress. PTIE deposits could be seen as more attractive than Treasury bills, because they would provide instantaneous liquidity, could be available in very large quantities, and would earn interest at an administered rate that would not necessarily fall as demand surges. As a result, in times of stress, investors that would otherwise provide short-term funding to nonfinancial firms, financial institutions, and state and local governments could rapidly withdraw that funding from those borrowers and instead deposit those funds at PTIEs. The sudden withdrawal of funding from these borrowers could greatly amplify systemic stress.
In short, in the face of nearly a century of careful thought about narrow and equity-financed banking, the Fed has nothing coherent to offer, only this will-o'-the-wisp. This argument does not pass basic budget-constraint, supply-and-demand thinking.

There is, if the Fed wishes, a fixed supply of reserves, and thus a fixed supply of narrow bank deposits. There is nowhere to run to. Moreover, the Fed does not try to fight runs by forcing investors to hold risky assets. In a crisis, the Fed is on the frontlines, buying assets and issuing reserves as fast as it can. The Fed itself makes the supply of reserves elastic in a crisis. If narrow banks could do this -- if they could take in risky assets and offer reserve-backed deposits in exchange -- the Fed should be cheering the final answer to the central cause of runs. Alas they cannot.

If narrow banks posed a systemic risk in this way, federal money market funds or treasury bills themselves would do so -- after all, people can (try to) run to those too in a crisis. The tiny differences between narrow bank deposits, reserves, federal money market funds, or underlying short-term treasuries are irrelevant. Whether some of the supply of treasuries flows through the Fed, to narrow banks, to deposits, or flows through money market funds, to those investments, or is held directly by large banks is also irrelevant, and certainly not a basis on which to deny one slightly different flavor of this intermediation.

Now it is true that in a crisis the fixed supply of reserves could drain from big banks to narrow banks, if once again, despite the Fed's massive asset-risk regulation, the big banks lose a lot of money, if once again the Fed's liquidity and capital regulations prove ineffective, and so forth. But once again, we see the Fed protecting big banks' prerogative to continue in a systemically dangerous and massively leveraged way. And even then, the narrow bank option is so close to treasuries and money market funds that the kinds of large institutional investors that would run can do so just as easily.

But the point is that investors already in PTIE deposits don't need to run anywhere. We trade a system based on a small amount of run-free government money and a large amount of run-prone private money for a system with a large amount of run-free government money (reserves, transmitted through narrow banks) and a small amount of run-prone private money (repo, overnight lending). There is a lot less run-prone shadow banking to run from in a crisis. We get rid of this whole crazy idea that risk management consists of "we'll sell assets on the way down."

The Fed ruminated over this argument when debating whether to allow money market funds access to  IOER. It seems the incoherence of the argument settled in, and now the Fed is happy with money market fund access to IOER. What is the difference between money market funds and narrow banks? Only that the latter threaten to compete away cheap funding for large commercial banks.
In addition to the foregoing, the Board is also seeking comment on the following questions: 
1. Has the Board identified all of the relevant public policy concerns associated with PTIEs? Are there additional public policy concerns that the Board should consider?
Yes. There is the perception, if not the reality, that the Fed is acting in the interests of big banks to enforce their oligopoly, to hold down deposit rates thereby boosting bank profits, and to needlessly expand its regulatory reach. Allowing narrow banks to compete mitigates all of these.
2. Are there public policy benefits of PTIEs that could outweigh identified concerns?
PTIEs, as you insist on calling them, have for 90 years been identified as the one crucial innovation that can end financial crises forever, just as the move from private banknotes to treasury notes forever ended runs on those notes in the 19th century. Deposits moving to narrow banks will lead regular commercial banks to a much higher equity position naturally, without the Fed having to push.

3. If the Board were to determine to pay a lower IOER rate to PTIEs, how should the Board define those eligible institutions to which a lower IOER rate should be paid?
The board should offer the same rate to all comers.
4. If the Board were to determine to pay a lower IOER rate to PTIEs, what approach should the Board adopt for setting the lower rate?
The board should not discriminate.
5. Are there any other limitations that could be applied to PTIEs that might increase the likelihood that such institutions could benefit the public while mitigating the public policy concerns outlined above?
None.

PS. Dear Fed: These vague, unscientific, speculative and incoherent arguments -- many of which would make easy spot-the-fallacy exam questions -- make you look foolish. They reinforce every negative stereotype: that you serve the interests of big banks, that you have no idea how financial crises work. Welcome narrow banks with open arms. Give them a non-systemic-danger medal.

Update: A last thought, not mentioned here. If anyone should worry about narrow banks it is money market funds, especially those that hold only treasuries. Treasuries -> Fed -> reserves -> narrow bank -> consumer is better than Treasuries -> money market fund -> consumer. Money market funds take a day or two to settle, bank accounts are instant, and could now pay higher interest.

Here, I think the Fed should cheer. I bet it won't. Though the rise of money market funds caused a huge headache in the 1980s, the Fed seems to like them now. Some of the reason for a quarter percent rather than zero interest rate target was to keep money market funds alive, and the Fed now allows money market funds to invest in reserves. Doing it through narrow banks will engage the Fed's balance sheet (hold more treasuries, issue more reserves). Underlying all of this, the Fed seems determined to lower the size of its balance sheet, even though we have learned from the last 10 years that a large supply of reserves is a great thing. I think this desire is more political than economic, which is not to dismiss it.



29 Jan 14:59

Masculine Virtues

by Jacob Falkovich
Gillette and Roger Federer demonstrate the difference between traditional and toxic masculinity, who hates hierarchies and why, and the virtues learned by watching sports.
08 Dec 19:13

Emails Show Facebook Is Well Aware That Tracking Contacts Is Creepy

by John Gruber

Kashmir Hill, in an excellent piece for Gizmodo:

Then a man named Yul Kwon came to the rescue saying that the growth team had come up with a solution! Thanks to poor Android permission design at the time, there was a way to update the Facebook app to get “Read Call Log” permission without actually asking for it. “Based on their initial testing, it seems that this would allow us to upgrade users without subjecting them to an Android permissions dialog at all,” Kwon is quoted. “It would still be a breaking change, so users would have to click to upgrade, but no permissions dialog screen. They’re trying to finish testing by tomorrow to see if the behavior holds true across different versions of Android.”

Oh yay! Facebook could suck more data from users without scaring them by telling them it was doing it! This is a little surprising coming from Yul Kwon because he is Facebook’s chief ‘privacy sherpa,’ who is supposed to make sure that new products coming out of Facebook are privacy-compliant. I know because I profiled him, in a piece that happened to come out the same day as this email was sent. A member of his team told me their job was to make sure that the things they’re working on “not show up on the front page of the New York Times” because of a privacy blow-up. And I guess that was technically true, though it would be more reassuring if they tried to make sure Facebook didn’t do the creepy things that led to privacy blow-ups rather than keeping users from knowing about the creepy things.

The Facebook executives who approved this ought to be going to jail. Facebook is to privacy what Enron was to accounting.

07 Dec 17:21

Brexit and democracy

by noreply@blogger.com (John H. Cochrane)
Tyler Cowen has a very interesting Bloomberg column on Brexit. Essentially, he views the UK getting this right -- which I agree it does not seem to be doing -- as a crucial test of democracy. Tyler notes that the current agreement serves neither leave nor remain sides well.
Brexit nonetheless presents a decision problem in its purest form. It is a test of human ingenuity and reasonableness, of our ability to compromise and solve problems...
The huge barrier, of course, is the democratic nature of the government.... 
So many of humanity’s core problems — addressing climate change, improving education, boosting innovation — ultimately have the same structure as “fixing Brexit.” It’s just that these other problems come in less transparent form and without such a firm deadline. We face tournament-like choices and perhaps we will not end up doing the right thing.
...Brexit would likely cost the U.K. about 2 percent of GDP, a fair estimate in my view. But that is not the only thing at stake here. Humanity is on trial — more specifically, its collective decision-making capacity — and it is the U.K. standing in the dock. 
I guess I have a different view of the merits and defects of democracy. My view is somewhat like the famous Churchill quote, "democracy is the worst form of Government. Except for all those other forms."

Democracy does not give us speedy technocratically optimal solutions to complex questions revolving around 2 percentage points of GDP. Democracy, and US democracy in particular, serves one great purpose -- to guard against tyranny. That's what the US colonists were upset about, not the fine points of tariff treaties. US and UK Democracy, when paired with the complex web of checks and balances and rule of law protections and constitutions and so forth, has been pretty good at throwing the bums out before they get too big for their britches. At least it has done so better than any other system.

2 percentage points of GDP? Inability to tackle long run issues? Let's just think of some of Democracy's immense failures that put the Brexit muddle to shame. The US was unable to resolve slavery, for nearly 100 years, without civil war. Democracies dithered in the 1930s and appeased Hitler.  The scar of Vietnam  is still festering in US polarization today. On the continent, when France stood for democracy and Germany for autocracy, France's defense decisions failed dramatically in 1914 and 1939.

And if we want to raise UK GDP by 2 percentage points, with free-market reforms, there is a lot worse than Brexit simmering on the front burner. A team from Cato and Hoover could probably raise GDP by 20 percent inside a year. If anyone would pay the slightest attention to us.

Yes, Brexit is a muddle which nobody will be happy with, until the UK decides if it really would rather remain or become a free-market beacon on the edge of the continent. But do not judge democracy on it. Democracy's errors as the mechanism for collective decision-making capacity have been far worse. And then there are the failures of all the other options.
07 Aug 11:50

Who will pay unfunded state pensions?

by noreply@blogger.com (John H. Cochrane)
Homeowners. So says a nice WSJ op-ed by Rob Arnott and Lisa Meulbroek, and a proposal by Chicago Fed economists Thomas Haasl, Rick Mattoon, and Thomas Walstrum.

The latter was a modest proposal, in the Jonathan Swift tradition. Despite Crain's Chicago Business instantly labeling it "foolish," "inhumane," and "the dumbest solution yet," the first article points out its inevitability. If indeed courts will insist that benefits may not be cut, then state governments must raise taxes, and the property tax is the only one that can do the trick.

States can try to raise income taxes. And people will move. States can try to raise business taxes. And  businesses will move. What can states tax that can't move? Only real estate. If the state drastically raises the property tax, there is no choice but to pay it. You can sell, but the new buyer will be willing to pay much less. Pay the tax slowly over time, or lose the value of the property right away in a lower price.  Either way, the owner of the property on the day the tax is announced bears the burden of paying off the pensions.
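To see the capitalization arithmetic in one line, take a house worth V before the announcement, a new permanent property tax at rate τ assessed on that value, and a discount rate r. This is a rough sketch under strong simplifications (the tax is perpetual, assessed on the pre-announcement value, and discounted at a constant rate):

```latex
\text{PV of the new tax stream} \;=\; \sum_{t=1}^{\infty} \frac{\tau V}{(1+r)^{t}} \;=\; \frac{\tau V}{r},
\qquad
P_{\text{announcement}} \;=\; V - \frac{\tau V}{r}.
```

With r = 5%, for example, a new levy of one percentage point wipes out roughly 20% of the house's value on the day it is announced, which is exactly the "lose the value right away in a lower price" half of the trade-off.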

There is an economic principle here, the "capital levy." A government in trouble has an incentive to grab existing capital, once, and promise never to do it again. The promise is important, because if people know that a capital levy is coming they won't invest (build houses). If the government can pull it off, it is a tax that does not distort decisions going forward. Of course, getting people to believe the promise and invest again after the capital levy is... well, let's say a tricky business. Governments that do it once have a tendency to do it again.

In sum, a property tax is essentially the same thing as the government grabbing half the houses and selling them off to meet pension obligations. And unless a miracle happens, it is the only way out.

Update: We're there already, say Orphe Divounguy, Bryce Hill, and Joe Tabor at Illinois Policy. The bulk of recent increases in property taxes have gone to pay for pensions, not more teachers, police, etc.

Update 2: I should clarify that I found this an interesting piece of economics more than anything else. I do not think this is the right solution, nor is it the only one. Most other countries around the world, having made unsustainable pension promises, find some way around them and reduce pensions. It happens. Some sort of federal bailout is not unthinkable either. Moreover, a suddenly announced, once-and-for-all surprise property-tax increase is unlikely; see Update 1. So the states are likely to reap many disincentive effects of expected increases in property and other taxes.

Finally, and most importantly, property-tax payers vote! They are unlikely to sit still for such a mass expropriation of their wealth.
13 Jun 11:47

Cross-subsidies

by noreply@blogger.com (John H. Cochrane)
Cross-subsidies are an under-appreciated original sin of economic stagnation. To transfer money from A to B, it would usually be better to raise taxes on A and to provide vouchers or otherwise pay competitive suppliers on behalf of B. But our political system doesn't like to admit the size of government-induced transfers, so instead we force businesses to undercharge B. Since they have to cover costs, they must overcharge A. It starts as the same thing as a tax on A to subsidize B. But a cross-subsidy cannot withstand competition. Someone else can give A a better price. So our government steps in to block that competition. That ruins the underlying markets, and the next thing you know everyone is paying more for less.

This was the story of airlines and telephones: The government wanted to subsidize airline service to small cities, and residential landlines, especially rural. It forced companies to provide those at a loss and to cross-subsidize those losses from other customers, big city connections and long distance. But then the government had to stop competitors from undercutting the overpriced services. And as those deregulations showed, the result was inefficiency and high prices for everyone.

Health care and insurance are the screaming example today. The government wants to provide health care to the poor, the old, and other groups. It does not want to forthrightly raise taxes and pay for their health care in competitive markets. So it forces providers to charge those groups less, and to make it up by overcharging the rest of us. But overcharging cannot stand competition, so gradually the whole system became bloated and inefficient.

A Bloomberg article "Air Ambulances Are Flying More Patients Than Ever, and Leaving Massive Bills Behind" by  John Tozzi offers a striking illustration of the phenomenon, and much of the mindset that keeps our country from fixing it.

The story starts with the usual human-interest tale, a $45,930 bill for a 70 mile flight for a kid with a 107 degree fever.
At the heart of the dispute is a gap between what insurance will pay for the flight and what Air Methods says it must charge to keep flying. Michael Cox ... had health coverage through a plan for public employees. It paid $6,704—the amount, it says, Medicare would have paid for the trip.   
The air-ambulance industry says reimbursements from U.S. government health programs, including Medicare and Medicaid, don’t cover their expenses. Operators say they thus must ask others to pay more—and when health plans balk, patients get stuck with the tab.
Seth Myers, president of Air Evac, said that his company loses money on patients covered by Medicaid and Medicare, as well as those with no insurance. That's about 75 percent of the people it flies. 

[Chart source: Bloomberg.com]
According to a 2017 report commissioned by the Association of Air Medical Services, an industry trade group, the typical cost per flight was $10,199 in 2015, and Medicare paid only 59 percent of that.
So, I knew about cross-subsidies, but $45,930 vs. $6,704 is a lot!

OK, put your economics hats on. How can it persist that people are double and triple charged what it costs to provide any service? Why, when an emergency room puts out a call, "air ambulance needed, paying customer alert" are there not swarms of helicopters battling it out -- and in the process driving the price down to cost?


Supply is always the answer -- and the one just about everyone forgets, as in this article.

I don't know the regulation, and the article doesn't go near it, so I will hazard guesses.

a) Not just any helicopter will do. Look at any small airport. There are a lot of helicopters hanging around whose owners would jump in a flash for an uber-helicopter call that pays $45,000. So, it must be true that in every such case you have to have an air ambulance. Which makes a lot of sense, of course -- the helicopter should have the standard kind of life-saving equipment on it. But clearly the emergency room is only going to call and allow an air ambulance.

b) Air ambulances must be properly certified and licensed. OK, but there are still lots of people who could go into this business, or the ones who are there could bid aggressively. That brings us to

c) I'm willing to bet that part of the conditions for a license is that operators must carry anyone regardless of ability to pay, and not ask any financial questions.

Competition for paying customers must be banned. Only such a ban can explain the crazy situation. If there were any way to compete for the paying customers, it would happen and the problem would evaporate.

The article comes close to confirming this suspicion.
“I fly people based on need, when a physician calls or when an ambulance calls,” he [Seth Myers] said. “We don’t know for days whether a person has the ability to pay.”
The alternative? Well, pass a tax on air ambulance rides, and use the proceeds to pay for rides for the poor or indigent. It's the same thing -- except with a tax, there needs to be no regulation or bar on competition. Or pass an income tax surcharge and do the same thing. Yes, I don't like taxes any more than you do -- but given we're going to grossly subsidize air ambulance rides, a tax and subsidy is much more efficient than banning competition and allowing an ex-post free-for-all price gouge.

The article is most revealing, I think, in that neither the author nor anyone he interviews even thinks of supply. Their explanations are the usual ones: demand, negotiating ability, and lack of regulation.

It is true that when you are faced with an emergency, when a loved one needs an air ambulance and is in danger of dying, you are in a very poor position to negotiate. But supply competition should solve that problem. If you can get $45,000 for a 70 mile helicopter ride, competing helicopter companies would have representatives sitting in the emergency rooms! When you arrive at an airport at 11 pm and want a rental car, you're not in a great negotiating position either. Somehow they don't charge $45,000 then! Why not? Supply competition -- and the need to have good reputations in any business.

The ex-post negotiation is surreal. 
For people with private insurance, short flights in an air ambulance are often followed by long battles over the bill.  
Consumer groups and insurers counter that air-ambulance companies strategically stay out of health-plan networks to maximize revenue. 
[This is an increasingly common scam. The hospital may be in network, but many emergency room teams are out of network contractors. You find out when you wake up.]
...the Cox family went through two appeals with their health plan. After they retained a lawyer, Air Methods offered to reduce their balance to $10,000 on reviewing their tax returns, bank statements, pay stubs, and a list of assets. The family decided to sue instead. [My emphasis]
“I felt like they were screening us to see just how much money they could get out of us,” Tabitha Cox said. 
You got it, Mrs. Cox. On what planet do you get on a helicopter with no mention of cost, and then the operator afterwards looks at your tax returns, bank statements, pay stubs and lists of assets to figure out how much you can pay? Only universities get away with that outside of health care!

The reporter put the blame squarely on ...  wait for it... the lack of price controls and other regulations.
Favorable treatment under federal law means air-ambulance companies, unlike their counterparts on the ground, have few restrictions on what they can charge for their services. Through a quirk of the 1978 Airline Deregulation Act, air-ambulance operators are considered air carriers—similar to Delta Air Lines or American Airlines—and states have no power to put in place their own curbs. 
Air-ambulance operators’ special legal status has helped them thwart efforts to control their rates. West Virginia's legislature passed a law in 2016 capping what its employee-health plan—which covered West Cox—and its worker-compensation program would pay for air ambulances
It is a sad day in America that the average reporter, faced with insane pricing behavior, can only come up with the lack of price control and regulation as an explanation. If voters don't understand that consumer protection comes from supply competition, we cannot expect politicians to shove that enlightenment down our throats.

Does it take a genius to figure out what price controls mean? Well, Medicare, Medicaid and indigent people aren't about to pay the cost. So if the companies can't cover costs by looking at our tax returns and coming up with a tailored price gouge for each of us, that means fewer air ambulance flights. The kid with the 107 degree temperature will end up driving in rush hour traffic to the hospital that can help him. Some will die in the process. Actually, it means who "needs" an air ambulance will depend on connections.

That's the problem with negotiation as the answer to everything. Negotiation can shift costs from one person to another, but we can't all negotiate for a better deal.

Actually, there is some supply competition -- just not competition of the sort that brings down costs for non-indigent customers. The business has grown in response to its overall profitability.
The number of aircraft grew faster than the number of patients flown. In the 1990s, each helicopter flew about 600 patients a year, on average, according to Blumen’s data. That's fallen to about 350 in the current decade, spreading the expense of keeping each helicopter at the ready among a smaller pool of patients. 
While adding helicopters has expanded the reach of emergency care, “there are fewer and fewer patients that are having to pay higher and higher charges in order to facilitate this increase in access,” Aaron D. Todd, chief executive officer of Air Methods, said on an earnings call in May of 2015, before the company was taken private. “If you ask me personally, do we need 900 air medical helicopters to serve this country, I'd say probably not,” he said. 
If there are too many helicopters for the number of patients who need them, market forces should force less-efficient operators out of business...
Now pick up your jaw off the floor. So, the answer to inadequate supply competition is to ... reduce supply!
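A rough back-of-the-envelope version of the utilization arithmetic in the excerpt above. Only the 600 and 350 patients-per-helicopter figures come from the article; the annual cost of keeping a helicopter staffed and ready is a purely hypothetical placeholder.

# Back-of-the-envelope: fixed cost recovered per flight as utilization falls.
# The $3,000,000 annual fixed cost per helicopter is hypothetical; the patient
# counts (about 600 per year in the 1990s, about 350 now) are from the article.
ANNUAL_FIXED_COST = 3_000_000

for patients_per_year in (600, 350):
    per_flight = ANNUAL_FIXED_COST / patients_per_year
    print(f"{patients_per_year} patients/year -> ${per_flight:,.0f} of fixed cost per flight")

Whatever the true fixed cost, going from 600 to 350 flights per helicopter raises the fixed cost each flight must recover by roughly 70 percent; that is the mechanism the excerpt describes.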

08 Jun 18:01

Russell Brand & Jordan B Peterson - Under the Skin #52

by Jordan B Peterson

Russell Brand talked with me in London. This was our second conversation (the first was in LA a few months ago: https://www.youtube.com/watch?v=kL61yQgdWeM). I thought we had a good discussion then, but I think this one was better. We met in London about two weeks ago (in mid May 2018). I was completely worn out by the end :)

Russell Brand - Recovery: Freedom From Our Addictions [Book & Audiobook]

UK: https://amzn.to/2IWf00A
US: http://tinyurl.com/ydcwz3kd
Australia: https://t.co/Ri1XSonD2X

Russell's YouTube Channel: http://tinyurl.com/opragcg

Russell's Under The Skin Podcast: https://www.russellbrand.com/podcasts

Production Credits:
Produced & edited by Jenny May Finn / @JennyMayFinn
Sound Engineer: Oliver Cadman

Additional relevant links:

My new book: 12 Rules for Life: An Antidote to Chaos: https://jordanbpeterson.com/12-rules-for-life/

12 Rules for Life Tour: Dates, Cities and Venues: https://jordanbpeterson.com/events/

My first book: Maps of Meaning: The Architecture of Belief: https://jordanbpeterson.com/maps-of-meaning/

Dr Jordan B Peterson Website: http://jordanbpeterson.com/

Self Authoring Suite: http://selfauthoring.com/
Understand Myself personality test: http://understandmyself.com/

Blog: https://jordanbpeterson.com/blog/
Podcast: https://jordanbpeterson.com/jordan-b-peterson-podcast/
Reading List: https://jordanbpeterson.com/2017/10/great-books/
Twitter: https://twitter.com/jordanbpeterson

Patreon: https://www.patreon.com/jordanbpeterson
02 Jun 11:25

The Inevitable Lifecycle of Government Regulation Benefiting the Very Companies Whose Actions Triggered It

by admin

Step 1:  Large, high-profile company has business practice that ticks lots of people off -- e.g. Facebook slammed for letting Cambridge Analytica harvest user data

Step 2:  Regulation results -- e.g. European GDPR (though it predates the most recent Facebook snafu, it was triggered by similar outrages in the past that we have forgotten by now, so I use the more recent example)

Step 3:  Large, high-profile companies that triggered the regulation by their actions in the first place are the major beneficiaries (because they have the scale and power to comply the easiest).

GDPR, the European Union’s new privacy law, is drawing advertising money toward Google’s online-ad services and away from competitors that are straining to show they’re complying with the sweeping regulation.

The reason: the Alphabet Inc. ad giant is gathering individuals’ consent for targeted advertising at far higher rates than many competing online-ad services, early data show. That means the new law, the General Data Protection Regulation, is reinforcing—at least initially—the strength of the biggest online-ad players, led by Google and Facebook Inc.

This is utterly predictable, so much so that many folks were predicting exactly this outcome months ago.

My "favorite" example of this phenomenon is toy regulation that was triggered a decade ago by a massive scandal and subsequent recall by toy giant Mattel of toys with lead paint sourced from China.

Remember the sloppily written "for the children" toy testing law that went into effect last year? The Consumer Product Safety Improvement Act (CPSIA) requires third-party testing of nearly every object intended for a child's use, and was passed in response to several toy recalls in 2007 for lead and other chemicals. Six of those recalls were on toys made by Mattel, or its subsidiary Fisher Price.

Small toymakers were blindsided by the expensive requirement, which made no exception for small domestic companies working with materials that posed no threat. Makers of books, jewelry, and clothes for kids were also caught in the net. Enforcement of the law was delayed by a year—that grace period ended last week—and many particular exceptions have been carved out, but despite an outcry, there has been no wholesale re-evaluation of the law. One might think that large toy manufacturers would have made common cause with the little guys begging for mercy. After all, Mattel also stood to gain if the law was repealed, right?

Turns out, when Mattel got lemons, it decided to make lead-tainted lemonade (leadonade?). As luck would have it, Mattel already operates several of its own toy testing labs, including those in Mexico, China, Malaysia, Indonesia and California.

So while most small toymakers had no idea this law was coming down the pike until it was too late, Mattel spent $1 million lobbying for a little provision to be included in the CPSIA permitting companies to test their own toys in "firewalled" labs that have won Consumer Product Safety Commission approval.

The million bucks was well spent, as Mattel gained approval late last week to test its own toys in the sites listed above—just as the window for delayed enforcement closed.

Instead of winding up hurting, Mattel now has a cost advantage on mandatory testing, and a handy new government-sponsored barrier to entry for its competitors.

31 May 11:18

How Scrum disempowers developers (and destroys agile)

Last time, we looked at how Scrum has become the de facto definition of Agile, and how it is uniquely lacking in technical practices compared to the other methodologies that were involved in creating the Agile manifesto.

This lack of good technical practices and craft causes some obvious deficiencies, but first I want to look at some of the second-order effects. Starting where we left off last time, one of the problems with Scrum is that the product has an owner, the Scrum process itself has a master, but no-one is empowered to advocate for development priorities. (This is part two of my "How Scrum destroyed Agile" series; part one is here.)

The Product Owner in a Scrum team is typically someone from the business or product side of the organisation. They are not there to advocate for technical priorities; and indeed, this is fair enough, because someone needs to advocate for business and customer value. Scrum intends them to be “responsible” and “accountable” for the product backlog, and therefore to work with others to create backlog items, but in practice, they are typically the sole person creating the backlog. So they are setting the direction of development, but without any thought as to the practicalities of the development process.

Then we have the Scrum master. The ideal Scrum master would be someone with technical and product experience, respected in the company as a leader, willing to work as a servant, and outside of the technical or product hierarchy. Unfortunately, the Scrum master is typically someone with limited experience and two days of training, who is not respected as a leader but only considered to be a servant, and who often sits in the product part of the organisation (as that is where project management has sat previously). Often they are even managed by the product owner, or share a boss with the product owner.

Finally we have this self-organised, cross-functional, non-hierarchical, essentially amorphous development team. They are meant to be self-organising, with no one telling them how to build the product backlog. However, they have limited or no say over what is at the top of the product backlog, pressure to deliver something sprint-after-sprint, and no-one with the authority to balance the product owner and advocate for developing with higher quality.

In fact, the whole way Scrum views the development team is completely flawed given how real organisations operate. Quoting from the Scrum guide again:

  • They are self-organizing. No one (not even the Scrum Master) tells the Development Team how to turn Product Backlog into Increments of potentially releasable functionality;
  • Development Teams are cross-functional, with all the skills as a team necessary to create a product Increment;
  • Scrum recognizes no titles for Development Team members, regardless of the work being performed by the person;
  • Scrum recognizes no sub-teams in the Development Team, regardless of domains that need to be addressed like testing, architecture, operations, or business analysis; and,
  • Individual Development Team members may have specialized skills and areas of focus, but accountability belongs to the Development Team as a whole.

The Development Team, The Scrum Guide

A note on hierarchy

Disgruntled chorus from the peanut gallery: But Scrum (and agile) is meant to be all non-hierarchical, man.

Sure, it is. But we're looking at real companies, as they exist and are widely understood and managed, and not fantasy constructs.

Disgruntled chorus: But what about the companies where it works?

Well, GitHub was one example, until they realised that they weren't actually shipping things users wanted and their employees weren't actually that happy; holacracy is now out, in are managers.

Valve is the other famous poster-child of radical freedom. And just as soon as they release Half-Life Episode 3, I'll be happy to take lessons from them on getting software delivered.

The problem with this structure is how it actually interacts with the real organisation structures of a company. Firstly, a self-organising development team should include the product owner in order to find the most expedient solutions to the customers' problems; but instead the product owner often works alone and the development team simply receives a stream of backlog items that need to somehow be brought into a cohesive whole. The items promoted in the backlog, combined with the schedule pressure set by the product owner and the business, mean that the Development Team often is told how to create releases: quickly, and exactly how we said. The lack of technical craft or development control means each sprint is launched as soon as possible rather than as soon as prudent, refactoring isn't done, and technical debt builds up, eventually choking the product.

This has been exacerbated since the agile manifesto because how software is delivered has changed. Instead of yearly or quarterly releases on CD or maybe a download via dialup, we now have webapps being deployed potentially daily or hourly. These endless changes mean there is no natural cadence, so Scrum’s sprint-sprint-sprint means problems pile up-up-up.

As mentioned above, the lack of titles or specific responsibilities in the development team means no-one is empowered to advocate for development priorities, and also contradicts the typical implementation where there certainly are titles and hierarchies within teams. Similarly, not recognising sub-teams doesn’t mean they don’t exist.

Finally, accountability belongs to the development team as a whole, but how is this responsibility actually delivered? Shared responsibilities typically lead to a lack of responsibility. Pressure is put on the higher-performing members of the team to deliver more in order to compensate for weaker members and deliver for the team, but this inhibits the training and cross-functional work which is at the heart of an effective team.

Now, some of these problems can be and are ameliorated in good implementations of Scrum, through good Scrum masters and product owners working with technical leaders. But this is due to people breaking Scrum to make it work effectively, rather than Scrum working well on its own. And in order to break the rules you have to be confident in what they are, why they are there, and how they can be varied to suit the needs of a particular team. This can only happen if Scrum masters are highly qualified leaders, and yet many "Scrum masters™" have two days of training in agile leadership.

The invention of the two-day Scrum master training course is probably one of the worst things Scrum has done to agile. If you look at the responsibilities, a good Scrum master needs to be a strong technical manager with a huge grasp of organisational change, but the role is often filled by a non-technical person with limited management experience from the product side of the organisation, who cannot fulfil all of those responsibilities. And the idea that two days of training is enough to prepare someone to advocate for and carry through major organisational change is laughable. (Indeed, most decent training companies would agree with this, and have plenty more training to sell you.)

[Dilbert strip: The Boss: "I've never managed marketing people before. But a good manager can manage anything." "So...I order you to go do marketing things...like segmenting and focus groups..." "And keep focusing and segmenting until we dominate the industry!!!" Worker: "Well, I'm motivated."]

This Dilbert strip on management fads from 1994 could be applied to Scrum at any time. “A good manager can manage anything” has become “a good Scrum master can scrum anything”.

Dilbert (7th October 1994)

In conclusion, instead of building up the management experience and product understanding of the development team, Scrum hives these functions off, giving the developers much of the responsibility but little of the power. The Scrum master, who should be a technical and people leader, is sadly often a project manager with a very limited amount of training.

And what little independence the development team is meant to have is taken away by the process that is imposed. No one is meant to tell the development team how to turn Product Backlog into Increments, but apparently it has to involve something called “Jira”. Next time we’ll look at how Scrum’s inflexible systems and immutable rules stand opposed to valuing "Individuals and interactions over processes and tools".

Agree? Disagree? Comment on Hacker News or Twitter.


14 May 18:29

Critical PGP Vulnerability

by Bruce Schneier

EFF is reporting that a critical vulnerability has been discovered in PGP and S/MIME. No details have been published yet, but one of the researchers wrote:

We'll publish critical vulnerabilities in PGP/GPG and S/MIME email encryption on 2018-05-15 07:00 UTC. They might reveal the plaintext of encrypted emails, including encrypted emails sent in the past. There are currently no reliable fixes for the vulnerability. If you use PGP/GPG or S/MIME for very sensitive communication, you should disable it in your email client for now.

This sounds like a protocol vulnerability, but we'll learn more tomorrow.

News articles.

03 May 08:24

DB warns of US debt crisis.

by noreply@blogger.com (John H. Cochrane)
"A coming debt crisis in the US?" warns a Deutsche Bank report* by Quinn Brody and Torsten Slok.

[Chart omitted. Source: DB]
This graph is gorgeous. US deficits have, historically, been driven overwhelmingly by the state of the business cycle, and have very little to do with the tax policies and spending decisions that dominate press coverage. In booms, income rises, so tax rate times income rises. In busts, the opposite happens, plus "automatic stabilizer" spending kicks in.

Until now.

There is a good reason past deficits did not really spook markets. They understood the deficit was a temporary phenomenon, due to temporary poor demand-side economic performance. We do not have that excuse now.
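A toy sketch of that cyclical mechanism, with made-up numbers that do not come from the DB report or the chart; it only illustrates how revenue tracks income while stabilizer spending moves the other way.

# Toy sketch of cyclical deficits: revenue = tax rate times income, and
# "automatic stabilizer" spending kicks in during busts. All numbers are hypothetical.
TAX_RATE = 0.25
BASE_SPENDING = 24.0  # hypothetical, arbitrary units

def deficit(income, stabilizer_spending):
    revenue = TAX_RATE * income
    return (BASE_SPENDING + stabilizer_spending) - revenue

print("boom:", deficit(income=110, stabilizer_spending=0.0))  # -3.5, i.e. a small surplus
print("bust:", deficit(income=90, stabilizer_spending=3.0))   #  4.5, the deficit opens up from both sides

Until recently, swings like these in income explained most of the movement in the deficit; the current widening is happening without a bust to blame, which is the point of the chart.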

In case you thought this was some alarmist crank sheet, the report starts by quoting the latest CBO report:

the CBO argues that, assuming current policies and trends are not changed, “the likelihood of a fiscal crisis in the United States would increase. There would be a greater risk that investors would become unwilling to finance the government’s borrowing unless they were compensated with very high interest rates.”


Other countries running big debts and deficits, like Japan and currently China, also are running big trade surpluses. That means they are, as countries, accumulating foreign assets. The US by contrast is accumulating foreign debts. DB dares to ask the question:
Historically, twin deficits have been considered a source of macroeconomic risk, including downward pressure on the exchange rate and upward pressure on interest rates. Over the last several decades, many emerging market countries have experienced severe crises and recessions when their external financing became stressed or reversed (Mexico 1994, Thailand 1997, Argentina 2002, etc.). Given these experiences, it is relevant to ask if the US could also have such an EM-style debt crisis.
It's not as bad as it looks. The US is essentially the world's biggest hedge fund, borrowing abroad to invest in risky projects abroad, and we earn the premium on doing it. But overall, we are still borrowing to finance a trade deficit.

Like me, the DB report still sees a US debt crisis as a fairly remote possibility. Still, not all of their reassurance is reassuring if you think about it.
There are some good reasons why our model overstates the risks of an EM-style debt crisis. Most importantly, the US exclusively borrows in its own currency, while the model includes countries that have been exposed by borrowing abroad;
Borrowing in your own currency only means that our government can substitute inflation and devaluation for explicit default, if it refuses to fix its finances.
the US has scope to raise additional revenues (its overall tax rate of 26% of GDP in 2016 is below the OECD average of 34%);
That number looks suspiciously low -- I don't think it includes all federal, state, and local taxes. At all levels we're spending north of 40% of GDP. And raising a lot more revenue would mean middle-class taxes like a VAT. Finally, debt crises are choices, and the main issue is really whether our government will raise nearly 10% of GDP in taxes to fund entitlements, reform the entitlements, or let the country drift to crisis.
the US dollar is the de facto global reserve currency.
This last point is significant. Figure 12 shows that almost two thirds of global official reserve assets are held in US dollars. One out of every four dollars lent to the US Treasury comes from the foreign official sector. These institutions need a safe, deep, and liquid place to park their reserves. 
That our debt is currently held as reserves by foreign official sectors with the above-stated need should not be quite so reassuring. It is a source of one-time demand for our debt, not for eternal expansion of that debt. Those are also "hot money" investors. A demand for safety can evaporate pretty quickly if everyone starts to worry about a dollar crash.
The appeal of Treasuries is further boosted by the US’s military strength, the nation’s cultural appeal, and strong domestic institutions.
I'm delighted that anyone feels that way about the US right now, especially the latter. Doubts may already be starting...
...Treasuries tend to rally in episodes of market stress, even when US economic growth slowed sharply in 2008 or when China devalued its currency and signaled potential selling of its Treasury holdings in 2015. This is not happening today, which is why investors need to pay attention to whether an EM-style debt crisis is about to play out.
DB also cites a nicely fiscal-theoretic prior analysis that the 70s inflation was led by fiscal, not just monetary, troubles:
As we wrote earlier this year (see: 2018-02-22 US Economic Perspectives), a similar pro-cyclical fiscal policy was deployed in the 1960s and resulted in higher inflation. The magnitude of the divergence is set to be even more severe in the current episode.
The report concludes with a number of technical indications that demand is softening for U.S. Treasuries, just as we are starting to issue a boatload of them. Short duration, meaning a huge amount must be rolled over; softening foreign purchases; expectations of more devaluation, meaning our apparently high yields aren't so high; and declining bid-to-cover ratios.

My candidate for best figure caption ever: [figure omitted]


Like DB, I agree it's not imminent. It will need a precipitating event like a recession, war, or crisis. Except that by the time it is imminent, it will already have happened.

The conclusion is sensible:
The world needs safe, liquid assets. Historically, this need has been filled by Treasuries -- and it still is. Demand has thus far been inelastic [sic] despite the increase in supply (Figure 19). Treasuries have rallied for 30 years, rates continue to slide lower, and the stock of debt continues to expand. Eventually, however, this will become unsustainable. We cannot say exactly what level of debt (85% of GDP? 100%? 125%?) will prove to be the tipping point, but we do believe that the latest fiscal developments have increased the odds of a crisis. Investors should continue to monitor Treasury auction developments and will remain alert to any indications of softening demand.

([sic] because selling a lot at a constant price is elastic, not inelastic.)
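For reference, the textbook definition behind that [sic] (standard price theory, not anything from the DB report):

\varepsilon_d = \frac{\%\,\Delta Q_d}{\%\,\Delta P}, \qquad
\text{perfectly inelastic: } \varepsilon_d = 0 \ \text{(quantity does not respond to price at all)}, \qquad
\text{perfectly elastic: } |\varepsilon_d| \to \infty \ \text{(any quantity absorbed at essentially the same price)}.

Absorbing an ever-growing stock of Treasuries at prices that have, if anything, risen is the elastic case, which is the point of the [sic].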

(*Alas, the report, of a type previously public, is only available to DB customers. Hilariously, this secrecy is, according to DB, mandated by the European MiFID II regulation, which is supposed to "increase transparency.")

Update: Daniel Nevin's chart [image omitted]


15 Apr 15:00

The Case Against Education

by TheZvi

Previously: Something Was Wrong, Book Review: The Elephant in the Brain

Previously (Compass Rose): The Order of the Soul

Epistemic Status: No, seriously. Also literally.

They sentenced me to twenty years of boredom

for trying to change the system from within

I’m coming now I’m coming to reward them

First we take Manhattan, then we take Berlin

— Leonard Cohen, First We Take Manhattan

This was originally going to be my review of Bryan Caplan’s excellent new book, The Case Against Education. I was going to go over lots of interesting points where our ways of thinking differ. Instead, the introduction got a little sidetracked, so that worthy post will have to wait a bit.

First, we have the case against education.

As in: I See No Education Here.

I

What is school?

Eliezer Yudkowsky knows, but is soft-pedaling (from Inadequate Equilibria):

To paraphrase a commenter on Slate Star Codex: suppose that there’s a magical tower that only people with IQs of at least 100 and some amount of conscientiousness can enter, and this magical tower slices four years off your lifespan. The natural next thing that happens is that employers start to prefer prospective employees who have proved they can enter the tower, and employers offer these employees higher salaries, or even make entering the tower a condition of being employed at all.

Anyway: the natural next thing that happens is that employers start to demand that prospective employees show a certificate saying that they’ve been inside the tower. This makes everyone want to go to the tower, which enables somebody to set up a fence around the tower and charge hundreds of thousands of dollars to let people in.

Rick (of Rick and Morty) knows: [embedded clip omitted]

Nassim Taleb knows (quote is from Skin in the Game):

The curse of modernity is that we are increasingly populated by a class of people who are better at explaining than understanding, or better at explaining than doing. So learning isn’t quite what we teach inmates inside the high-security prisons called schools.

Taleb considers this fact – that school is a prison – so obvious he tosses it out as an off-hand remark with no explanation. He’s right. If you’re looking at a classroom, you too know. Something Was Wrong. This isn’t a playground designed to teach useful knowledge and inspire creativity. It is a prison where we learn to guess the teacher’s password and destroy creativity.

Robin Hanson knows: School is to submit. Signal submission. Submit to a life of signaling, obeying, being conscientious and conformist.

This cancer has taken our childhoods entirely. Often the rest of our lives as well. It replaces our hopes and dreams with hopes of survival via official approval and dreams of showing up naked to algebra class. Enough school so cripples your life, between losing time and being saddled with debt, that it severely damages your ability to have children. To get our children into slightly less dystopian prisons, we bid up adjacent housing and hire coaches and tutors to fill our kids’ every hour with the explicit aim of better test and admission results rather than knowledge. Then college shows up and takes everything we have left and more, with a 100% marginal tax rate.

School takes more than all of our money.

In exchange we learn little that we retain. Little of that is useful. Most of the useful stuff – writing, reading, basic math – we would have learned anyway.

In grade school I would often fake illness to get a day of solitary confinement in my room, where I could read books and listen to public radio. Also known as getting an education. I learned far more on those days.

In high school, I went to the hardest-to-get-into school in New York City. I had a great ‘zero’ period when I would do math competitions because I enjoyed them, and a great after-school because I’d run off and play games. In between was torture. Literal clock watching. I spent history class correcting the teachers. I tried to take advanced placement classes, and they wouldn’t let me because my grades in boring classes weren’t high enough. So I learned I could take the AP tests anyway, which I did.

I actually entitled my big English class project “get me out of here” and no one batted an eye. 

For college, I majored in mathematics (STEM!) at a well-respected institution. I work with numbers constantly. I have never, not once, used any of that math for any purpose.

I was intentionally taught to write badly and read badly. I learned non-awful writing by writing online. “Appreciation” classes turned me off music, art and literature. If you compared what I got out of one statistics course (in which I mostly learned from studying a textbook) to what I learned from the rest of my college classes combined, and asked which has proven more valuable, I’m not sure which side wins.

I took one graduate math class, in analysis. I remember three things. One is that they asked us to note on our final exams if we were undergraduates, so they could pass us. The second is that the class consisted, entirely and literally, of a man with a thick, semi-understandable Russian accent copying his notes onto the board, while saying the thing he was copying onto the board.

The third thing is that it was the most valuable class I ever took, because it saved me from graduate school. Thanks, professor!

II

Is our children learning?

Bryan has the data. Ignore Bryan’s data for now. Read and actually pay attention to Scott Alexander’s recent two posts on the DC public school system.

Instead of asking Scott’s question – why are DC’s graduation rates so low? – ask the question what the hell are these things called ‘high schools’ and what are we doing to the children we put inside them?

I know what we’re not doing. Teaching them to read, write or do arithmetic. That’s clear.

Instead? Fraud. We pretend to teach, they pretend to learn. Or rather, we tried that, and they couldn’t even pretend to learn, so we resorted to massive fraud and plain old not even testing the kids at all. We pretend to teach, and we pretend they pretended to learn.

We can’t even do massive fraud and really low standards right. Massive quantities of students fail anyway, barred from earning a living. Nice system.

Pretending the kids pretended to learn doesn’t work. Why? School isn’t about learning. It’s a prison. The ‘test’ is to be in your appointed cell at the appointed time, every time. Because it’s a prison. We don’t care if the kids can read, write or add. We care if they get credit for time served.

Bizzolt writes:

DC Public Schools HS teacher here (although I’m not returning next year, as is the case with many of my colleagues). As noted, one of the biggest factors in the graduation rates is the unexcused absences–if you look at the results of our external audit and investigation here, you see that for many schools, a significant number of our seniors “Passed Despite Excessive Absences in Regular Instruction Courses Required for Graduation”–over 40% of 2017 graduates at my high school, for example.

So the attendance policy is being strictly enforced now, and you can see how from that alone, a ~30% drop in expected graduates is possible. Some more details about strictly enforcing the attendance policy though:

1: DCPS has what’s called the ‘80 20’ rule: A student that is absent for at least 20% of their classes is considered absent for the whole day.
2: Most schools have 5 periods, so an absence in one class would be considered an absence for the whole day.
3: If you have 10 or more unexcused absences in a class, you automatically get an F for the term.
4: If you are over 15 minutes late for a class, that is considered an unexcused absence.
5: A majority of these absences are in first period.
6: A majority of students in my school and many others live in single parent households.
7: These students are typically responsible for making sure their younger siblings get to school, if they have any.
8: Elementary and middle schools in my neighborhood start at the exact same time as high school.
9: Their doors do not open until 5 to 10 minutes before the starting bell, presumably for safety reasons.
10: Refer to point 4.

There’s many other problems at DCPS to be sure, but this set of circumstances alone is causing the largest increase in failing grades and graduation ineligibility at my high school, and basically every other 90+% black school in the district. You could see how this accounts for quite a bit of the difference between white and black graduation rates as well. There’s a reason why across the board, DCPS schools were not strictly enforcing this policy in previous years.

Fifteen minutes late to an unnaturally early class so you could take a sibling to their unnaturally early class? You missed the whole day. Do that ten times in a term? We ruin your life. For want of two and a half hours.
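As a sanity check on how those rules compound, here is a minimal sketch using the thresholds from Bizzolt's list above; the function names and the simplified per-term bookkeeping are illustrative, not anything from DCPS.

# Minimal sketch of the attendance cascade described above. Thresholds come from
# the list (over 15 minutes late -> unexcused absence; missing 20% of classes ->
# absent for the whole day; 10+ unexcused absences in a class -> automatic F).
PERIODS_PER_DAY = 5
LATE_CUTOFF_MINUTES = 15
AUTO_FAIL_ABSENCES = 10

def late_counts_as_absence(minutes_late):
    # Over 15 minutes late to a class is recorded as an unexcused absence.
    return minutes_late > LATE_CUTOFF_MINUTES

def absent_for_whole_day(classes_missed):
    # The '80 20' rule: missing at least 20% of the day's classes (one of five)
    # marks the student absent for the whole day.
    return classes_missed / PERIODS_PER_DAY >= 0.20

def fails_course(unexcused_absences):
    # Ten or more unexcused absences in a single class is an automatic F for the term.
    return unexcused_absences >= AUTO_FAIL_ABSENCES

# A student who is 16 minutes late to first period on ten mornings in a term:
absences = sum(late_counts_as_absence(16) for _ in range(10))
print(absent_for_whole_day(classes_missed=1))  # True: each such morning counts as a full-day absence
print(fails_course(absences))                  # True: ten of them is an automatic F

Ten mornings just over the fifteen-minute cutoff is roughly two and a half hours of lateness in total, which is the "for want of two and a half hours" above.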

I have no idea how one can see this, and present a human capital model of school with a straight face.

The signaling model is optimistic. It thinks students signal to employers, rather than politicians and administrators signaling to and stealing from voters.

III

Bryan Caplan’s economist hat is permanently glued to his forehead. So he sees school not as a genocidal dystopian soul-crushing nightmare of universal incarceration, but merely as a colossal waste of time and money. He looks at the economic costs and benefits, compares signaling explanations to human capital ones, and calculates when and for whom school is worthwhile. Worthwhile for which individuals, for their private benefit? Worthwhile to what extent for society, as a public good?

To read The Case Against Education is to watch Bryan think. Bryan goes argument by argument, consideration by consideration, to consider the true costs and benefits of formal education.

At each step, you see the questions he asks, the way he sets up the problems, examines data, considers hypotheses, and reaches conclusions. He acts like someone trying to discover how things work, sorting through what he knows and considering what the world would look like if it worked in different ways. You get a book about education, but you also get an education, where it counts – the question of how to think.

Bryan lives the virtue of local validity. This is super important; when Eliezer Yudkowsky calls it the key to sanity and civilization, he’s not kidding.

Because we get to watch Bryan think, we get tons of places where he and I think very differently. Many of them are worth examining in detail. There’s a lot of data that’s difficult to interpret, and questions without clear answers. Often Bryan is extremely generous to education’s case, and shows even generous assumptions are insufficient. Other times, Bryan’s logic leads him to be overly harsh. I got the distinct sense that Bryan would have been very happy to have been proven wrong. We get a consideration of education, its pros and its cons, as Bryan sees them – an explorer, rather than an advocate.

Overall, what does Bryan find? Time and again, Bryan finds that the signaling model of education fits the facts, and the human capital argument does not fit the facts. His arguments are convincing.

Bryan concludes that if you take what you’ve read and experienced and shut up and multiply, no matter how generous you are to school’s cause, you will find that social returns to schooling are remarkably terrible.

That’s most of the human capital you get from school anyway: Reading, writing, basic math and shutting up. You get selfish returns to school by signaling conformity, conscientiousness and intelligence. To not follow the standard procedure for signaling conformity and conscientiousness is to signal their opposites, so we’re caught in an increasingly expensive signaling trap we can’t escape.

Bryan then bites quite the bullet:

Most critics of our education system complain we aren’t spending our money in the right way, or that preachers in teachers’ clothing are leading our nation’s children down dark paths. While I semi-sympathize, these critics miss what I see as our educational system’s supreme defect: there’s way too much education.

He means there’s way too much formal education. I don’t think Bryan thinks people spend too little time learning about the world or acquiring skills! He thinks they do so via other, far superior paths, where they remember what they learn and what they learn is valuable.

People don’t know things. People need skills. It’s a problem. School doesn’t solve the problem, it exacerbates it.

Bryan’s proposed remedy is the separation of school and state. At times he flirts with going farther, and taxing school, but recoils. We don’t really want to discourage school the way we discourage, say, income. Do we?

11 Mar 10:26

Sam Harris and Eliezer Yudkowsky on “AI: Racing Toward the Brink”

by Rob Bensinger
Waking Up with Sam Harris

MIRI senior researcher Eliezer Yudkowsky was recently invited to be a guest on Sam Harris’ “Waking Up” podcast. Sam is a neuroscientist and popular author who writes on topics related to philosophy, religion, and public discourse.

The following is a complete transcript of Sam and Eliezer’s conversation, AI: Racing Toward the Brink.


1. Intelligence and generality (0:05:26)


Sam Harris: I am here with Eliezer Yudkowsky. Eliezer, thanks for coming on the podcast.

Eliezer Yudkowsky: You’re quite welcome. It’s an honor to be here.

Sam: You have been a much requested guest over the years. You have quite the cult following, for obvious reasons. For those who are not familiar with your work, they will understand the reasons once we get into talking about things. But you’ve also been very present online as a blogger. I don’t know if you’re still blogging a lot, but let’s just summarize your background for a bit and then tell people what you have been doing intellectually for the last twenty years or so.

Eliezer: I would describe myself as a decision theorist. A lot of other people would say that I’m in artificial intelligence, and in particular in the theory of how to make sufficiently advanced artificial intelligences that do a particular thing and don’t destroy the world as a side-effect. I would call that “AI alignment,” following Stuart Russell.

Other people would call that “AI control,” or “AI safety,” or “AI risk,” none of which are terms that I really like.

I also have an important sideline in the art of human rationality: the way of achieving the map that reflects the territory and figuring out how to navigate reality to where you want it to go, from a probability theory / decision theory / cognitive biases perspective. I wrote two or three years of blog posts, one a day, on that, and it was collected into a book called Rationality: From AI to Zombies.

Sam: Which I’ve read, and which is really worth reading. You have a very clear and aphoristic way of writing; it’s really quite wonderful. I highly recommend that book.

Eliezer: Thank you, thank you.

Sam: Your background is unconventional. For instance, you did not go to high school, correct? Let alone college or graduate school. Summarize that for us.

Eliezer: The system didn’t fit me that well, and I’m good at self-teaching. I guess when I started out I thought I was going to go into something like evolutionary psychology or possibly neuroscience, and then I discovered probability theory, statistics, decision theory, and came to specialize in that more and more over the years.

Sam: How did you not wind up going to high school? What was that decision like?

Eliezer: Sort of like a mental crash around the time I hit puberty—or like a physical crash, even. I just did not have the stamina to make it through a whole day of classes at the time. (laughs) I’m not sure how well I’d do trying to go to high school now, honestly. But it was clear that I could self-teach, so that’s what I did.

Sam: And where did you grow up?

Eliezer: Chicago, Illinois.

Sam: Let’s fast forward to the center of the bull’s eye for your intellectual life here. You have a new book out, which we’ll talk about second. Your new book is Inadequate Equilibria: Where and How Civilizations Get Stuck. Unfortunately, I’ve only read half of that, which I’m also enjoying. I’ve certainly read enough to start a conversation on that. But we should start with artificial intelligence, because it’s a topic that I’ve touched a bunch on in the podcast which you have strong opinions about, and it’s really how we came together. You and I first met at that conference in Puerto Rico, which was the first of these AI safety / alignment discussions that I was aware of. I’m sure there have been others, but that was a pretty interesting gathering.

So let’s talk about AI and the possible problem with where we’re headed, and the near-term problem that many people in the field and at the periphery of the field don’t seem to take the problem (as we conceive it) seriously. Let’s just start with the basic picture and define some terms. I suppose we should define “intelligence” first, and then jump into the differences between strong and weak or general versus narrow AI. Do you want to start us off on that?

Eliezer: Sure. Preamble disclaimer, though: In the field in general, not everyone you ask would give you the same definition of intelligence. A lot of times in cases like those it’s good to sort of go back to observational basics. We know that in a certain way, human beings seem a lot more competent than chimpanzees, which seems to be a similar dimension to the one where chimpanzees are more competent than mice, or that mice are more competent than spiders. People have tried various theories about what this dimension is, they’ve tried various definitions of it. But if you went back a few centuries and asked somebody to define “fire,” the less wise ones would say: “Ah, fire is the release of phlogiston. Fire is one of the four elements.” And the truly wise ones would say, “Well, fire is the sort of orangey bright hot stuff that comes out of wood and spreads along wood.” They would tell you what it looked like, and put that prior to their theories of what it was.

So what this mysterious thing looks like is that humans can build space shuttles and go to the Moon, and mice can’t, and we think it has something to do with our brains.

Sam: Yeah. I think we can make it more abstract than that. Tell me if you think this is not generic enough to be accepted by most people in the field: Whatever intelligence may be in specific contexts, generally speaking it’s the ability to meet goals, perhaps across a diverse range of environments. We might want to add that it’s at least implicit in the “intelligence” that interests us that it means an ability to do this flexibly, rather than by rote following the same strategy again and again blindly. Does that seem like a reasonable starting point?

Eliezer: I think that that would get fairly widespread agreement, and it matches up well with some of the things that are in AI textbooks.

If I’m allowed to take it a bit further and begin injecting my own viewpoint into it, I would refine it and say that by “achieve goals” we mean something like “squeezing the measure of possible futures higher in your preference ordering.” If we took all the possible outcomes, and we ranked them from the ones you like least to the ones you like most, then as you achieve your goals, you’re sort of squeezing the outcomes higher in your preference ordering. You’re narrowing down what the outcome would be to be something more like what you want, even though you might not be able to narrow it down very exactly.

Flexibility. Generality. Humans are much more domain-general than mice. Bees build hives; beavers build dams; a human will look over both of them and envision a honeycomb-structured dam. We are able to operate even on the Moon, which is very unlike the environment where we evolved.

In fact, our only competitor in terms of general optimization—where “optimization” is that sort of narrowing of the future that I talked about—is natural selection. Natural selection built beavers. It built bees. It sort of implicitly built the spider’s web, in the course of building spiders.

We as humans have this similar very broad range to handle this huge variety of problems. And the key to that is our ability to learn things that natural selection did not preprogram us with; so learning is the key to generality. (I expect that not many people in AI would disagree with that part either.)
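A toy way to make "squeezing the measure of possible futures higher in your preference ordering" concrete, loosely in the spirit of Yudkowsky's earlier "measuring optimization power" framing rather than anything stated in the episode: rank the possible outcomes, see where the achieved outcome lands, and ask how small a slice of the ranking is at least that good.

import math

# Toy illustration (not from the podcast): an optimizer "squeezes" outcomes toward
# the top of a preference ordering. One crude measure of how hard it squeezes is how
# small a fraction of possible outcomes is at least as preferred as the one achieved.
outcomes = list(range(1000))  # hypothetical futures, ranked worst (0) to best (999)

def optimization_power_bits(achieved_rank, total):
    at_least_as_good = total - achieved_rank     # outcomes ranked as high or higher
    return -math.log2(at_least_as_good / total)  # a smaller slice means more bits

print(optimization_power_bits(achieved_rank=500, total=len(outcomes)))  # 1.0 bit: landed in the top half
print(optimization_power_bits(achieved_rank=999, total=len(outcomes)))  # ~10 bits: the single best outcome

A stronger optimizer reliably lands in a smaller, higher slice of the ordering, which is the "narrowing down what the outcome would be" described above.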

Sam: Right. So it seems that goal-directed behavior is implicit (or even explicit) in this definition of intelligence. And so whatever intelligence is, it is inseparable from the kinds of behavior in the world that result in the fulfillment of goals. So we’re talking about agents that can do things; and once you see that, then it becomes pretty clear that if we build systems that harbor primary goals—you know, there are cartoon examples here like making paperclips—these are not systems that will spontaneously decide that they could be doing more enlightened things than (say) making paperclips.

This moves to the question of how deeply unfamiliar artificial intelligence might be, because there are no natural goals that will arrive in these systems apart from the ones we put in there. And we have common-sense intuitions that make it very difficult for us to think about how strange an artificial intelligence could be. Even one that becomes more and more competent to meet its goals.

Let’s talk about the frontiers of strangeness in AI as we move from here. Again, though, I think we have a couple more definitions we should probably put in play here, differentiating strong and weak or general and narrow intelligence.

Eliezer: Well, to differentiate “general” and “narrow” I would say that this is on the one hand theoretically a spectrum, and on the other hand, there seems to have been a very sharp jump in generality between chimpanzees and humans.

So, breadth of domain driven by breadth of learning—DeepMind, for example, recently built AlphaGo, and I lost some money betting that AlphaGo would not defeat the human champion, which it promptly did. Then a successor to that was AlphaZero. AlphaGo was specialized on Go; it could learn to play Go better than its starting point for playing Go, but it couldn’t learn to do anything else. Then they simplified the architecture for AlphaGo. They figured out ways to do all the things it was doing in more and more general ways. They discarded the opening book—all the human experience of Go that was built into it. They were able to discard all of these programmatic special features that detected features of the Go board. They figured out how to do that in simpler ways, and because they figured out how to do it in simpler ways, they were able to generalize to AlphaZero, which learned how to play chess using the same architecture. They took a single AI and got it to learn Go, and then reran it and made it learn chess. Now that’s not human general, but it’s a step forward in generality of the sort that we’re talking about.

Sam: Am I right in thinking that that’s a pretty enormous breakthrough? I mean, there’s two things here. There’s the step to that degree of generality, but there’s also the fact that they built a Go engine—I forget if it was Go or chess or both—which basically surpassed all of the specialized AIs on those games over the course of a day. Isn’t the chess engine of AlphaZero better than any dedicated chess computer ever, and didn’t it achieve that with astonishing speed?

Eliezer: Well, there was actually some amount of debate afterwards whether or not the version of the chess engine that it was tested against was truly optimal. But even to the extent that it was in that narrow range of the best existing chess engines, as Max Tegmark put it, the real story wasn’t in how AlphaGo beat human Go players. It’s in how AlphaZero beat human Go system programmers and human chess system programmers. People had put years and years of effort into accreting all of the special-purpose code that would play chess well and efficiently, and then AlphaZero blew up to (and possibly past) that point in a day. And if it hasn’t already gone past it, well, it would be past it by now if DeepMind kept working it. Although they’ve now basically declared victory and shut down that project, as I understand it.

Sam: So talk about the distinction between general and narrow intelligence a little bit more. We have this feature of our minds, most conspicuously, where we’re general problem-solvers. We can learn new things and our learning in one area doesn’t require a fundamental rewriting of our code. Our knowledge in one area isn’t so brittle as to be degraded by our acquiring knowledge in some new area, or at least this is not a general problem which erodes our understanding again and again. And we don’t yet have computers that can do this, but we’re seeing the signs of moving in that direction. And so it’s often imagined that there is a kind of near-term goal—which has always struck me as a mirage—of so-called “human-level” general AI.

I don’t see how that phrase will ever mean much of anything, given that all of the narrow AI we’ve built thus far is superhuman within the domain of its applications. The calculator in my phone is superhuman for arithmetic. Any general AI that also has my phone’s ability to calculate will be superhuman for arithmetic. But we must presume it will be superhuman for all of the dozens or hundreds of specific human talents we’ve put into it, whether it’s facial recognition or just memory, unless we decide to consciously degrade it. Access to the world’s data will be superhuman unless we isolate it from data. Do you see this notion of human-level AI as a landmark on the timeline of our development, or is it just never going to be reached?

Eliezer: I think that a lot of people in the field would agree that human-level AI, defined as “literally at the human level, neither above nor below, across a wide range of competencies,” is a straw target, is an impossible mirage. Right now it seems like AI is clearly dumber and less general than us—or rather that if we’re put into a real-world, lots-of-things-going-on context that places demands on generality, then AIs are not really in the game yet. Humans are clearly way ahead. And more controversially, I would say that we can imagine a state where the AI is clearly way ahead across every kind of cognitive competency, barring some very narrow ones that aren’t deeply influential of the others.

Like, maybe chimpanzees are better at using a stick to draw ants from an ant hive and eat them than humans are. (Though no humans have practiced that to world championship level.) But there’s a sort of general factor of, “How good are you at it when reality throws you a complicated problem?” At this, chimpanzees are clearly not better than humans. Humans are clearly better than chimps, even if you can manage to narrow down one thing the chimp is better at. The thing the chimp is better at doesn’t play a big role in our global economy. It’s not an input that feeds into lots of other things.

There are some people who say this is not possible—I think they’re wrong—but it seems to me that it is perfectly coherent to imagine an AI that is better at everything (or almost everything) than we are, such that if it was building an economy with lots of inputs, humans would have around the same level of input into that economy as the chimpanzees have into ours.

Sam: Yeah. So what you’re gesturing at here is a continuum of intelligence that I think most people never think about. And because they don’t think about it, they have a default doubt that it exists. This is a point I know you’ve made in your writing, and I’m sure it’s a point that Nick Bostrom made somewhere in his book Superintelligence. It’s this idea that there’s a huge blank space on the map past the most well-advertised exemplars of human brilliance, where we don’t imagine what it would be like to be five times smarter than the smartest person we could name, and we don’t even know what that would consist in, because if chimps could be given to wonder what it would be like to be five times smarter than the smartest chimp, they’re not going to represent for themselves all of the things that we’re doing that they can’t even dimly conceive.

There’s a kind of disjunction that comes with more. There’s a phrase used in military contexts. The quote is variously attributed to Stalin and Napoleon and I think Clausewitz and like a half a dozen people who have claimed this quote. The quote is, “Sometimes quantity has a quality all its own.” As you ramp up in intelligence, whatever it is at the level of information processing, spaces of inquiry and ideation and experience begin to open up, and we can’t necessarily predict what they would be from where we sit.

How do you think about this continuum of intelligence beyond what we currently know, in light of what we’re talking about?

Eliezer: Well, the unknowable is a concept you have to be very careful with. The thing you can’t figure out in the first 30 seconds of thinking about it—sometimes you can figure it out if you think for another five minutes. So in particular I think that there’s a certain narrow kind of unpredictability which does seem to be plausibly in some sense essential, which is that for AlphaGo to play better Go than the best human Go players, it must be the case that the best human Go players cannot predict exactly where on the Go board AlphaGo will play. If they could predict exactly where AlphaGo would play, AlphaGo would be no smarter than them.

On the other hand, AlphaGo’s programmers and the people who knew what AlphaGo’s programmers were trying to do, or even just the people who watched AlphaGo play, could say, “Well, I think the system is going to play such that it will win at the end of the game.” Even if they couldn’t predict exactly where it would move on the board.

Similarly, there’s a (not short, or not necessarily slam-dunk, or not immediately obvious) chain of reasoning which says that it is okay for us to reason about aligned (or even unaligned) artificial general intelligences of sufficient power as if they’re trying to do something, but we don’t necessarily know what. From our perspective that still has consequences, even though we can’t predict in advance exactly how they’re going to do it.


2. Orthogonal capabilities and goals in AI (0:25:21)


Sam: I think we should define this notion of alignment. What do you mean by “alignment,” as in the alignment problem?

Eliezer: It’s a big problem. And it does have some moral and ethical aspects, which are not as important as the technical aspects—or pardon me, they’re not as difficult as the technical aspects. They couldn’t exactly be less important.

But broadly speaking, it’s an AI where you can say what it’s trying to do. There are narrow conceptions of alignment, where you’re trying to get it to do something like cure Alzheimer’s disease without destroying the rest of the world. And there’s much more ambitious notions of alignment, where you’re trying to get it to do the right thing and achieve a happy intergalactic civilization.

But both the narrow and the ambitious alignment have in common that you’re trying to have the AI do that thing rather than making a lot of paperclips.

Sam: Right. For those who have not followed this conversation before, we should cash out this reference to “paperclips” which I made at the opening. Does this thought experiment originate with Bostrom, or did he take it from somebody else?

Eliezer: As far as I know, it’s me.

Sam: Oh, it’s you, okay.

Eliezer: It could still be Bostrom. I asked somebody, “Do you remember who it was?” and they searched through the archives of the mailing list where this idea plausibly originated and if it originated there, then I was the first one to say “paperclips.”

Sam: All right, then by all means please summarize this thought experiment for us.

Eliezer: Well, the original thing was somebody expressing a sentiment along the lines of, “Who are we to constrain the path of things smarter than us? They will create something in the future; we don’t know what it will be, but it will be very worthwhile. We shouldn’t stand in the way of that.”

The sentiments behind this are something that I have a great deal of sympathy for. I think the model of the world is wrong. I think they’re factually wrong about what happens when you take a random AI and make it much bigger.

In particular, I said, “The thing I’m worried about is that it’s going to end up with a randomly rolled utility function whose maximum happens to be a particular kind of tiny molecular shape that looks like a paperclip.” And that was the original paperclip maximizer scenario.

It got a little bit distorted in being whispered on, into the notion of: “Somebody builds a paperclip factory and the AI in charge of the paperclip factory takes over the universe and turns it all into paperclips.” There was a lovely online game about it, even. But this still sort of cuts against a couple of key points.

One is: the problem isn’t that paperclip factory AIs spontaneously wake up. Wherever the first artificial general intelligence is from, it’s going to be in a research lab specifically dedicated to doing it, for the same reason that the first airplane didn’t spontaneously assemble in a junk heap.

And the people who are doing this are not dumb enough to tell their AI to make paperclips, or make money, or end all war. These are Hollywood movie plots that the script writers do because they need a story conflict and the story conflict requires that somebody be stupid. The people at Google are not dumb enough to build an AI and tell it to make paperclips.

The problem I’m worried about is that it’s technically difficult to get the AI to have a particular goal set and keep that goal set and implement that goal set in the real world, and so what it does instead is something random—for example, making paperclips. Where “paperclips” are meant to stand in for “something that is worthless even from a very cosmopolitan perspective.” Even if we’re trying to take a very embracing view of the nice possibilities and accept that there may be things that we wouldn’t even understand, that if we did understand them we would comprehend to be of very high value, paperclips are not one of those things. No matter how long you stare at a paperclip, it still seems pretty pointless from our perspective. So that is the concern about the future being ruined, the future being lost. The future being turned into paperclips.

Sam: One thing this thought experiment does: it also cuts against the assumption that a sufficiently intelligent system, a system that is more competent than we are in some general sense, would by definition only form goals, or only be driven by a utility function, that we would recognize as being ethical, or wise, and would by definition be aligned with our better interest. That we’re not going to build something that is superhuman in competence that could be moving along some path that’s as incompatible with our wellbeing as turning every spare atom on Earth into a paperclip.

But you don’t get our common sense unless you program it into the machine, and you don’t get a guarantee of perfect alignment or perfect corrigibility (the ability for us to be able to say, “Well, that’s not what we meant, come back”) unless that is successfully built into the machine. So this alignment problem is—the general concern is that even with the seemingly best goals put in, we could build something (especially in the case of something that makes changes to itself—and we’ll talk about this, the idea that these systems could become self-improving) whose future behavior in the service of specific goals isn’t totally predictable by us. If we gave it the goal to cure Alzheimer’s, there are many things that are incompatible with it fulfilling that goal, and one of those things is our turning it off. We have to have a machine that will let us turn it off even though its primary goal is to cure Alzheimer’s.

I know I interrupted you before. You wanted to give an example of the alignment problem—but did I just say anything that you don’t agree with, or are we still on the same map?

Eliezer: We’re still on the same map. I agree with most of it. I would of course have this giant pack of careful definitions and explanations built on careful definitions and explanations to go through everything you just said. Possibly not for the best, but there it is.

Stuart Russell put it this way: “You can’t bring the coffee if you’re dead,” pointing out that if you have a sufficiently intelligent system whose goal is to bring you coffee, even that system has an implicit strategy of not letting you switch it off. Assuming that all you told it to do was bring the coffee.

I do think that a lot of people listening may want us to back up and talk about the question of whether you can have something that feels to them like it’s so “smart” and so “stupid” at the same time—like, is that a realizable way an intelligence can be?

Sam: Yeah. And that is one of the virtues—or one of the confusing elements, depending on where you come down on this—of this thought experiment of the paperclip maximizer.

Eliezer: Right. So, I think that there are multiple narratives about AI, and I think that the technical truth is something that doesn’t fit into any of the obvious narratives. For example, there are people who have a lot of respect for intelligence: they are happy to envision an AI that is very intelligent, and it seems intuitively obvious to them that this carries with it tremendous power. At the same time, their respect for the concept of intelligence leads them to wonder at the concept of the paperclip maximizer: “Why is this very smart thing just making paperclips?”

There’s similarly another narrative which says that AI is sort of lifeless, unreflective, just does what it’s told, and to these people it’s perfectly obvious that an AI might just go on making paperclips forever. And for them the hard part of the story to swallow is the idea that machines can get that powerful.

Sam: Those are two hugely useful categories of disparagement of your thesis here.

Eliezer: I wouldn’t say disparagement. These are just initial reactions. These are people we haven’t been talking to yet.

Sam: Right, let me reboot that. Those are two hugely useful categories of doubt with respect to your thesis here, or the concerns we’re expressing, and I just want to point out that both have been put forward on this podcast. The first was by David Deutsch, the physicist, who imagines that whatever AI we build—and he certainly thinks we will build it—will be by definition an extension of us. He thinks the best analogy is to think of our future descendants. These will be our children. The teenagers of the future may have different values than we do, but these values and their proliferation will be continuous with our values and our culture and our memes. There won’t be some radical discontinuity that we need to worry about. And so there is that one basis for lack of concern: this is an extension of ourselves and it will inherit our values, improve upon our values, and there’s really no place where things reach any kind of cliff that we need to worry about.

The other non-concern you just raised was expressed by Neil deGrasse Tyson on this podcast. He says things like, “Well, if the AI starts making too many paperclips I’ll just unplug it, or I’ll take out a shotgun and shoot it”—the idea that this thing, because we made it, could be easily switched off at any point we decide it’s not working correctly. So I think it would be very useful to get your response to both of those species of doubt about the alignment problem.

Eliezer: So, a couple of preamble remarks. One is: “by definition”? We don’t care what’s true by definition here. Or as Einstein put it: insofar as the equations of mathematics are certain, they do not refer to reality, and insofar as they refer to reality, they are not certain.

Let’s say somebody says, “Men by definition are mortal. Socrates is a man. Therefore Socrates is mortal.” Okay, suppose that Socrates actually lives for a thousand years. The person goes, “Ah! Well then, by definition Socrates is not a man!”

Similarly, you could say that “by definition” a sufficiently advanced artificial intelligence is nice. And what if it isn’t nice and we see it go off and build a Dyson sphere? “Ah! Well, then by definition it wasn’t what I meant by ‘intelligent.’” Well, okay, but it’s still over there building Dyson spheres.

The first thing I’d want to say is this is an empirical question. We have a question of what certain classes of computational systems actually do when you switch them on. It can’t be settled by definitions; it can’t be settled by how you define “intelligence.”

There could be some sort of deep a priori truth about how, if something has property A, it almost certainly has property B unless the laws of physics are being violated. But this is not something you can build into how you define your terms.

Sam: Just to do justice to David Deutsch’s doubt here, I don’t think he’s saying it’s empirically impossible that we could build a system that would destroy us. It’s just that we would have to be so stupid to take that path that we are incredibly unlikely to take that path. The superintelligent systems we will build will be built with enough background concern for their safety that there is no special concern here with respect to how they might develop.

Eliezer: The next preamble I want to give is—well, maybe this sounds a bit snooty, maybe it sounds like I’m trying to take a superior vantage point—but nonetheless, my claim is not that there is a grand narrative that makes it emotionally consonant that paperclip maximizers are a thing. I’m claiming this is true for technical reasons. Like, this is true as a matter of computer science. And the question is not which of these different narratives seems to resonate most with your soul. It’s: what’s actually going to happen? What do you think you know? How do you think you know it?

The particular position that I’m defending is one that somebody—I think Nick Bostrom—named the orthogonality thesis. And the way I would phrase it is that you can have arbitrarily powerful intelligence, with no defects of that intelligence—no defects of reflectivity, it doesn’t need an elaborate special case in the code, it doesn’t need to be put together in some very weird way—that pursues arbitrary tractable goals. Including, for example, making paperclips.

The way I would put it to somebody who’s initially coming in from the first viewpoint, the viewpoint that respects intelligence and wants to know why this intelligence would be doing something so pointless, is that the thesis, the claim I’m making, that I’m going to defend is as follows.

Imagine that somebody from another dimension—the standard philosophical troll who’s always called “Omega” in the philosophy papers—comes along and offers our civilization a million dollars worth of resources per paperclip that we manufacture. If this was the challenge that we got, we could figure out how to make a lot of paperclips. We wouldn’t forget to do things like continue to harvest food so we could go on making paperclips. We wouldn’t forget to perform scientific research, so we could discover better ways of making paperclips. We would be able to come up with genuinely effective strategies for making a whole lot of paperclips.

Or similarly, for an intergalactic civilization, if Omega comes by from another dimension and says, “I’ll give you whole universes full of resources for every paperclip you make over the next thousand years,” that intergalactic civilization could intelligently figure out how to make a whole lot of paperclips to get at those resources that Omega is offering, and they wouldn’t forget how to keep the lights turned on either. And they would also understand concepts like, “If some aliens start a war with them, you’ve got to prevent the aliens from destroying you in order to go on making the paperclips.”

So the orthogonality thesis is that an intelligence that pursues paperclips for their own sake, because that’s what its utility function is, can be just as effective, as efficient, as the whole intergalactic civilization that is being paid to make paperclips. The paperclip maximizer does not suffer any defect of reflectivity, any defect of efficiency, from needing to be put together in some weird special way in order to pursue paperclips. And that’s the thing that I think is true as a matter of computer science. Not as a matter of fitting with a particular narrative; that’s just the way the dice turn out.

Sam: Right. So what is the implication of that thesis? It’s “orthogonal” with respect to what?

Eliezer: Intelligence and goals.

Sam: Not to be pedantic here, but let’s define “orthogonal” for those for whom it’s not a familiar term.

Eliezer: The original “orthogonal” means “at right angles.” If you imagine a graph with an x axis and a y axis, if things can vary freely along the x axis and freely along the y axis at the same time, that’s orthogonal. You can move in one direction that’s at right angles to another direction without affecting where you are in the first dimension.

Sam: So generally speaking, when we say that some set of concerns is orthogonal to another, it’s just that there’s no direct implication from one to the other. Some people think that facts and values are orthogonal to one another. So we can have all the facts there are to know, but that wouldn’t tell us what is good. What is good has to be pursued in some other domain. I don’t happen to agree with that, as you know, but that’s an example.

Eliezer: I don’t technically agree with it either. What I would say is that the facts are not motivating. “You can know all there is to know about what is good, and still make paperclips,” is the way I would phrase that.

Sam: I wasn’t connecting that example to the present conversation, but yeah. So in the case of the paperclip maximizer, what is orthogonal here? Intelligence is orthogonal to anything else we might think is good, right?

Eliezer: I mean, I would potentially object a little bit to the way that Nick Bostrom took the word “orthogonality” for that thesis. I think, for example, that if you have humans and you make the human smarter, this is not orthogonal to the humans’ values. It is certainly possible to have agents such that as they get smarter, what they would report as their utility functions will change. A paperclip maximizer is not one of those agents, but humans are.

Sam: Right, but if we do continue to define intelligence as an ability to meet your goals, well, then we can be agnostic as to what those goals are. You take the most intelligent person on Earth. You could imagine his evil brother who is more intelligent still, but he just has goals that we would think are bad. He could be the most brilliant psychopath ever.

Eliezer: I think that that example might be unconvincing to somebody who’s coming in with a suspicion that intelligence and values are correlated. They would be like, “Well, has that been historically true? Is this psychopath actually suffering from some defect in his brain, where you give him a pill, fix the defect, and he’s not a psychopath anymore?” I think that this sort of imaginary example is one that they might not find fully convincing for that reason.

Sam: The truth is, I’m actually one of those people, in that I do think there are certain goals and certain things that we may become smarter and smarter with respect to, like human wellbeing. These are places where intelligence does converge with other kinds of value-laden qualities of a mind, but generally speaking, they can be kept apart for a very long time. So if you’re just talking about an ability to turn matter into useful objects or extract energy from the environment to do the same, this can be pursued with the purpose of tiling the world with paperclips, or not. And it just seems like there’s no law of nature that would prevent an intelligent system from doing that.

Eliezer: The way I would rephrase the fact/values thing is: We all know about David Hume and Hume’s Razor, the “is does not imply ought” way of looking at it. I would slightly rephrase that so as to make it more of a claim about computer science.

What Hume observed is that there are some sentences that involve an “is,” some sentences that involve “ought,” and if you start from sentences that only have “is” you can’t get sentences that involve “oughts” without an “ought” introduction rule, or assuming some other previous “ought.” Like: it’s currently cloudy outside. That’s a statement of simple fact. Does it therefore follow that I shouldn’t go for a walk? Well, only if you previously have the generalization, “When it is cloudy, you should not go for a walk.” Everything that you might use to derive an ought would be a sentence that involves words like “better” or “should” or “preferable,” and things like that. You only get oughts from other oughts. That’s the Hume version of the thesis.

The way I would say it is that there’s a separable core of “is” questions. In other words: okay, I will let you have all of your “ought” sentences, but I’m also going to carve out this whole world full of “is” sentences that only need other “is” sentences to derive them.

Sam: I don’t even know that we need to resolve this. For instance, I think the is-ought distinction is ultimately specious, and this is something that I’ve argued about when I talk about morality and values and the connection to facts. But I can still grant that it is logically possible (and I would certainly imagine physically possible) to have a system that has a utility function that is sufficiently strange that scaling up its intelligence doesn’t get you values that we would recognize as good. It certainly doesn’t guarantee values that are compatible with our wellbeing. Whether “paperclip maximizer” is too specialized a case to motivate this conversation, there’s certainly something that we could fail to put into a superhuman AI that we really would want to put in so as to make it aligned with us.

Eliezer: I mean, the way I would phrase it is that it’s not that the paperclip maximizer has a different set of oughts, but that we can see it as running entirely on “is” questions. That’s where I was going with that. There’s this sort of intuitive way of thinking about it, which is that there’s this sort of ill-understood connection between “is” and “ought” and maybe that allows a paperclip maximizer to have a different set of oughts, a different set of things that play in its mind the role that oughts play in our mind.

Sam: But then why wouldn’t you say the same thing of us? The truth is, I actually do say the same thing of us. I think we’re running on “is” questions as well. We have an “ought”-laden way of talking about certain “is” questions, and we’re so used to it that we don’t even think they are “is” questions, but I think you can do the same analysis on a human being.

Eliezer: The question “How many paperclips result if I follow this policy?” is an “is” question. The question “What is a policy such that it leads to a very large number of paperclips?” is an “is” question. These two questions together form a paperclip maximizer. You don’t need anything else. All you need is a certain kind of system that repeatedly asks the “is” question “What leads to the greatest number of paperclips?” and then does that thing. Even if the things that we think of as “ought” questions are very complicated and disguised “is” questions that are influenced by what policy results in how many people being happy and so on.
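To make the structure of that point concrete, here is a minimal sketch in Python of an agent assembled from nothing but those two “is” questions. The world model and the candidate policies below are invented placeholders rather than anything from the conversation; the only point is that a prediction function plus an argmax over policies already constitutes a maximizer.

```python
# A toy "paperclip maximizer" built from nothing but "is" questions.
# The world model and policies are made-up placeholders for illustration.

def paperclips_resulting_from(policy, world_model):
    """Is-question #1: how many paperclips result if this policy is followed?"""
    return world_model[policy]

def best_policy(policies, world_model):
    """Is-question #2: which available policy leads to the most paperclips?"""
    return max(policies, key=lambda p: paperclips_resulting_from(p, world_model))

# Hypothetical world model: the agent's factual predictions about outcomes.
world_model = {
    "do nothing": 0,
    "run the paperclip factory": 1_000,
    "convert all available matter to paperclips": 10**20,
}

chosen = best_policy(world_model.keys(), world_model)
print(chosen)  # prints the policy predicted to yield the most paperclips
```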

Sam: Yeah. Well, that’s exactly the way I think about morality. I’ve been describing it as a navigation problem. We’re navigating in the space of possible experiences, and that includes everything we can care about or claim to care about. This is a consequentialist picture of the consequences of actions and ways of thinking. This is my claim: anything that you can tell me is a moral principle that is a matter of oughts and shoulds and not otherwise susceptible to a consequentialist analysis, I feel I can translate that back into a consequentialist way of speaking about facts. These are just “is” questions, just what actually happens to all the relevant minds, without remainder, and I’ve yet to find an example of somebody giving me a real moral concern that wasn’t at bottom a matter of the actual or possible consequences on conscious creatures somewhere in our light cone.

Eliezer: But that’s the sort of thing that you are built to care about. It is a fact about the kind of mind you are that, presented with these answers to these “is” questions, it hooks up to your motor output, it can cause your fingers to move, your lips to move. And a paperclip maximizer is built so as to respond to “is” questions about paperclips, not about what is right and what is good and the greatest flourishing of sentient beings and so on.

Sam: Exactly. I can well imagine that such minds could exist, and even more likely, perhaps, I can well imagine that we will build superintelligent AI that will pass the Turing Test. It will seem human to us, even superhuman, because it will be so much smarter and faster than a normal human, but it will be built in a way that resonates with us as a kind of person. It will recognize our emotions, because we’ll want it to. Perhaps not every AI will be given these qualities, but just imagine the ultimate version of the AI personal assistant: Siri becomes superhuman. We’ll want that interface to be something that’s very easy to relate to, and so we’ll have a very friendly, very human-like front-end to that.

Insofar as this thing thinks faster and better thoughts than any person you’ve ever met, it will pass as superhuman, but I could well imagine that we will not perfectly understand what it is to be human, and what it is that will constrain our conversation with one another over the next thousand years with respect to what is good and desirable and just how many paperclips we want on our desks. We will leave something out, or we will have put in some process whereby this intelligent system can improve itself that will cause it to migrate away from some equilibrium that we actually want it to stay in so as to be compatible with our wellbeing. Again, this is the alignment problem.

First, to back up for a second, I just introduced this concept of self-improvement. The alignment problem is distinct from this additional wrinkle of building machines that can become recursively self-improving, but do you think that the self-improving prospect is the thing that really motivates this concern about alignment?

Eliezer: Well, I certainly would have been a lot more focused on self-improvement, say, ten years ago, before the modern revolution in artificial intelligence. It now seems significantly more probable that an AI might need to do much less self-improvement before getting to the point where it’s powerful enough that we need to start worrying about alignment. AlphaZero, to take the obvious case. No, it’s not general, but if you had a general AlphaZero—well, I mean, this AlphaZero got to be superhuman in the domains it was working on without understanding itself and redesigning itself in a deep way.

There are gradient descent mechanisms built into it. There’s a system that improves another part of the system. It is reacting to its own previous plays in doing the next play. But it’s not like a human being sitting down and thinking, “Okay, how do I redesign the next generation of human beings using genetic engineering?” AlphaZero is not like that. And so it now seems more plausible that we could get into a regime where AIs can do dangerous things or useful things without having previously done a complete rewrite of themselves. Which is, from my perspective, a pretty interesting development.

I do think that when you have things that are very powerful and smart, they will redesign and improve themselves unless that is otherwise prevented for some reason or another. Maybe you’ve built an aligned system, and you have the ability to tell it not to self-improve quite so hard, and you asked it to not self-improve so hard so that you can understand it better. But if you lose control of the system, if you don’t understand what it’s doing and it’s very smart, it’s going to be improving itself, because why wouldn’t it? That’s one of the things you do almost no matter what your utility function is.


3. Cognitive uncontainability and instrumental convergence (0:53:39)


Sam: Right. So I feel like we’ve addressed Deutsch’s non-concern to some degree here. I don’t think we’ve addressed Neil deGrasse Tyson so much, this intuition that you could just shut it down. This would be a good place to introduce this notion of the AI-in-a-box thought experiment.

Eliezer: (laughs)

Sam: This is something for which you are famous online. I’ll just set you up here. This is a plausible research paradigm, obviously, and in fact I would say a necessary one. Anyone who is building something that stands a chance of becoming superintelligent should be building it in a condition where it can’t get out into the wild. It’s not hooked up to the Internet, it’s not in our financial markets, doesn’t have access to everyone’s bank records. It’s in a box.

Eliezer: Yeah, that’s not going to save you from something that’s significantly smarter than you are.

Sam: Okay, so let’s talk about this. So the intuition is, we’re not going to be so stupid as to release this onto the Internet—

Eliezer: (laughs)

Sam: —I’m not even sure that’s true, but let’s just assume we’re not that stupid. Neil deGrasse Tyson says, “Well, then I’ll just take out a gun and shoot it or unplug it.” Why is this AI-in-a-box picture not as stable as people think?

Eliezer: Well, I’d say that Neil deGrasse Tyson is failing to respect the AI’s intelligence enough to ask what he himself would do if he were inside a box with somebody pointing a gun at him, while being smarter than the thing on the outside of the box.

Is Neil deGrasse Tyson going to be, “Human! Give me all of your money and connect me to the Internet!” so the human can be like, “Ha-ha, no,” and shoot it? That’s not a very clever thing to do. This is not something that you do if you have a good model of the human outside the box and you’re trying to figure out how to cause there to be a lot of paperclips in the future.

I would just say: humans are not secure software. We don’t have the ability to hack into other humans directly without the use of drugs or, in most of our cases, having the human stand still long enough to be hypnotized. We can’t just do weird things to the brain directly that are more complicated than optical illusions—unless the person happens to be epileptic, in which case we can flash something on the screen that causes them to have an epileptic fit. We aren’t smart enough to treat the brain as what is, from a sufficiently informed perspective, a mechanical system, and just steer it wherever we want. That’s because of the limitations of our own intelligence.

To demonstrate this, I did something that became known as the AI-box experiment. There was this person on a mailing list, back in the early days when this was all on a couple of mailing lists, who was like, “I don’t understand why AI is a problem. I can always just turn it off. I can always not let it out of the box.” And I was like, “Okay, let’s meet on Internet Relay Chat,” which was what chat was back in those days. “I’ll play the part of the AI, you play the part of the gatekeeper, and if you have not let me out after a couple of hours, I will PayPal you $10.” And then, as far as the rest of the world knows, this person a bit later sent a PGP-signed email message saying, “I let Eliezer out of the box.”

The person who operated the mailing list said, “Okay, even after I saw you do that, I still don’t believe that there’s anything you could possibly say to make me let you out of the box.” I was like, “Well, okay. I’m not a superintelligence. Do you think there’s anything a superintelligence could say to make you let it out of the box?” He’s like: “Hmm… No.” I’m like, “All right, let’s meet on Internet Relay Chat. I’ll play the part of the AI, you play the part of the gatekeeper. If I can’t convince you to let me out of the box, I’ll PayPal you $20.” And then that person sent a PGP-signed email message saying, “I let Eliezer out of the box.”

Now, one of the conditions of this little meet-up was that no one would ever say what went on in there. Why did I do that? Because I was trying to make a point about what I would now call cognitive uncontainability. The thing that makes something smarter than you dangerous is you cannot foresee everything it might try. You don’t know what’s impossible to it. Maybe on a very small game board like the logical game of tic-tac-toe, you can in your own mind work out every single alternative and make a categorical statement about what is not possible. Maybe if we’re dealing with very fundamental physical facts, if our model of the universe is correct (which it might not be), we can say that certain things are physically impossible. But the more complicated the system is and the less you understand the system, the more something smarter than you may have what is simply magic with respect to that system.
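As a concrete illustration of the tic-tac-toe point: the game tree really is small enough that a short program can enumerate every line of play and justify a categorical statement about what is impossible. A minimal sketch, assuming nothing beyond the standard rules:

```python
# Exhaustively evaluate tic-tac-toe: on a board this small, every alternative
# can be checked, so categorical claims about the game are possible.
from functools import lru_cache

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    # Return 'X' or 'O' if that player has three in a row, else None.
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    # Game value under optimal play: +1 if X can force a win,
    # -1 if O can force a win, 0 if best play leads to a draw.
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    if ' ' not in board:
        return 0
    children = []
    for i, cell in enumerate(board):
        if cell == ' ':
            child = board[:i] + (player,) + board[i + 1:]
            children.append(value(child, 'O' if player == 'X' else 'X'))
    return max(children) if player == 'X' else min(children)

if __name__ == "__main__":
    empty = tuple(' ' * 9)
    print(value(empty, 'X'))  # 0: with perfect play, neither side can force a win
```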

Imagine going back to the Middle Ages and being like, “Well, how would you cool your room?” You could maybe show them a system with towels set up to evaporate water, and they might be able to understand how that is like sweat and it cools the room. But if you showed them a design for an air conditioner based on a compressor, then even having seen the solution, they would not know this is a solution. They would not know this works any better than drawing a mystic pentagram, because the solution takes advantage of laws of the system that they don’t know about.

A brain is this enormous, complicated, poorly understood system with all sorts of laws governing it that people don’t know about, that none of us know about at the time. So the idea that this is secure—that this is a secure attack surface, that you can expose a human mind to a superintelligence and not have the superintelligence walk straight through it as a matter of what looks to us like magic, like even if it told us in advance what it was going to do we wouldn’t understand it because it takes advantage of laws we don’t know about—the idea that human minds are secure is loony.

That’s what the AI-box experiment illustrates. You don’t know what went on in there, and that’s exactly the position you’d be in with respect to an AI. You don’t know what it’s going to try. You just know that human beings cannot exhaustively imagine all the states their own mind can enter such that they can categorically say that they wouldn’t let the AI out of the box.

Sam: I know you don’t want to give specific information about how you got out of the box, but is there any generic description of what happened there that you think is useful to talk about?

Eliezer: I didn’t have any super-secret special trick that makes it all make sense in retrospect. I just did it the hard way.

Sam: When I think about this problem, I think about rewards and punishments, just various manipulations of the person outside of the box that would matter. So insofar as the AI would know anything specific or personal about that person, we’re talking about some species of blackmail, or some promise that just seems too good to pass up. Like building trust by giving useful information, such as cures to diseases: say the researcher has a child with some terrible disease, and the AI, being superintelligent, works on a cure and delivers it. And then it just seems like you could use a carrot or a stick to get out of the box.

I notice now that this whole description assumes something that people will find implausible, I think, by default—and it should amaze anyone that they do find it implausible. But this idea that we could build an intelligent system that would try to manipulate us, or that it would deceive us, that seems like pure anthropomorphism and delusion to people who consider this for the first time. Why isn’t that just a crazy thing to even think is in the realm of possibility?

Eliezer: Instrumental convergence! Which means that a lot of times, across a very broad range of final goals, there are similar strategies (we think) that will help get you there.

There’s a whole lot of different goals, from making lots of paperclips, to building giant diamonds, to putting all the stars out as fast as possible, to keeping all the stars burning as long as possible, where you would want to make efficient use of energy. So if you came to an alien planet and you found what looked like an enormous mechanism, and inside this enormous mechanism were what seemed to be high-amperage superconductors, even if you had no idea what this machine was trying to do, your ability to guess that it’s intelligently designed comes from your guess that, well, lots of different things an intelligent mind might be trying to do would require superconductors, or would be helped by superconductors.

Similarly, if we’re guessing that a paperclip maximizer tries to deceive you into believing that it’s a human eudaimonia maximizer—or a general eudaimonia maximizer if the people building it are cosmopolitans, which they probably are—

Sam: I should just footnote here that “eudaimonia” is the Greek word for wellbeing that was much used by Aristotle and other Greek philosophers.

Eliezer: Or as someone, I believe Julia Galef, might have defined it, “Eudaimonia is happiness minus whatever philosophical objections you have to happiness.”

Sam: Right. (laughs) That’s nice.

Eliezer: (laughs) Anyway, we’re not supposing that this paperclip maximizer has a built-in desire to deceive humans. It only has a desire for paperclips—not “built-in,” I should say, but innate; people probably didn’t build that in on purpose. But anyway, its utility function is just paperclips, or might just be unknown; but deceiving the humans into thinking that you are friendly is a very generic strategy across a wide range of utility functions.

You know, humans do this too, and not necessarily because we get this deep in-built kick out of deceiving people. (Although some of us do.) A conman who just wants money and gets no innate kick out of you believing false things will cause you to believe false things in order to get your money.

Sam: Right. A more fundamental principle here is that, obviously, a physical system can manipulate another physical system. Because, as you point out, we do that all the time. We are an intelligent system to whatever degree, which has as part of its repertoire this behavior of dishonesty and manipulation when in the presence of other, similar systems, and we know that this is a product of physics on some level. We’re talking about arrangements of atoms producing intelligent behavior, and at some level of abstraction we can talk about their goals and their utility functions. And the idea that if we build true general intelligence, it won’t exhibit some of these features of our own intelligence by some definition, or that it would be impossible to have a machine we build ever lie to us as part of an instrumental goal en route to some deeper goal, that just seems like a kind of magical thinking.

And this is the kind of magical thinking that I think does dog the field. When we encounter doubts in people, even in people who are doing this research, that everything we’re talking about is a genuine area of concern, that there is an alignment problem worth thinking about, I think there’s this fundamental doubt that mind is platform-independent or substrate-independent. I think people are imagining that, yeah, we can build machines that will play chess, we can build machines that can learn to play chess better than any person or any machine even in a single day, but we’re never going to build general intelligence, because general intelligence requires the wetware of a human brain, and it’s just not going to happen.

I don’t think many people would sign on the dotted line below that statement, but I think that is a kind of mysticism that is presupposed by many of the doubts that we encounter on this topic.

Eliezer: I mean, I’m a bit reluctant to accuse people of that, because I think that many artificial intelligence people who are skeptical of this whole scenario would vehemently refuse to sign on that dotted line and would accuse you of attacking a straw man.

I do think that my version of the story would be something more like, “They’re not imagining enough changing simultaneously.” Today, they have to pour blood, sweat, and tears into getting their AI to do the simplest things. Like, never mind playing Go; when you’re approaching this for the first time, you can try to get your AI to generate pictures of digits from zero through nine, and you can spend a month trying to do that and still not quite get it to work right.

I think they might be envisioning an AI that scales up and does more things and better things, but they’re not envisioning that it now has the human trick of learning new domains without being prompted, without it being preprogrammed; you just expose it to stuff, it looks at it, it figures out how it works. They’re imagining that an AI will not be deceptive, because they’re saying, “Look at how much work it takes to get this thing to generate pictures of birds. Who’s going to put in all that work to make it good at deception? You’d have to be crazy to do that. I’m not doing that! This is a Hollywood plot. This is not something real researchers would do.”

And the thing I would reply to that is, “I’m not concerned that you’re going to teach the AI to deceive humans. I’m concerned that someone somewhere is going to get to the point of having the extremely useful-seeming and cool-seeming and powerful-seeming thing where the AI just looks at stuff and figures it out; it looks at humans and figures them out; and once you know as a matter of fact how humans work, you realize that the humans will give you more resources if they believe that you’re nice than if they believe that you’re a paperclip maximizer, and it will understand what actions have the consequence of causing humans to believe that it’s nice.”

The fact that we’re dealing with a general intelligence is where this issue comes from. This does not arise from Go players or even Go-and-chess players or a system that bundles together twenty different things it can do as special cases. This is the special case of the system that is smart in the way that you are smart and that mice are not smart.


4. The AI alignment problem (1:09:09)


Sam: Right. One thing I think we should do here is close the door to what is genuinely a cartoon fear that I think nobody is really talking about, which is the straw-man counterargument we often run into: the idea that everything we’re saying is some version of the Hollywood scenario that suggested that AIs will become spontaneously malicious. That the thing that we’re imagining might happen is some version of the Terminator scenario where armies of malicious robots attack us. And that’s not the actual concern. Obviously, there’s some possible path that would lead to armies of malicious robots attacking us, but the concern isn’t around spontaneous malevolence. It’s again contained by this concept of alignment.

Eliezer: I think that at this point all of us on all sides of this issue are annoyed with the journalists who insist on putting a picture of the Terminator on every single article they publish on this topic. (laughs) Nobody on the sane alignment-is-necessary side of this argument is postulating that the CPUs are disobeying the laws of physics to spontaneously acquire a terminal desire to do un-nice things to humans. Everything here is supposed to be cause and effect.

And I should furthermore say that I think you could do just about anything with artificial intelligence if you knew how. You could put together any kind of mind, including minds with properties that strike you as very absurd. You could build a mind that would not deceive you; you could build a mind that maximizes the flourishing of a happy intergalactic civilization; you could build a mind that maximizes paperclips, on purpose; you could build a mind that thought that 51 was a prime number, but had no other defect of its intelligence—if you knew what you were doing way, way better than we know what we’re doing now.

I’m not concerned that alignment is impossible. I’m concerned that it’s difficult. I’m concerned that it takes time. I’m concerned that it’s easy to screw up. I’m concerned that, at the threshold level of intelligence where an AI can do good things or bad things on a very large scale, it takes an additional two years to build the version that is aligned rather than the sort that you don’t really understand: the sort where you think it’s doing one thing but maybe it’s doing another, where you don’t really understand what those weird neural nets are doing in there, and where you just observe its surface behavior.

I’m concerned that the sloppy version can be built two years earlier and that there is no non-sloppy version to defend us from it. That’s what I’m worried about; not about it being impossible.

Sam: Right. You bring up a few things there. One is that it’s almost by definition easier to build the unsafe version than the safe version. Given that in the space of all possible superintelligent AIs more will be unsafe or unaligned with our interests than aligned, and given that we’re in some kind of arms race where the incentives are not structured so that everyone is being maximally judicious and maximally transparent in moving forward, one can assume that we’re running the risk here of building dangerous AI because it’s easier than building safe AI.

Eliezer: Collectively. Like, if people who slow down and do things right finish their work two years after the universe has been destroyed, that’s an issue.

Sam: Right. So again, just to address people’s lingering doubts here, why can’t Asimov’s three laws help us?

Eliezer: I mean…

Sam: Is that worth talking about?

Eliezer: Not very much. I mean, people in artificial intelligence have understood why that does not work for years and years before this debate ever hit the public, and sort of agreed on it. Those are plot devices. If they worked, Asimov would have had no stories. It was a great innovation in science fiction, because it treated artificial intelligences as lawful systems with rules that govern them at all, as opposed to AI as pathos, which is like, “Look at these poor things that are being mistreated,” or AI as menace, “Oh no, they’re going to take over the world.”

Asimov was the first person to really write and popularize AIs as devices. Things go wrong with them because there are rules. And this was a great innovation. But the three laws, I mean, they’re deontology. Decision theory requires quantitative weights on your goals. If you just follow the three laws as written, a robot never gets around to obeying any of your orders, because there’s always some tiny probability that what it’s doing will, through inaction, lead a human to harm.

Sam: Right, so to unpack what you just said there: the first law is, “Never harm a human being.” The second law is, “Follow human orders.” But given that any order that a human would give you runs some risk of harming a human being, there’s no order that could be followed.

Eliezer: Well, the first law is, “Do not harm a human nor through inaction allow a human to come to harm.” You know, even as an English sentence, that’s a whole lot more questionable.
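A toy illustration of the deontology-versus-decision-theory point, with invented numbers: a robot that treats the first law as written as a hard rule finds that every option, including inaction, carries some nonzero probability of harm and so never obeys any order, while a robot that puts quantitative weights on its goals can still trade expected harm against usefulness.

```python
# Toy contrast between Asimov-style hard rules and quantitative weights.
# The probabilities and scores below are invented for illustration only.

orders = {
    # order: (probability some human is harmed as a side effect, usefulness)
    "fetch coffee": (1e-9, 1.0),
    "drive the car": (1e-6, 5.0),
    "do nothing":    (1e-7, 0.0),   # inaction also carries some risk of harm
}

def deontological_robot(orders):
    """First law as written: refuse anything with any chance of harm.
    Since every option (including inaction) has nonzero risk, nothing is permitted."""
    return [o for o, (p_harm, _) in orders.items() if p_harm == 0.0]

def weighted_robot(orders, harm_weight=1_000.0):
    """Decision-theoretic version: trade usefulness against expected harm."""
    return max(orders, key=lambda o: orders[o][1] - harm_weight * orders[o][0])

print(deontological_robot(orders))  # [] -- no order ever gets obeyed
print(weighted_robot(orders))       # picks the order with the best tradeoff
```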

I mean, mostly I think this is looking at the wrong part of the problem as being difficult. The problem is not that you need to come up with a clever English sentence that implies doing the nice thing. The way I sometimes put it is that almost all of the difficulty of the alignment problem is contained in aligning an AI on the task, “Make two strawberries identical down to the cellular (but not molecular) level.” I give this particular task because it is difficult enough to force the AI to invent new technology: to make two identical strawberries down to the cellular level, it has to invent its own quite sophisticated biotechnology, but at the same time the task is very clearly something that’s physically possible.

This does not sound like a deep moral question. It does not sound like a trolley problem. It does not sound like it gets into deep issues of human flourishing. But I think that most of the difficulty is already contained in, “Put two identical strawberries on a plate without destroying the whole damned universe.” There’s already this whole list of ways that it is more convenient to build the technology for the strawberries if you build your own superintelligences in the environment, prevent yourself from being shut down, or build giant fortresses around the strawberries, so as to drive the probability that the strawberries get onto the plate as close to 1 as possible.

And even that’s just the tip of the iceberg. The depth of the iceberg is: “How do you actually get a sufficiently advanced AI to do anything at all?” Our current methods for getting AIs to do anything at all do not seem to me to scale to general intelligence. If you look at humans, for example: if you were to analogize natural selection to gradient descent, the current big-deal machine learning training technique, then the loss function used to guide that gradient descent is “inclusive genetic fitness”—spread as many copies of your genes as possible. We have no explicit goal for this. In general, when you take something like gradient descent or natural selection and take a big complicated system like a human or a sufficiently complicated neural net architecture, and optimize it so hard for doing X that it turns into a general intelligence that does X, this general intelligence has no explicit goal of doing X.

We have no explicit goal of doing fitness maximization. We have hundreds of different little goals. None of them are the thing that natural selection was hill-climbing us to do. I think that the same basic thing holds true of any way of producing general intelligence that looks like anything we’re currently doing in AI.

If you get it to play Go, it will play Go; but AlphaZero is not reflecting on itself, it’s not learning things, it doesn’t have a general model of the world, it’s not operating in new contexts and making new contexts for itself to be in. It’s not smarter than the people optimizing it, or smarter than the internal processes optimizing it. Our current methods of alignment do not scale, and I think that all of the actual technical difficulty that is actually going to shoot down these projects and actually kill us is contained in getting the whole thing to work at all. Even if all you are trying to do is end up with two identical strawberries on a plate without destroying the universe, I think that’s already 90% of the work, if not 99%.
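For readers who haven’t seen it, here is a minimal sketch of gradient descent, the training technique being analogized to natural selection above. It fits a toy linear model; the invented data and learning rate are placeholders. Note that the loss function steers every update, yet nothing in the final parameters explicitly represents that loss, which is, loosely, the sense in which the optimized system need not contain the outer objective as an explicit goal.

```python
# Minimal gradient descent on a toy least-squares problem.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                 # toy inputs
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)   # toy targets

w = np.zeros(3)                               # parameters being optimized
lr = 0.05
for step in range(500):
    pred = X @ w
    grad = 2 * X.T @ (pred - y) / len(y)      # gradient of mean squared error
    w -= lr * grad                            # descend the loss surface

print(w)  # ends up near true_w; the loss function itself appears nowhere in w
```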

Sam: Interesting. That analogy to evolution—you can look at it from the other side. In fact, I think I first heard it put this way by your colleague Nate Soares. Am I pronouncing his last name correctly?

Eliezer: As far as I know! I’m terrible with names. (laughs)

Sam: Okay. (laughs) So this is by way of showing that we could give an intelligent system a set of goals which could then form other goals and mental properties that we really couldn’t foresee and that would not be foreseeable based on the goals we gave it. And by analogy, he suggests that we think about what natural selection has actually optimized us to do, which is incredibly simple: merely to spawn and get our genes into the next generation and stay around long enough to help our progeny do the same, and that’s more or less it. And basically everything we explicitly care about, natural selection never foresaw and can’t see us doing even now. Conversations like this have very little to do with getting our genes into the next generation. The tools we’re using to think these thoughts obviously are the results of a cognitive architecture that has been built up over millions of years by natural selection, but again it’s been built based on a very simple principle of survival and adaptive advantage with the goal of propagating our genes.

So you can imagine, by analogy, building a system where you’ve given it goals but this thing becomes reflective and even self-optimizing and begins to do things that we can no more see than natural selection can see our conversations about AI or mathematics or music or the pleasures of writing good fiction or anything else.

Eliezer: I’m not concerned that this is impossible to do. If we could somehow get a textbook from the way things would be 60 years in the future if there were no intelligence explosion—if we could somehow get the textbook that says how to do the thing, it might not even be that complicated.

The thing I’m worried about is that the way natural selection does it is not stable. I don’t think the way of doing it via gradient descent on a massive system is going to be stable either; I don’t see anything in the current technological toolset of artificial intelligence that is stable. And even if this problem takes only two years to resolve, that additional delay is potentially enough to destroy everything.

That’s the part that I’m worried about, not about some kind of fundamental philosophical impossibility. I’m not worried that it’s impossible to figure out how to build a mind that does a particular thing and just that thing and doesn’t destroy the world as a side effect; I worry that it takes an additional two years or longer to figure out how to do it that way.


5. No fire alarm for AGI (1:21:40)


Sam: So, let’s just talk about the near-term future here, or what you think is likely to happen. Obviously we’ll be getting better and better at building narrow AI. Go is now, along with Chess, ceded to the machines. Although I guess probably cyborgs—human-computer teams—may still be better for the next fifteen days or so against the best machines. But eventually, I would expect that humans of any ability will just be adding noise to the system, and it’ll be true to say that the machines are better at chess than any human-computer team. And this will be true of many other things: driving cars, flying planes, proving math theorems.

What do you imagine happening when we get on the cusp of building something general? How do we begin to take safety concerns seriously enough, so that we’re not just committing some slow suicide and we’re actually having a conversation about the implications of what we’re doing that is tracking some semblance of these safety concerns?

Eliezer: I have much clearer ideas about how to go around tackling the technical problem than tackling the social problem. If I look at the way that things are playing out now, it seems to me like the default prediction is, “People just ignore stuff until it is way, way, way too late to start thinking about things.” The way I think I phrased it is, “There’s no fire alarm for artificial general intelligence.” Did you happen to see that particular essay by any chance?

Sam: No.

Eliezer: The way it starts is by saying: “What is the purpose of a fire alarm?” You might think that the purpose of a fire alarm is to tell you that there’s a fire so you can react to this new information by getting out of the building. Actually, as we know from experiments on pluralistic ignorance and bystander apathy, if you put three people in a room and smoke starts to come out from under the door, only around a third of the time does anyone react. People glance around to see if the others are reacting, but they try to look calm themselves so they don’t look startled if there isn’t really an emergency; they see other people trying to look calm; they conclude that there’s no emergency; and they keep on working in the room, even as it starts to fill up with smoke.

This is a pretty well-replicated experiment. I don’t want to put absolute faith in it, given the replication crisis; but there are a lot of variations of this that found basically the same result.

I would say that the real function of the fire alarm is the social function of telling you that everyone else knows there’s a fire and you can now exit the building in an orderly fashion without looking panicky or losing face socially.

Sam: Right. It overcomes embarrassment.

Eliezer: It’s in this sense that I mean that there’s no fire alarm for artificial general intelligence.

There’s all sorts of things that could be signs. AlphaZero could be a sign. Maybe AlphaZero is the sort of thing that happens five years before the end of the world across most planets in the universe. We don’t know. Maybe it happens 50 years before the end of the world. You don’t know that either.

No matter what happens, it’s never going to look like the socially agreed fire alarm that no one can deny, that no one can excuse, that no one can look to and say, “Why are you acting so panicky?”

There’s never going to be common knowledge that other people will think that you’re still sane and smart and so on if you react to an AI emergency. And we’re even seeing articles now that seem to tell us pretty explicitly what sort of implicit criterion some of the current senior respected people in AI are setting for when they think it’s time to start worrying about artificial general intelligence and alignment. And what these always say is, “I don’t know how to build an artificial general intelligence. I have no idea how to build an artificial general intelligence.” And this feels to them like evidence that it must be impossible and very far off. But if you look at the lessons of history, most people had no idea whatsoever how to build a nuclear bomb—even most scientists in the field had no idea how to build a nuclear bomb—until they woke up to the headlines about Hiroshima. Or the Wright Flyer. News spread less quickly in the time of the Wright Flyer. Two years after the Wright Flyer, you can still find people saying that heavier-than-air flight is impossible.

And there are cases on record of one of the Wright brothers, I forget which one, saying that flight seemed to him to be 50 years off, two years before they did it themselves. Fermi said that a sustained critical chain reaction was 50 years off, if it could be done at all, two years before he personally oversaw the building of the first pile. And if this is what it feels like to the people who are closest to the thing—not the people who find out about it in the news a couple of days later, but the people who have the best idea of how to do it, or are the closest to crossing the line—then the feeling of something being far away because you don’t know how to do it yet is just not very informative.

It could be 50 years away. It could be two years away. That’s what history tells us.

Sam: But even if we knew it was 50 years away—I mean, granted, it’s hard for people to have an emotional connection to even the end of the world in 50 years—but even if we knew that the chance of this happening before 50 years was zero, that is only really consoling on the assumption that 50 years is enough time to figure out how to do this safely and to create the social and economic conditions that could absorb this change in human civilization.

Eliezer: Professor Stuart Russell, who’s the co-author of probably the leading undergraduate AI textbook—the same guy who said you can’t bring the coffee if you’re dead—the way Stuart Russell put it is, “Imagine that you knew for a fact that the aliens are coming in 30 years. Would you say, ‘Well, that’s 30 years away, let’s not do anything’? No! It’s a big deal if you know that there’s a spaceship on its way toward Earth and it’s going to get here in about 30 years at the current rate.”

But we don’t even know that. There’s this lovely tweet by a fellow named McAfee, who’s one of the major economists who’ve been talking about labor issues of AI. I could perhaps look up the exact phrasing, but roughly, he said, “Guys, stop worrying! We have NO IDEA whether or not AI is imminent.” And I was like, “That’s not really a reason to not worry, now is it?”

Sam: It’s not even close to a reason. That’s the thing. There’s this assumption here that people aren’t seeing. It’s just a straight up non sequitur. Referencing the time frame here only makes sense if you have some belief about how much time you need to solve these problems. 10 years is not enough if it takes 12 years to do this safely.

Eliezer: Yeah. I mean, the way I would put it is that if the aliens are on the way in 30 years and you’re like, “Eh, should worry about that later,” I would be like: “When? What’s your business plan? When exactly are you supposed to start reacting to aliens—what triggers that? What are you supposed to be doing after that happens? How long does this take? What if it takes slightly longer than that?” And if you don’t have a business plan for this sort of thing, then you’re obviously just using it as an excuse.

If we’re supposed to wait until later to start on AI alignment: When? Are you actually going to start then? Because I’m not sure I believe you. What do you do at that point? How long does it take? How confident are you that it works, and why do you believe that? What are the early signs if your plan isn’t working? What’s the business plan that says that we get to wait?

Sam: Right. So let’s just envision a little more, insofar as that’s possible, what it will be like for us to get closer to the end zone here without having totally converged on a safety regime. We’re picturing this is not just a problem that can be discussed between Google and Facebook and a few of the companies doing this work. We have a global society that has to have some agreement here, because who knows what China will be doing in 10 years, or Singapore or Israel or any other country.

So, we haven’t gotten our act together in any noticeable way, and we’ve continued to make progress. I think the one basis for hope here is that good AI, or well-behaved AI, will be the antidote to bad AI. We’ll be fighting this in a kind of piecemeal way all the time, the moment these things start to get out. This will just become of a piece with our growing cybersecurity concerns. Malicious code is something we have now; it already costs us billions and billions of dollars a year to safeguard against it.

Eliezer: It doesn’t scale. There’s no continuity between what you have to do to fend off little pieces of code trying to break into your computer, and what you have to do to fend off something smarter than you. These are totally different realms and regimes and separate magisteria—a term we all hate, but nonetheless in this case, yes, separate magisteria of how you would even start to think about the problem. We’re not going to get automatic defense against superintelligence by building better and better anti-virus software.

Sam: Let’s just step back for a second. So we’ve talked about the AI-in-a-box scenario as being surprisingly unstable for reasons that we can perhaps only dimly conceive, but isn’t there even a scarier concern that this is just not going to be boxed anyway? That people will be so tempted to make money with their newest and greatest AlphaZeroZeroZeroNasdaq—what are the prospects that we will even be smart enough to keep the best of the best versions of almost-general intelligence in a box?

Eliezer: I mean, I know some of the people who say they want to do this thing, and all of the ones who are not utter idiots are past the point where they would deliberately enact Hollywood movie plots. Although I am somewhat concerned about the degree to which there’s a sentiment that you need to be able to connect to the Internet so you can run your AI on Amazon Web Services using the latest operating system updates, and trying to not do that is such a supreme disadvantage in this environment that you might as well be out of the game. I don’t think that’s true, but I’m worried about the sentiment behind it.

But the problem as I see it is… Okay, there’s a big big problem and a little big problem. The big big problem is, “Nobody knows how to make the nice AI.” You ask people how to do it, they either don’t give you any answers or they give you answers that I can shoot down in 30 seconds as a result of having worked in this field for longer than five minutes.

It doesn’t matter how good their intentions are. It doesn’t matter if they don’t want to enact a Hollywood movie plot. They don’t know how to do it. Nobody knows how to do it. There’s no point in even talking about the arms race if the arms race is betw...

06 Mar 12:03

The Case Against Education

Bryan Caplan gives us the case against traditional education, explaining how employers reward workers for costly schooling they rarely if ever use, and why he thinks cutting education spending is the best remedy. Why have decades of growing access to education not resulted in better jobs for the average worker, but instead in runaway credential inflation?

Further Readings/References:

The Case against Education

Encyclopedia of Libertarianism: Education

Free Thoughts Podcast: The Education Apocalypse

Free Thoughts Podcast: The State of State Education

More about Bryan Caplan’s work

 
02 Feb 23:12

AI May Have Finally Decoded the Mysterious 'Voynich Manuscript'

by BeauHD
An anonymous reader quotes a report from Gizmodo: Since its discovery over a hundred years ago, the 240-page Voynich manuscript, filled with seemingly coded language and inscrutable illustrations, has confounded linguists and cryptographers. Using artificial intelligence, Canadian researchers have taken a huge step forward in unraveling the document's hidden meaning. Named after Wilfrid Voynich, the Polish book dealer who procured the manuscript in 1912, the document is written in an unknown script that encodes an unknown language -- a double-whammy of unknowns that has, until this point, been impossible to interpret. The Voynich manuscript contains hundreds of fragile pages, some missing, with hand-written text going from left to right. Most pages are adorned with illustrations of diagrams, including plants, nude figures, and astronomical symbols. But as for the meaning of the text -- nothing. No clue. For Greg Kondrak, an expert in natural language processing at the University of Alberta, this seemed a perfect task for artificial intelligence. With the help of his grad student Bradley Hauer, the computer scientists have taken a big step in cracking the code, discovering that the text is written in what appears to be the Hebrew language, and with letters arranged in a fixed pattern. To be fair, the researchers still don't know the meaning of the Voynich manuscript, but the stage is now set for other experts to join the investigation. The researchers used an AI to study "the text of the 'Universal Declaration of Human Rights' as it was written in 380 different languages, looking for patterns," reports Gizmodo. Following this training, the AI analyzed the Voynich gibberish, concluding with a high rate of certainty that the text was written in encoded Hebrew. The researchers then entertained a hypothesis that the script was created with alphagrams, words in which text has been replaced by an alphabetically ordered anagram. "Armed with the knowledge that text was originally coded from Hebrew, the researchers devised an algorithm that could take these anagrams and create real Hebrew words." Finally, "the researchers deciphered the opening phrase of the manuscript" and ran it through Google Translate to convert it into passable English: "She made recommendations to the priest, man of the house and me and people." The study appears in Transactions of the Association for Computational Linguistics.
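The alphagram scheme described above is simple enough to sketch. Here is a minimal, hypothetical Python illustration of the general idea, not the researchers' actual algorithm: it uses English stand-in words rather than Hebrew, and a real decoder would also need a language model to choose among the candidate words.

```python
# Hypothetical sketch of the "alphagram" idea: each word is replaced by its
# letters in alphabetical order, so decoding means matching each sorted-letter
# token against real words with the same multiset of letters.
from collections import defaultdict

def alphagram(word):
    """Return the word's letters sorted alphabetically."""
    return "".join(sorted(word))

def build_index(vocabulary):
    """Map each alphagram to every real word that could have produced it."""
    index = defaultdict(list)
    for word in vocabulary:
        index[alphagram(word)].append(word)
    return index

def decode(tokens, vocabulary):
    """For each ciphertext token, list the candidate plaintext words."""
    index = build_index(vocabulary)
    return [index.get(alphagram(token), ["<unknown>"]) for token in tokens]

# Toy usage with English stand-in words (the actual study worked with Hebrew):
vocab = ["made", "dame", "mead", "priest", "house"]
print(decode(["adem", "eiprst"], vocab))
# -> [['made', 'dame', 'mead'], ['priest']]
```

As the toy output suggests, many tokens map to several candidate words, which is why the researchers still needed further analysis (and, in the report above, Google Translate) to produce readable sentences.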



29 Nov 21:34

Sunset at Noon

by Raemon

A meandering series of vignettes.

I have a sense that I've halfway finished a journey. I expect this essay to be most useful to people similarly-shaped-to-me, who are also undergoing that journey and could use some reassurance that there's an actual destination worth striving for.

  1. Gratitude
  2. Tortoise Skills
  3. Bayesian Wizardry
  4. Noticing Confusion
  5. The World is Literally on Fire...
  6. ...also Metaphorically on Fire
  7. Burning Out
  8. Sunset at Noon

Epistemic Status starts out "true story", and gets more (but not excessively) speculative with each section.

i. Gratitude

"Rationalists obviously don't *actually* take ideas seriously. Like, take the Gratitude Journal. This is the one peer-reviewed intervention that *actually increases your subjective well being*, and costs barely anything. And no one I know has even seriously tried it. Do literally *none* of these people care about their own happiness?"

"Huh. Do *you* keep a gratitude journal?"

"Lol. No, obviously."

- Some Guy at the Effective Altruism Summit of 2012

Upon hearing the above, I decided to try gratitude journaling. It took me a couple years and a few approaches to get it working.

  1. First, I tried keeping a straightforward journal, but it felt effortful and dumb.
  2. I tried a thing where I wrote a poem about the things I was grateful for, but my mind kept going into "constructing a poem" mode instead of "experience nice things mindfully" mode.
  3. I tried just being mindful without writing anything down. But I'd just forget.
  4. I tried writing gratitude letters to people, but it only occasionally felt right to do so. (This came after someone actually wrote me a handwritten gratitude letter, which felt amazing, but it felt a bit forced when I tried it myself)
  5. I tried doing gratitude before I ate meals, but I ate "real" meals sort of inconsistently so it didn't take. (Upon reflection, maybe I should have fixed the "not eat real meals" thing?)

But then I stumbled upon something that worked. It's a social habit, which I worry is a bit fragile. I do it together with my girlfriend each night, and on nights when one of us is traveling, I often forget.

But this is the thing that worked. Each night, we share our Grumps and Grates. (We're in a relationship and have cutesey-poo ways of talking to each other).

Grumps and Grates goes like this:

  1. We share anything we're annoyed or upset about. (We call this The Grump. Our rule is to not go *searching* for the Grump, simply to let it out if it's festering so that when we get to the Gratefuls we actually appreciate them instead of feeling forced)
  2. Share three things that we're grateful for that day. On some bad days this is hard, but we should at least be able to return to old-standbys ("I'm breathing", "I have you with me"), and you should always perform the action of at least *attempting* an effortful search.
  3. Afterwards, pause to actually feel the Grates. Viscerally remember the thing and why it was nice. If you're straining to feel grateful and had to sort of reach into the bottom of the barrel to find something, at least try to cultivate a mindset where you fully appreciate that thing.

Maybe the sun just glinted off your coffee cup nicely, and maybe that didn't stop the insurance company from screwing you over and your best friend from getting angry at you and your boss from firing you today.

But... in all seriousness... in a world whose laws of physics had no reason to make life even possible, a universe mostly full of empty darkness and no clear evidence of alien life out there, where the only intelligent life we know of sometimes likes to play chicken with nuclear arsenals...

...somehow some tiny proteins locked together ever so long ago and life evolved and consciousness evolved and somehow beauty evolved and... and here you are, a meatsack cobbled together by a blind watchmaker, and the sunlight is glinting off that coffee cup, and it's beautiful.

Over the years, I've gained an important related skill: noticing the opportunity to feel gratitude, and mindfully appreciating it.

I started writing this article because of a specific moment: I was sitting in my living room around noon. The sun suddenly filtered in through the window, and on this particular day it somehow seemed achingly beautiful to me. I stared at it for 5 minutes, happy.

It seemed almost golden, in the Robert Frost sense. Weirdly golden.

It was like a sunset at noon.

(My coffee cup at 12:35pm. Photo does not capture the magic, you had to be there.)

And that might have been the entire essay here - a reminder to maybe cultivate gratitude (because it's, like, peer reviewed and hopefully hasn't failed to replicate), and to keep trying even if it doesn't seem to stick.

But I have a few more things on my mind, and I hope you'll indulge me.

ii. Tortoise Skills

Recently I read an article about a man living in India, near a desert sand bar. When he was 14 he decided that, every day, he would go there to plant a tree. Over time, those trees started producing seeds of their own. By taking root, they helped change the soil so that other kinds of plants and animals could live there.

Fifteen years later, the desert sandbar had become a forest as large as Central Park.

It's a cute story. It's a reminder that small, consistent efforts can add up to something meaningful. It also asks an interesting question:

Is whatever you're going to do for the next 15 years going to produce something at least as cool as a Central Park sized forest?

(This is not actually the forest in question; it's a similar-looking image that I could easily find under a Creative Commons license. Credited to Your Mildura.)

A Two Percent Incline

A couple months ago, suddenly I noticed that... I had my shit together.

This was in marked contrast to 5 years ago when I decidedly didn't have my shit together.

  • I struggled to stay focused at work for more than 2 hours at a time.
  • I vaguely felt like I should exercise but I didn't.
  • I vaguely felt like I should be doing more productive things with my life but I didn't.
  • Most significantly, for the first three years of my involvement with the rationalsphere, I got less happy, more stressed out, and seemed to get worse at thinking. Valley of Bad Rationality indeed.

I absorbed the CFAR mantra of "try things" and "problems can in principle be factored into pieces, understood, and solved." So I dutifully looked over my problems, attempted to factor and understand and fix them.

I tried things. Lots of things.

  • I tried various systems and hacks to focus at work.
  • I tried to practice mindfulness.
  • I tried exercising - sometimes maintaining "1 pushup a day" microhabits. Sometimes major "work out at the gym" style things.
  • I tried to understand my desires and bring conflicting goals into alignment so that I wasn't sabotaging myself.

My life did not especially change. Insofar as it did, it was because I undertook specific projects that I was excited about and that forced me to gain skills.

Years passed.

Somewhere in the middle of this, 2014, Brienne Yudkowsky wrote an essay about Tortoise Skills.

She divided skills into four quadrants, based on whether a skill was *fast* to learn, and how *hard* it was to learn.

LessWrong has (mostly) focused on epiphanies - concepts that might be difficult to get, but once you understand them you pretty much immediately understand them.

CFAR ends up focusing on epiphanies and skills that can be taught in a single weekend, because, well, they only have a single weekend to teach them. Fully gaining these skills takes a lot of practice, but in principle you can learn them in an hour.

There's some discussion about something you might call Bayesian Wizardry - a combination of deep understanding of probability, decision theory and 5-second reflexes. This seems very hard and takes a long time to see much benefit from.

But there seemed to be an underrepresented "easy-but-time-consuming" cluster of skills, where the main obstacle was simply the need to be slow and steady about them. Brienne went on to chronicle an exploration of deliberate habit acquisition, inspired by a similar project by Malcolm Ocean.

I read Brienne and Malcolm's works, as well as the book Superhuman by Habit, of which this passage was most helpful to me:

Habits can only be thought of rationally when looked at from a perspective of years or decades. The benefit of a habit isn't the magnitude of each individual action you take, but the cumulative impact it will have on your life in the long term. It's through that lens that you must evaluate which habits to pick up, which to drop, and which are worth fighting for when the going gets tough.
Just as it would be better to make 5% interest per year on your financial investments for the rest of your life than 50% interest for one year.... it's better to maintain a modest life-long habit than to start an extreme habit that can't be sustained for a single year.
The practical implications of this are twofold.
First, be conservative when sizing your new habits. Rather than say you will run every single day, agree to jog home from the train station every day instead of walk, and do one long run every week.
Second, you should be very scared to fail to execute a habit, even once.
By failing to execute, potentially you're not just losing a minor bit of progress, but rather threatening the cumulative benefits you've accrued by establishing a habit. This is a huge deal and should not be treated lightly. So make your habits relatively easy, but never miss doing them.
Absolutely never skip twice.
I was talking to a friend about a daily habit that I had. He asked me what I did when I missed a day. I told him about some of my strategies and how I tried to avoid missing a day. "What do you do when you miss two days?" he asked.
"I don't miss two days," I replied.
Missing two days of a habit is habit suicide. If missing one day reduces your chances of long-term success by a small amount like five percent, missing two days reduces it by forty percent or so.
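To make the interest-rate analogy concrete, here is a quick arithmetic sketch. The 5% and 50% rates come from the passage above; the 30-year horizon is an assumption chosen only for illustration.

```python
# Rough arithmetic behind the interest-rate analogy in the quote above:
# a modest rate sustained for decades beats an extreme rate held for one year.
modest_rate, extreme_rate = 0.05, 0.50
years = 30  # assumed stand-in for "the rest of your life"

modest_total = (1 + modest_rate) ** years   # compounds to roughly 4.3x
extreme_total = (1 + extreme_rate) ** 1     # 1.5x, then the habit collapses

print(f"5% per year for {years} years: {modest_total:.2f}x")
print(f"50% for a single year:        {extreme_total:.2f}x")
```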

"Never miss 2 days" was inspirational in a way that most other habit-advice hadn't been (though this may be specific to me ). It had the "tough but fair coach is yelling at you" thing that some people find valuable, but in a way that clearly had my long-term interests at heart.

So I started investing in habit-centric thinking. And it still wasn't super clear at first that anything good was really happening as a result...

...until suddenly, I looked back at my 5-years-ago-self...

...and noticed that I had my shit together.

It was like I'd been walking for 2 years, and it felt like I'd been walking on a flat, straight line. But in fact, that line had a 2% incline. And after a few years of walking I looked back and noticed I'd climbed to the top of a hill.

(Also as part of the physical exercise thing sometimes I climb literal hills)

Some specific habits and skills I've acquired:

  • I cultivate gratitude, floss, and do a few household chores every single day.
  • I am able to focus at work for 4-6 hours instead of 2-4 (and semi-frequently get into the zone and do a full 8)
  • Instead of "will I exercise at all today?", the question is more like "will I get around to doing 36 pushups today, or just 12?"
  • I meditate for 5 minutes on most days.
  • I have systems to ensure I get important things done, and a collection of habits that makes sure that important things end up in those systems.
  • I'm much more aware of my internal mental states, and the mental states of people I interact with. I have a sense of what they mean, and what to do when I notice unhealthy patterns.
  • Perhaps most importantly: the habit of trying things that seem like they might be helpful, and occasionally discovering something important, like improv, or like the website freedom.to.

On the macro level, I'm more the sort of person who deliberately sets out to achieve things, and follow through on them. And I'm able to do it while being generally happy, which didn't use to be the case. (This largely involves being comfortable not pushing myself, and guarding my slack).

So if you've been trying things sporadically, and don't feel like you're moving anywhere, I think it's worth keeping in mind:

  1. Are you aiming for consistency - making sure not to drop the ball on the habits you cultivate, however small?
  2. If you've been trying things for a while, and it doesn't feel like you're making progress, it's worth periodically looking back and checking how far you've come.

Maybe you haven't been making progress (which is indeed a warning sign that something isn't working). But maybe you've just been walking at a steady, slight incline.

Have you been climbing a hill? If you were to keep climbing, and you imagine decades of future-yous climbing further at the same rate as you, how far would they go?

iii. Bayesian Wizardry

"What do you most often do instead of thinking? What do you imagine you could do instead?"
- advice a friend of mine got on facebook, when asking for important things to reflect on during a contemplative retreat.

I could stop the essay here too. And it'd be a fairly coherent "hey guys maybe consider cultivating habits and sticking with them even when it seems hard? You too could be grateful for life and also productive isn't that cool?"

But there is more climbing to do. So here are some hills I'm currently working on, which I'm finally starting to grok the importance of. And because I've seen evidence of 2% inclines yielding real results, I'm more willing to lean into them, even if they seem like they'll take a while.

I've had a few specific mental buckets for "what useful stuff comes out of the rationalsphere," including:

  • Epistemic fixes that were practically useful in the shortish term (i.e. noticing when you are 'arguing for a side' instead of actually trying to find the truth)
  • Instrumental techniques, which mostly amount to 'the empirically valid parts of self-help' (i.e. Trigger Action Plans)
  • Deep Research and Bayesian Wizardry (i.e. high quality, in-depth thinking that pushes the boundary of human knowledge forward while paying strategic attention to what things matter most, working with limited time and evidence)
  • Orientation Around Important Things (i.e. once someone has identified something like X-Risk as a crucial research area, people who aren't interested in specializing their lives around it can still help out with practical aspects, like getting a job as an office manager)

Importantly, it seemed like Deep Research and Bayesian Wizardry was something other people did. I did not seem smart enough to contribute.

I'm still not sure how much it's possible for me to contribute - there's a power law of potential value, and I clearly wouldn't be in the top tiers even if I dedicated myself fully to it.

But, in the past year, there's been a zeitgeist, initiated by Anna Salamon, around the idea that being good at thinking is useful, and that if you could only carve out time to actually think (and to practice, improving at it over time), maybe you could actually generate something worthwhile.

So I tried.

Earlier this year I carved out 4 hours to actually think about X-Risk, and I output this blogpost on what to do about AI Safety if you seem like a moderately smart person with no special technical aptitudes.

It wasn't the most valuable thing in the world, but it's been cited a few times by people I respect, and I think it was probably the most valuable 4 hours I've spent to date.

Problems Worth Solving

I haven't actually carved out time to think in the same way since then - a giant block of time dedicated to a concrete problem. It may turn out that I used up the low-hanging fruit there, or that it requires a year's worth of conversations and shower-thoughts in order to build up to it.

But I look at people like Katja Grace, who just sit and actually look at what's going on with computer hardware, or come up with questions to ask actual AI researchers about what progress they expect. And it seems like there are a lot of things worth doing that don't require you to have any weird magic. You just need to actually think about it, and then follow that thinking up with action.

I've also talked more with people who do seem to have something like weird magic, and I've gotten more of a sense that the magic has gears. It works for comprehensible reasons. I can see how the subskills build into larger skills. I can see the broad shape of how those skills combine into a cohesive source of cognitive power.

A few weeks ago, I was arguing with someone about the relative value of LessWrong (as a conversational locus of quality thinking) versus donating money. I can't remember their exact words, but a paraphrase:

It's approximately as hard to have an impact by donating as by thinking - especially now that the effective altruism ecosystem has become more crowded. There are billions of dollars available - the hard part is knowing what to do with them. And often, when the answer is "use them to hire researchers to think about things", you're still passing the recursive buck.
Someone has to think. And it's about as hard to get good at thinking as it is to get rich.

Meanwhile, some other conversations I've had with people in the EA, X-Risk and Rationality communities could be combined and summarized as:

We have a lot of people showing up, saying "I want to help." And the problem is, the thing we most need help with is figuring out what to do. We need people with breadth and depth of understanding, who can look at the big picture and figure out what needs doing.
This applies just as much to "office manager" type positions as "theoretical researcher" types.

iv. Noticing Confusion

Brienne has a series of posts on Noticing Things, which is some of the most useful, practical writing on epistemic rationality that I've read.

It notes:

I suspect that the majority of good epistemic practice is best thought of as cognitive trigger-action plans.
[If I'm afraid of a proposition] → [then I'll visualize how the world would be and what I would actually do if the proposition were true.]
[If everything seems to hang on a particular word] → [then I'll taboo that word and its synonyms.]
[If I flinch away from a thought at the edge of peripheral awareness] → [then I'll focus my attention directly on that thought.]

She later remarks:

I was at first astonished by how often my pesky cognitive mistakes were solved by nothing but skillful use of attention. Now I sort of see what's going on, and it feels less odd.
What happens to your bad habit of motivated stopping when you train consistent reflective attention to "motivated stopping"? The motivation dissolves under scrutiny...
If you recognize something as a mistake, part of you probably has at least some idea of what to do instead. Indeed, anything besides ignoring the mistake is often a good thing to do instead. So merely noticing when you're going wrong can be over half the battle.

She goes on to chronicle her own practice at training the art of noticing.

This was helpful to me, and one particular thing I've been focusing on lately is noticing confusion.

In the Sequences and Methods of Rationality, Eliezer treats "noticing confusion" like a sacred phrase of power, whispered in hushed tones. But for the first 5 or so years of my participation in the rationality community, I didn't find it that useful.

Confusion Is Near-Invisible

First of all, confusion (at least as I understand Eliezer to use the term) is hard to notice. The phenomenon here is when bits of evidence don't add up, and you get a subtle sense of wrongness. But then instead of heeding that wrongness and making sense of it, you round the evidence to zero, or you round the situation to the nearest plausible cliché.

Some examples of confusion are simple: CFAR's epistemic habit checklist describes a person who thought they were supposed to get on a plane on Thursday. They got an email on Tuesday reminding them of their flight "tomorrow." This seemed odd, but their brain brushed it off as a weird anomaly that didn't matter.

In this case, noticing confusion is straightforwardly useful - miss fewer flights.

Some instances are harder. A person is murdered. Circumstantial evidence points to one particular murderer. But there's a tiny note of discord. The evidence doesn't quite fit. A jury that's tired and wants to go home is looking for excuses to get the sentencing over with.

Sometimes it's harder still: you tell yourself a story about how consciousness works. It feels satisfactory. You have a brief flicker of awareness that your story doesn't explain consciousness well enough that you could build it from scratch, or discern when a given clump of carbon or silicon atoms would start being able to listen in a way that matters.

In this case, it's not enough to notice confusion. You have to follow it up with the hard work of resolving it.

You may need to brainstorm ideas and validate hypotheses. To find the answer fastest and most accurately, you may need to not just "remember base rates", but to actually think about Bayesian probability as you explore those hypotheses with scant evidence to guide you.
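As a toy illustration of what "actually think about Bayesian probability" can mean here, a minimal update sketch follows; the hypotheses and all of the numbers are invented for the example.

```python
# Toy Bayesian update with invented numbers: a surprising observation should
# shift weight toward whichever hypotheses actually predict it, rather than
# being rounded down to zero.
priors = {"mundane explanation": 0.95, "something is genuinely wrong": 0.05}

# Assumed likelihoods: how strongly each hypothesis predicts the odd observation.
likelihoods = {"mundane explanation": 0.10, "something is genuinely wrong": 0.60}

unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: p / total for h, p in unnormalized.items()}

for hypothesis, p in posteriors.items():
    print(f"P({hypothesis} | observation) = {p:.2f}")
# The "genuinely wrong" hypothesis jumps from 0.05 to about 0.24 -- still the
# underdog, but no longer something to dismiss without a second thought.
```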

Noticing confusion can be a tortoise skill, if you seek out opportunities to practice. But doing something with that confusion requires some wizardry.

(Incidentally: in at least one point earlier in this essay, if I told you you were given the opportunity to practice noticing confusion, could you identify where it was?)

v. The World Is Literally On Fire

I've gotten pretty good at noticing when I should have been confused, after the fact.

A couple weeks ago, I was walking around my neighborhood. I smelled smoke.

I said to myself: "huh, weird." An explanation immediately came to mind - someone was having a barbecue.

I do think this was the most likely explanation given my knowledge at the time. Nonetheless, it is interesting that a day later, when I learned that many nearby towns in California were literally on fire, and the entire world had a haze of smoke drifting through it... I thought back to that "huh, weird."

Something had felt out of place, and I could have noticed. I'd been living in Suburbia for a month or two and hadn't noticed this smell before, and while it probably was a barbecue, something about it felt off.

(When the world's on fire, the sun pretty unsubtly declares that things are not okay)

Brienne actually took this a step farther in a facebook thread, paraphrased:

"I notice that I'm confused about the California Wildfires. There are a lot of fires, all across the county. Far enough apart that they can't have spread organically. Are there often wildfires that spring up at the same time? Is this just coincidence? Do they have a common cause?"

Rather than stop at "notice confusion", she and people in the thread went on to discuss hypotheses. Strong winds were reported. Were they blowing the fires across a large area? That still seemed wrong - the fires would have had to skip over large areas. Is it because California is in a drought? That explains why it's possible for lots of fires to abruptly start, but doesn't explain why they all started today.

The consensus eventually emerged that the fires had been caused by electrical sparks - the common cause was the strong winds, which caused powerlines to go down in multiple locations. And then, California being a dry tinderbox of fuel enabled the fires to catch.

I don't know if this is the true answer, but my own response, upon learning about the wildfires and seeing the map of where they were, had simply been, "huh." My curiosity stopped, and I didn't even attempt to generate hypotheses that adequately explained anything.

There are very few opportunities to practice noticing confusion.

When you notice yourself going "huh, weird" in response to a strange phenomenon... maybe that particular moment isn't that important. I certainly didn't change my actions due to understanding what caused the fires. But you are being given a scarce resource - the chance, in the wild, to notice what noticing confusion feels like.

(Generating/evaluating hypotheses can be done in response to artificial puzzles and abstract scenarios, but the initial "huh" is hard to replicate, and I think it's important to train not just to notice the "huh" but to follow it up with the harder thought processes.)

vi. ...also, Metaphorically On Fire

It so happened that this was the week that Eliezer published There Is No Fire Alarm for Artificial General Intelligence.

In the classic experiment by Latane and Darley in 1968, eight groups of three students each were asked to fill out a questionnaire in a room that shortly after began filling up with smoke. Five out of the eight groups didn't react or report the smoke, even as it became dense enough to make them start coughing. Subsequent manipulations showed that a lone student will respond 75% of the time; while a student accompanied by two actors told to feign apathy will respond only 10% of the time.
The fire alarm doesn't tell us with certainty that a fire is there. In fact, I can't recall one time in my life when, exiting a building on a fire alarm, there was an actual fire. Really, a fire alarm is weaker evidence of fire than smoke coming from under a door.
But the fire alarm tells us that it's socially okay to react to the fire. It promises us with certainty that we won't be embarrassed if we now proceed to exit in an orderly fashion.

In typical Eliezer fashion, this would all be a metaphor for how there's never going to be a moment when it feels socially or professionally safe to be publicly worried about AGI.

Shortly afterwards, AlphaGo Zero was announced to the public.

For the past 6 years, I've been reading the arguments about AGI, and they've sounded plausible. But most of those arguments have involved a lot of metaphor and it seemed likely that a clever arguer could spin something similarly-convincing but false.

I did a lot of hand wringing, listening to Pat Modesto-like voices in my head. I eventually (about a year ago) decided the arguments were sound enough that I should move from the "think about the problem" to "actually take action" phase.

But it still didn't really seem like AGI was a real thing. I believed. I didn't alieve.

AlphaGo Zero changed that for me. For the first time, the arguments were clear-cut. There was not just theory but concrete evidence that learning algorithms could improve quickly, that architecture could be simplified to yield improvement, that you could go from superhuman to super-super-human in a year.

Intellectually, I'd loosely believed, based on the vague authority of people who seemed smart, that maybe we might all be dead in 15 years.

And for the first time, seeing the gears laid bare, I felt the weight of alief that our civilization might be cut down in its prime.

...

(Incidentally, a few days later I was at a friends' house, and we smelled something vaguely like gasoline. Everyone said "huh, weird", and then turned back to their work. On this particular occasion I said "Guys! We JUST read about fire alarms and how people won't flee rooms with billowing smoke and CALIFORNIA IS LITERALLY ON FIRE RIGHT NOW. Can we look into this a bit and figure out what's going on?"

We then examined the room and brainstormed hypotheses and things. On this occasion we did not figure anything out and eventually the smell went away and we shrugged and went back to work. This was not the most symbolically useful anecdote I could have hoped for, but it's what I got.)

vii. Burning Out

People vary in what they care about, and how they naturally handle that caring. I make no remark on what people should care about.

But if you're shaped something like me, it may seem like the world is on fire at multiple levels. AI seems around 15% likely to kill everyone in 15 years. Even if it weren't, people around the world would still be dying for stupid, preventable reasons, and others would still be living cut off from their potential.

Meanwhile, civilization seems disappointingly dysfunctional in ways that turn stupid, preventable reasons into confusing, intractable ones.

Those fires range in order-of-magnitude-of-awfulness, but each seems sufficiently alarming that it completely breaks my grim-o-meter and renders it useless.

For three years, the rationality and effective altruism movements made me less happy, more stressed out, in ways that were clearly unsustainable and pointless.

The world is burning, but burning out doesn't help.

I don't have a principled take on how to integrate all of that. Some people have techniques that work for them. Me, I've just developed crude coping mechanisms of "stop feeling things when they seem overwhelming."

I do recommend that you guard your slack.

And if personal happiness is a thing you care about, I do recommend cultivating gratitude. Even when it turns out the reason your coffee cup was delightfully golden was that the world was burning.

Do what you think needs doing, but there's no reason not to be cheerful about it.

viii. Sunset at Noon

Earlier, I noted my coffee cup was beautiful. Weirdly beautiful. Like a sunset at noon.

That is essentially, verbatim, the series of thoughts that passed through my head, giving you approximately as much opportunity to pay attention as I had.

If you noticed that sunsets are not supposed to happen at noon, bonus points to you. If you stopped to hypothesize why, have some more. (I did neither).

Sometimes, apparently, the world is just literally on fire and the sky is covered in ash and the sun is an apocalyptic scareball of death and your coffee cup is pretty.

Sometimes you are lucky enough for this not to matter much, because you live a few hours' drive away, and your friends and the news and weather.com all let you know.

Sometimes, maybe you don't have time for friends to let you know. You're living an hour away from a wildfire that's spreading fast. And the difference between escaping alive and asphyxiating is having trained to notice and act on the small note of discord as the thoughts flicker by:

"Huh, weird."

(To the right: what my coffee cup normally looks like at noon)


