Shared posts

20 May 12:48

A proposal for the UK’s answer to Darpa

To replicate the spark that lit the US tech sector, it’s best to double down on what we’re best at.
29 Oct 12:56

There’s magic in mess: Why you should embrace a disorderly desk

by Tim Harford
Highlights


In 1726, during a long voyage from London to Philadelphia, a young printer hatched the idea of using a notebook to systematically chart his efforts to become a better man. He set out 13 virtues — including industry, justice, tranquillity and temperance — and his plan was to focus on each in turn in an endless quest for self-improvement, recording failures with a black spot in his journal. The virtue journal worked, and the black marks became scarcer and scarcer.

Benjamin Franklin kept up this practice for his entire life. What a life it was: Franklin invented bifocals and a clean-burning stove; he proved that lightning was a form of electricity and then tamed it with the lightning conductor; he charted the Gulf Stream. He organised a lending library, a fire brigade and a college. He was America’s first postmaster-general, its ambassador to France, even the president of Pennsylvania.

And yet the great man had a weakness — or so he thought. His third virtue was Order. “Let all your things have their places; let each part of your business have its time,” he wrote. While all the other virtues were mastered, one by one, Franklin never quite managed to get his desk or his diary tidy.

“My scheme of Order gave me the most trouble,” he reflected six decades later. “My faults in it vexed me so much, and I made so little progress in amendment, and had such frequent relapses, that I was almost ready to give up the attempt.” Observers agreed. One described how callers on Franklin “were amazed to behold papers of the greatest importance scattered in the most careless way over the table and floor”.

Franklin was a messy fellow his entire life, despite 60 years of trying to reform himself, and remained convinced that if only he could learn to tidy up, he would become a more successful and productive person. But any outsider can see that it is absurd to think such a rich life could have been yet further enriched by assiduous use of a filing cabinet. Franklin was deluding himself. But his error is commonplace; we’re all tidy-minded people, admiring ourselves when we keep a clean desk and uneasy when we do not. Tidiness can be useful but it’s not always a virtue. Even though Franklin never let himself admit it, there can be a kind of magic in mess.

Why is it so difficult to keep things tidy? A clue comes in Franklin’s motto, “Let all your things have their places … ” That seems to make sense. Humans tend to have an excellent spatial memory. The trouble is that modern office life presents us with a continuous stream of disparate documents arriving not only by post but via email and social media. What are the “places”, both physical and digital, for this torrent of miscellanea?

Categorising documents of any kind is harder than it seems. The writer and philosopher Jorge Luis Borges once told of a fabled Chinese encyclopaedia, the “Celestial Emporium of Benevolent Knowledge”, which organised animals into categories such as: a) belonging to the emperor, c) tame, d) sucking pigs, f) fabulous, h) included in the present classification, and m) having just broken the water pitcher.

Borges’s joke has a point: categories are difficult. Distinctions that seem practically useful — who owns what, who did what, what might make a tasty supper — are utterly unusable when taken as a whole. The problem is harder still when we must file many incoming emails an hour, building folder structures that need to make sense months or years down the line. Borgesian email folders might include: a) coming from the boss, b) tedious, c) containing appointments, d) sent to the entire company, e) urgent, f) sexually explicit, g) complaints, h) personal, i) pertaining to the year-end review, and j) about to exceed the memory allocation on the server.

Regrettably, many of these emails fit into more than one category and while each grouping itself is perfectly meaningful, they do not fit together. Some emails clearly fit into a pattern, but many do not. One may be the start of a major project or the start of nothing at all, and it will rarely be clear which is which at the moment that email arrives in your inbox. Giving documents — whether physical or digital — a proper place, as Franklin’s motto recommends, requires clairvoyance. Failing that, we muddle through the miscellany, hurriedly imposing some kind of practical organising principle on what is a rapid and fundamentally messy flow of information.

When it comes to actual paper, there's a beautiful alternative, invented in the early 1990s by Yukio Noguchi, an emeritus professor at Hitotsubashi University in Tokyo and author of books such as Super Organised Method. Noguchi doesn't try to categorise anything. Instead, he places each incoming document in a large envelope. He writes the envelope's contents neatly on its edge, and lines the envelopes up on a bookshelf, their labelled edges visible like the spines of books. Now the moment of genius: each time he uses an envelope, Noguchi places it back on the left of the shelf. Over time, recently used documents will shuffle themselves towards the left, and never-used documents will accumulate on the right. Archiving is easy: every now and again, Noguchi removes the documents on the right. To find any document in this system, he simply asks himself how recently he has seen it. It is a filing system that all but organises itself.

But wait a moment. Eric Abrahamson and David Freedman, authors of A Perfect Mess, offer the following suggestion: “Turn the row of envelopes so that the envelopes are stacked vertically instead of horizontally, place the stack on your desktop, and get rid of the envelopes.” Those instructions transform the shelf described in Super Organised Method into an old-fashioned pile of papers on a messy desk. Every time a document arrives or is consulted, it goes back on the top of the pile. Unused documents gradually settle at the bottom. Less elegant, perhaps, but basically the same system.

Computer scientists may recognise something rather familiar about this arrangement: it mirrors the way that computers handle their memory systems. Computers use memory “caches”, which are small but swift to access. A critical issue is which data should be prioritised and put in the fastest cache. This cache management problem is analogous to asking which paper you should keep on your desk, which should be in your desk drawer, and which should be in offsite storage in New Jersey. Getting the decision right makes computers a lot faster — and it can make you faster too.

Fifty years ago, the computer scientist Laszlo Belady showed that one of the fastest and most effective simple algorithms is to wait until the cache is full, then eject whichever data have gone longest without being used. This rule is called "Least Recently Used" or LRU — and it works because in computing, as in life, the fact that you've recently needed to use something is a good indication that you will need it again soon.

As Brian Christian and Tom Griffiths observe in their recent book Algorithms to Live By, while a computer might use LRU to manage a memory cache, Noguchi’s Super Organised Method uses the same rule to manage paper: recently used stuff on the left, stuff that you haven’t looked at for ages on the right. A pile of documents also implements LRU: recently touched stuff on the top, everything else sinks to the bottom.
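For the technically curious, here is a minimal sketch of the LRU rule in Python. It is an illustration of the general idea rather than anything from the book or the article, and the document names at the end are made up: the cache keeps items in order of use, and when it overflows it evicts whatever has gone longest untouched.

```python
from collections import OrderedDict

class LRUCache:
    """A tiny Least Recently Used cache: when it is full, evict the item
    that has gone longest without being used -- the digital equivalent of
    letting unread papers sink to the bottom of the pile."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._items = OrderedDict()  # ordered from least to most recently used

    def get(self, key):
        if key not in self._items:
            return None
        self._items.move_to_end(key)  # touching an item moves it to the "top of the pile"
        return self._items[key]

    def put(self, key, value):
        if key in self._items:
            self._items.move_to_end(key)
        self._items[key] = value
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)  # evict the least recently used item

# A three-slot cache behaves like a small desk (hypothetical documents).
cache = LRUCache(3)
for doc in ["tax return", "memo", "report"]:
    cache.put(doc, "...")
cache.get("tax return")         # recently consulted, so it stays near the top
cache.put("invitation", "...")  # cache is full, so "memo" (least recently used) is evicted
```

The same ordering trick is what Noguchi's shelf and the pile of papers do physically: use something, and it goes back on top; ignore it, and it drifts towards the archive.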

This isn’t to say that a pile of paper is always the very best organisational system. That depends on what is being filed, and whether several people have to make sense of the same filing system or not. But the pile of papers is not random. It has its own pragmatic structure based simply on the fact that whatever you’re using tends to stay visible and accessible. Obsolete stuff sinks out of sight. Your desk may look messy to other people but you know that, thanks to the LRU rule, it’s really an efficient self-organising rapid-access cache.

If all this sounds to you like self-justifying blather from untidy colleagues, you might just be a “filer” rather than a “piler”. The distinction between the two was first made in the 1980s by Thomas Malone, a management professor at the Massachusetts Institute of Technology. Filers like to establish a formal organisational structure for their paper documents. Pilers, by contrast, let pieces of paper build up around their desks or, as we have now learnt to say, implement an LRU-cache.

To most of us, it may seem obvious that piling is dysfunctional while filing is the act of a serious professional. Yet when researchers from the office design company Herman Miller looked at high-performing office workers, they found that they tended to be pilers. They let documents accumulate on their desks, used their physical presence as a reminder to do work, and relied on subtle cues — physical alignment, dog-ears, or a stray Post-it note — to orient themselves.

In 2001, Steve Whittaker and Julia Hirschberg, researchers at AT&T Labs, studied filers and pilers in a real office environment, and discovered why the messy approach works better than it seemingly has any right to. They tracked the behaviour of the filers and pilers over time. Who accumulated the biggest volume of documents? Whose archives worked best? And who struggled most when an office relocation forced everyone to throw paperwork away?

One might expect that disciplined filers would have produced small, useful filing systems. But Whittaker and Hirschberg found, instead, that they were sagging under the weight of bloated, useless archives. The problem was a bad case of premature filing. Paperwork would arrive, and then the filer would have to decide what to do with it. It couldn’t be left on the desk — that would be untidy. So it had to be filed somewhere. But most documents have no long-term value, so in an effort to keep their desks clear, the filers were using filing cabinets as highly structured waste-paper baskets. Useful material was hemmed in by well-organised dross.

“You can’t know your own information future,” says Whittaker, who is now a professor of psychology at University of California Santa Cruz, and co-author of The Science of Managing Our Digital Stuff. People would create folder structures that made sense at the time but that would simply be baffling to their own creators months or years later. Organisational categories multiplied. One person told Whittaker and Hirschberg: “I had so much stuff filed. I didn’t know where everything was, and I’d found that I had created second files for something in what seemed like a logical place, but not the only logical place … I ended up having the same thing in two places or I had the same business unit stuff in five different places.”

As for the office move, it was torture for the filers. They had too much material and had invested too much time in organising it. One commented that it was “gruesome … you’re casting off your first-born”. Whittaker reminds me that these people were not discarding their children. They weren’t even discarding family photographs and keepsakes. They were throwing away office memos and dog-eared corporate reports. “It’s very visceral,” Whittaker says. “People’s identity is wrapped up in their jobs, and in the information professions your identity is wrapped up with your information.” And yet the happy-go-lucky pilers, in their messy way, coped far better. They used their desks as temporary caches for documents. The good stuff would remain close at hand, easy to use and to throw away when finished. Occasionally, the pilers would grab a pile, riffle through it and throw most of it away. And when they did file material, they did so in archives that were small, practical and actively used.

Whittaker points out that the filers struggled because the categories they created turned out not to work well as times changed. This suggests that tidiness can work, but only when documents or emails arrive with an obvious structure. My own desk is messy but my financial records are neat — not because they’re more important but because the record-keeping required for accountancy is predictable.

One might object that whatever researchers have concluded about paper documents is obsolete, as most documents are now digital. Surely the obvious point of stress is now the email inbox? But Whittaker’s interest in premature filing actually started in 1996 with an early study of email overload. “The thing we observed was failed folders,” he says. “Tiny email folders with one or two items.”

It turns out that the fundamental problem with email is the same as the problem with paper on the desk: people try to clear their inbox by sorting the email into folders but end up prematurely filing in folder structures that turn out not to work well. In 2011, Whittaker and colleagues published a research paper with the title “Am I Wasting My Time Organizing Email?”. The answer is: yes, you are. People who use the search function find their email more quickly than those who click through carefully constructed systems of folders. The folder system feels better organised but, unless the information arrives with a predictable structure, creating folders is laborious and worse than useless.

So we know that carefully filing paper documents is often counterproductive. Email should be dumped in a few broad folders — or one big archive — rather than a careful folder hierarchy. What then should we do with our calendars? There are two broad approaches. One — analogous to the “filer” approach — is to organise one’s time tightly, scheduling each task in advance and using the calendar as a to-do list. As Benjamin Franklin expressed it: “Let each part of your business have its time.” The alternative avoids the calendar as much as possible, noting only fixed appointments. Intuitively, both approaches have something going for them, so which works best?

Fortunately we don’t need to guess, because three psychologists, Daniel Kirschenbaum, Laura Humphrey and Sheldon Malett, have already run the experiment. Thirty-five years ago, Kirschenbaum and his colleagues recruited a group of undergraduates for a short course designed to improve their study skills. The students were randomly assigned one of three possible pieces of coaching. There was a control group, which was given simple time-management advice such as, “Take breaks of five to 10 minutes after every ½-1½ hour study session.” The other two groups got those tips but they were also given much more specific advice as to how to use their calendars. The “monthly plan” group were instructed to set goals and organise study activities across the space of a month; in contrast, the “daily plan” group were told to micromanage their time, planning activities and setting goals within the span of a single day.

The researchers assumed that the planners who set quantifiable daily goals would do better than those with vaguer monthly plans. In fact, the daily planners started brightly but quickly became hopelessly demotivated, with their study effort collapsing to eight hours a week — even worse than the 10 hours for those with no plan at all. But the students on the monthly plans maintained a consistent study habit of 25 hours a week throughout the course. The students’ grades, unsurprisingly, reflected their work effort.

The problem is that the daily plans get derailed. Life is unpredictable. A missed alarm, a broken washing machine, a dental appointment, a friend calling by for a coffee — or even the simple everyday fact that everything takes longer than you expect — all these obstacles proved crushing for people who had used their calendar as a to-do list.

Like the document pilers, the monthly planners adopted a loose, imperfect and changeable system that happens to work just fine in a loose, imperfect and changeable world. The daily planners, like the filers, imposed a tight, tidy-minded system that shattered on contact with a messy world.

Some people manage to take this lesson to extremes. Marc Andreessen — billionaire entrepreneur and venture capitalist — decided a decade ago to stop writing anything in his calendar. If something was worth doing, he figured, it was worth doing immediately. “I’ve been trying this tactic as an experiment,” he wrote in 2007. “And I am so much happier, I can’t even tell you.”

Arnold Schwarzenegger has adopted much the same approach. He insisted on keeping his diary clear when he was a film star. He even tried the same policy when governor of California. “Appointments are always a no-no. Planning ahead is a no-no,” he told The New York Times. Politicians, lobbyists and activists had to treat him like a popular walk-up restaurant: they showed up and hoped to get a slot. Of course, this was in part a pure status play. But it was more than that. Schwarzenegger knew that an overstuffed diary allows no room to adapt to circumstances.

Naturally, Schwarzenegger and Andreessen can make the world wait to meet them. You and I can’t. But we probably could take a few steps in the same direction, making fewer firm commitments to others and to ourselves, leaving us the flexibility to respond to what life throws at us. A plan that is too finely woven will soon lie in tatters. Daily plans are tidy but life is messy.

The truth is that getting organised is often a matter of soothing our anxieties — or the anxieties of tidy-minded colleagues. It can simply be an artful way of feeling busy while doing nothing terribly useful. Productivity guru Merlin Mann, host of a podcast called Back To Work, has a telling metaphor. Imagine making sandwiches in a deli, says Mann. In comes the first sandwich order. You’re about to reach for the mayonnaise and a couple of slices of sourdough. But then more orders start coming in.

Mann knows all too well how we tend to react. Instead of making the first sandwich, we start to ponder organisational systems. Separate the vegetarian and the meat? Should toasted sandwiches take priority?

There are two problems here. First, there is no perfect way to organise a fast-moving sandwich queue. Second, the time we spend trying to get organised is time we don’t spend getting things done. Just make the first sandwich. If we just got more things done, decisively, we might find we had less need to get organised.

Of course, sometimes we need a careful checklist (if, say, we’re building a house) or a sophisticated reference system (if we’re maintaining a library, for example). But most office workers are neither construction managers nor librarians. Yet we share Benjamin Franklin’s mistaken belief that if only we were more neatly organised, then we would live more productive and more admirable lives. Franklin was too busy inventing bifocals and catching lightning to get around to tidying up his life. If he had been working in a deli, you can bet he wouldn’t have been organising sandwich orders. He would have been making sandwiches.
Image by Benjamin Swanson. This article was first published in the Financial Times magazine and is inspired by ideas from my new book, “Messy”.


30 Apr 10:10

How Politicians Poisoned Statistics

by Tim Harford
Highlights

We have more data — and the tools to analyse and share them — than ever before. So why is the truth so hard to pin down?

In January 2015, a few months before the British general election, a proud newspaper resigned itself to the view that little good could come from the use of statistics by politicians. An editorial in the Guardian argued that in a campaign that would be “the most fact-blitzed in history”, numerical claims would settle no arguments and persuade no voters. Not only were numbers useless for winning power, it added, they were useless for wielding it, too. Numbers could tell us little. “The project of replacing a clash of ideas with a policy calculus was always dubious,” concluded the newspaper. “Anyone still hankering for it should admit their number’s up.”

This statistical capitulation was a dismaying read for anyone still wedded to the idea — apparently a quaint one — that gathering statistical information might help us understand and improve our world. But the Guardian’s cynicism can hardly be a surprise. It is a natural response to the rise of “statistical bullshit” — the casual slinging around of numbers not because they are true, or false, but to sell a message.

Politicians weren’t always so ready to use numbers as part of the sales pitch. Recall Ronald Reagan’s famous suggestion to voters on the eve of his landslide defeat of President Carter: “Ask yourself, ‘Are you better off now than you were four years ago?’” Reagan didn’t add any statistical garnish. He knew that voters would reach their own conclusions.

The British election campaign of spring last year, by contrast, was characterised by a relentless statistical crossfire. The shadow chancellor of the day, Ed Balls, declared that a couple with children (he didn’t say which couple) had lost £1,800 thanks to the government’s increase in value added tax. David Cameron, the prime minister, countered that 94 per cent of working households were better off thanks to recent tax changes, while the then deputy prime minister Nick Clegg was proud to say that 27 million people were £825 better off in terms of the income tax they paid.

Could any of this be true? Yes — all three claims were. But Ed Balls had reached his figure by summing up extra VAT payments over several years, a strange method. If you offer to hire someone for £100,000, and then later admit you meant £25,000 a year for a four-year contract, you haven’t really lied — but neither have you really told the truth. And Balls had looked only at one tax. Why not also consider income tax, which the government had cut? Clegg boasted about income-tax cuts but ignored the larger rise in VAT. And Cameron asked to be evaluated only on his pre-election giveaway budget rather than the tax rises he had introduced earlier in the parliament — the equivalent of punching someone on the nose, then giving them a bunch of flowers and pointing out that, in floral terms, they were ahead on the deal.

Each claim was narrowly true but broadly misleading. Not only did the clashing numbers confuse but none of them helped answer the crucial question of whether Cameron and Clegg had made good decisions in office.

To ask whether the claims were true is to fall into a trap. None of these politicians had any interest in playing that game. They were engaged in another pastime entirely.

Thirty years ago, the Princeton philosopher Harry Frankfurt published an essay in an obscure academic journal, Raritan. The essay’s title was “On Bullshit”. (Much later, it was republished as a slim volume that became a bestseller.) Frankfurt was on a quest to understand the meaning of bullshit — what was it, how did it differ from lies, and why was there so much of it about?

Frankfurt concluded that the difference between the liar and the bullshitter was that the liar cared about the truth — cared so much that he wanted to obscure it — while the bullshitter did not. The bullshitter, said Frankfurt, was indifferent to whether the statements he uttered were true or not. “He just picks them out, or makes them up, to suit his purpose.”

Statistical bullshit is a special case of bullshit in general, and it appears to be on the rise. This is partly because social media — a natural vector for statements made purely for effect — are also on the rise. On Instagram and Twitter we like to share attention-grabbing graphics, surprising headlines and figures that resonate with how we already see the world. Unfortunately, very few claims are eye-catching, surprising or emotionally resonant because they are true and fair. Statistical bullshit spreads easily these days; all it takes is a click.

Consider a widely shared list of homicide “statistics” attributed to the “Crime Statistics Bureau — San Francisco”, asserting that 81 per cent of white homicide victims were killed by “blacks”. It takes little effort to establish that the Crime Statistics Bureau of San Francisco does not exist, and not much more digging to discover that the data are utterly false. Most murder victims in the United States are killed by people of their own race; the FBI’s crime statistics from 2014 suggest that more than 80 per cent of white murder victims were killed by other white people.

Somebody, somewhere, invented the image in the hope that it would spread, and spread it did, helped by a tweet from Donald Trump, the current frontrunner for the Republican presidential nomination, that was retweeted more than 8,000 times. One can only speculate as to why Trump lent his megaphone to bogus statistics, but when challenged on Fox News by the political commentator Bill O’Reilly, he replied, “Hey, Bill, Bill, am I gonna check every statistic?”

Harry Frankfurt’s description of the bullshitter would seem to fit Trump perfectly: “He does not care whether the things he says describe reality correctly.”

While we can’t rule out the possibility that Trump knew the truth and was actively trying to deceive his followers, a simpler explanation is that he wanted to win attention and to say something that would resonate with them. One might also guess that he did not check whether the numbers were true because he did not much care one way or the other. This is not a game of true and false. This is a game of politics.

While much statistical bullshit is careless, it can also be finely crafted. “The notion of carefully wrought bullshit involves … a certain inner strain,” wrote Harry Frankfurt but, nevertheless, the bullshit produced by spin-doctors can be meticulous. More conventional politicians than Trump may not much care about the truth but they do care about being caught lying.

Carefully wrought bullshit was much in evidence during last year’s British general election campaign. I needed to stick my nose in and take a good sniff on a regular basis because I was fact-checking on behalf of the BBC’s More or Less programme. Again and again I would find myself being asked on air, “Is that claim true?” and finding that the only reasonable answer began with “It’s complicated”.

Take Ed Miliband’s claim before the last election that “people are £1,600 a year worse off” than they were when the coalition government came to power. Was that claim true? Arguably, yes.

But we need to be clear that by “people”, the then Labour leader was excluding half the adult population. He was not referring to pensioners, benefit recipients, part-time workers or the self-employed. He meant only full-time employees, and, more specifically, only their earnings before taxes and benefits.

Even this narrower question of what was happening to full-time earnings is a surprisingly slippery one. We need to take an average, of course. But what kind of average? Labour looked at the change in median wages, which were stagnating in nominal terms and falling after inflation was taken into account.

That seems reasonable — but the median is a problematic measure in this case. Imagine nine people, the lowest-paid with a wage of £1, the next with a wage of £2, up to the highest-paid person with a wage of £9. The median wage is the wage of the person in the middle: it’s £5.

Now imagine that everyone receives a promotion and a pay rise of £1. The lowly worker with a wage of £1 sees his pay packet double to £2. The next worker up was earning £2 and now she gets £3. And so on. But there’s also a change in the composition of the workforce: the best-paid worker retires and a new apprentice is hired at a wage of £1. What’s happened to people’s pay? In a sense, it has stagnated. The pattern of wages hasn’t changed at all and the median is still £5.

But if you asked the individual workers about their experiences, they would all tell you that they had received a generous pay rise. (The exceptions are the newly hired apprentice and the recent retiree.) While this example is hypothetical, at the time Miliband made his comments something similar was happening in the real labour market. The median wage was stagnating — but among people who had worked for the same employer for at least a year, the median worker was receiving a pay rise, albeit a modest one.
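To make the arithmetic concrete, here is a minimal sketch in Python of the same hypothetical nine-worker example: every continuing worker gets a £1 rise, yet the median wage does not move.

```python
from statistics import median

# Nine hypothetical workers earning £1, £2, ..., £9; the median wage is £5.
wages = list(range(1, 10))
print(median(wages))  # 5

# A year later: the top earner (£9) retires, everyone else gets a £1 rise,
# and a new apprentice is hired on £1.
wages_next_year = sorted([w + 1 for w in wages if w != 9] + [1])
print(wages_next_year)          # [1, 2, 3, 4, 5, 6, 7, 8, 9] -- the same pattern as before
print(median(wages_next_year))  # still 5, although every continuing worker had a rise
```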

Another source of confusion: if wages for the low-paid and the high-paid are rising but wages in the middle are sagging, then the median wage can fall, even though the median wage increase is healthy. The UK labour market has long been prone to this kind of “job polarisation”, where demand for jobs is strongest for the highest and lowest-paid in the economy. Job polarisation means that the median pay rise can be sizeable even if median pay has not risen.

Confused? Good. The world is a complicated place; it defies description by sound bite statistics. No single number could ever answer Ronald Reagan’s question — “Are you better off now than you were four years ago?” — for everyone in a country.

So, to produce Labour’s figure of “£1,600 worse off”, the party’s press office had to ignore the self-employed, the part-timers, the non-workers, compositional effects and job polarisation. They even changed the basis of their calculation over time, switching between different measures of wages and different measures of inflation, yet miraculously managing to produce a consistent answer of £1,600. Sometimes it’s easier to make the calculation produce the number you want than it is to reprint all your election flyers.


Such careful statistical spin-doctoring might seem a world away from Trump’s reckless retweeting of racially charged lies. But in one sense they were very similar: a political use of statistics conducted with little interest in understanding or describing reality. Miliband’s project was not “What is the truth?” but “What can I say without being shown up as a liar?”

Unlike the state of the UK job market, his incentives were easy to understand. Miliband needed to hammer home a talking point that made the government look bad. As Harry Frankfurt wrote back in the 1980s, the bullshitter “is neither on the side of the true nor on the side of the false. His eye is not on the facts at all … except insofar as they may be pertinent to his interest in getting away with what he says.”

Such complexities put fact-checkers in an awkward position. Should they say that Ed Miliband had lied? No: he had not. Should they say, instead, that he had been deceptive or misleading? Again, no: it was reasonable to say that living standards had indeed been disappointing under the coalition government.

Nevertheless, there was a lot going on in the British economy that the figure omitted — much of it rather more flattering to the government. Full Fact, an independent fact-checking organisation, carefully worked through the paper trail and linked to all the relevant claims. But it was powerless to produce a fair and representative snapshot of the British labour market that had as much power as Ed Miliband’s seven-word sound bite. No such snapshot exists. Truth is usually a lot more complicated than statistical bullshit.

On July 16 2015, the UK health secretary Jeremy Hunt declared: “Around 6,000 people lose their lives every year because we do not have a proper seven-day service in hospitals. You are 15 per cent more likely to die if you are admitted on a Sunday compared to being admitted on a Wednesday.”

This was a statistic with a purpose. Hunt wanted to change doctors’ contracts with the aim of getting more weekend work out of them, and bluntly declared that the doctors’ union, the British Medical Association, was out of touch and that he would not let it block his plans: “I can give them 6,000 reasons why.”

Despite bitter opposition and strike action from doctors, Hunt’s policy remained firm over the following months. Yet the numbers he cited to support it did not. In parliament in October, Hunt was sticking to the 15 per cent figure, but the 6,000 deaths had almost doubled: “According to an independent study conducted by the BMJ, there are 11,000 excess deaths because we do not staff our hospitals properly at weekends.”

Arithmetically, this was puzzling: how could the elevated risk of death stay the same but the number of deaths double? To add to the suspicions about Hunt’s mathematics, the editor-in-chief of the British Medical Journal, Fiona Godlee, promptly responded that the health secretary had publicly misrepresented the BMJ research.

Undaunted, the health secretary bounced back in January with the same policy and some fresh facts: “At the moment we have an NHS where if you have a stroke at the weekends, you’re 20 per cent more likely to die. That can’t be acceptable.”

All this is finely wrought bullshit — a series of ever-shifting claims that can be easily repeated but are difficult to unpick. As Hunt jumped from one form of words to another, he skipped lightly ahead of fact-checkers as they tried to pin him down. Full Fact concluded that Hunt’s statement about 11,000 excess deaths had been untrue, and asked him to correct the parliamentary record. His office responded with a spectacular piece of bullshit, saying (I paraphrase) that whether or not the claim about 11,000 excess deaths was true, similar claims could be made that were.

So, is it true? Do 6,000 people — or 11,000 — die needlessly in NHS hospitals because of poor weekend care? Nobody knows for sure; Jeremy Hunt certainly does not. It’s not enough to show that people admitted to hospital at the weekend are at an increased risk of dying there. We need to understand why — a question that is essential for good policy but inconvenient for politicians.

One possible explanation for the elevated death rate for weekend admissions is that the NHS provides patchy care and people die as a result. That is the interpretation presented as bald fact by Jeremy Hunt. But a more straightforward explanation is that people are only admitted to hospital at the weekend if they are seriously ill. Less urgent cases wait until weekdays. If weekend patients are sicker, it is hardly a surprise that they are more likely to die. Allowing non-urgent cases into NHS hospitals at weekends wouldn’t save any lives, but it would certainly make the statistics look more flattering. Of course, epidemiologists try to correct for the fact that weekend patients tend to be more seriously ill, but few experts have any confidence that they have succeeded.

A more subtle explanation is that shortfalls in the palliative care system may create the illusion that hospitals are dangerous. Sometimes a patient is certain to die, but the question is where — in a hospital or a palliative hospice? If hospice care is patchy at weekends then a patient may instead be admitted to hospital and die there. That would certainly reflect poor weekend care. It would also add to the tally of excess weekend hospital deaths, because during the week that patient would have been admitted to, and died in, a palliative hospice. But it is not true that the death was avoidable.

Does it seem like we’re getting stuck in the details? Well, yes, perhaps we are. But improving NHS care requires an interest in the details. If there is a problem in palliative care hospices, it will not be fixed by improving staffing in hospitals.

“Even if you accept that there’s a difference in death rates,” says John Appleby, the chief economist of the King’s Fund health think-tank, “nobody is able to say why it is. Is it lack of diagnostic services? Lack of consultants? We’re jumping too quickly from a statistic to a solution.”


This matters — the NHS has a limited budget. There are many things we might want to spend money on, which is why we have the National Institute for Health and Care Excellence (Nice) to weigh up the likely benefits of new treatments and decide which offer the best value for money.

Would Jeremy Hunt’s push towards a seven-day NHS pass the Nice cost-benefit threshold? Probably not. Our best guess comes from a 2015 study by health economists Rachel Meacock, Tim Doran and Matt Sutton, which estimates that the NHS has many cheaper ways to save lives. A more comprehensive assessment might reach a different conclusion but we don’t have one because the Department of Health, oddly, hasn’t carried out a formal health impact assessment of the policy it is trying to implement.

This is a depressing situation. The government has devoted considerable effort to producing a killer number: Jeremy Hunt’s “6,000 reasons” why he won’t let the British Medical Association stand in his way. It continues to produce statistical claims that spring up like hydra heads: when one claim is discredited, Hunt’s office simply asserts that another one can be found to take its place. Yet the government doesn’t seem to have bothered to gather the statistics that would actually answer the question of how the NHS could work better.

This is the real tragedy. It’s not that politicians spin things their way — of course they do. That is politics. It’s that politicians have grown so used to misusing numbers as weapons that they have forgotten that, used properly, they are tools.

“You complain that your report would be dry. The dryer the better. Statistics should be the dryest of all reading,” wrote the great medical statistician William Farr in a letter in 1861. Farr sounds like a caricature of a statistician, and his prescription — convey the greatest possible volume of information with the smallest possible amount of editorial colour — seems absurdly ill-suited to the modern world.

But there is a middle ground between the statistical bullshitter, who pays no attention to the truth, and William Farr, for whom the truth must be presented without adornment. That middle ground is embodied by the recipient of William Farr’s letter advising dryness. She was the first woman to be elected to the Royal Statistical Society: Florence Nightingale.

Nightingale is the most celebrated nurse in British history, famous for her lamplit patrols of the Barrack Hospital in Scutari, now a district of Istanbul. The hospital was a death trap, with thousands of soldiers from the Crimean front succumbing to typhus, cholera and dysentery as they tried to recover from their wounds in cramped conditions next to the sewers. Nightingale, who did her best, initially believed that the death toll was due to lack of food and supplies. Then, in the spring of 1855, a sanitary commission sent from London cleaned up the hospital, whitewashing the walls, carting away filth and dead animals and flushing out the sewers. The death rate fell sharply.

Nightingale returned to Britain and reviewed the statistics, concluding that she had paid too little attention to sanitation and that most military and medical professionals were making the same mistake, leading to hundreds of thousands of deaths. She began to campaign for better public health measures, tighter laws on hygiene in rented properties, and improvements to sanitation in barracks and hospitals across the country. In doing so, a mere nurse had to convince the country’s medical and military establishments, led by England’s chief medical officer, John Simon, that they had been doing things wrong all their lives.

A key weapon in this lopsided battle was statistical evidence. But Nightingale disagreed with Farr on how that evidence should be presented. “The dryer the better” would not serve her purposes. Instead, in 1857, she crafted what has become known as the Rose Diagram, a beautiful array of coloured wedges showing the deaths from infectious diseases before and after the sanitary improvements at Scutari.


The Rose Diagram isn’t a dry presentation of statistical truth. It tells a story. Its structure divides the death toll into two periods — before the sanitary improvements, and after. In doing so, it highlights a sharp break that is less than clear in the raw data. And the Rose Diagram also gently obscures other possible interpretations of the numbers — that, for example, the death toll dropped not because of improved hygiene but because winter was over. The Rose Diagram is a marketing pitch for an idea. The idea was true and vital, and Nightingale’s campaign was successful. One of her biographers, Hugh Small, argues that the Rose Diagram ushered in health improvements that raised life expectancy in the UK by 20 years and saved millions of lives.

What makes Nightingale’s story so striking is that she was able to see that statistics could be tools and weapons at the same time. She educated herself using the data, before giving it the makeover it required to convince others. Though the Rose Diagram is a long way from “the dryest of all reading”, it is also a long way from bullshit. Florence Nightingale realised that the truth about public health was so vital that it could not simply be recited in a monotone. It needed to sing.

The idea that a graph could change the world seems hard to imagine today. Cynicism has set in about statistics. Many journalists draw no distinction between a systematic review of peer-reviewed evidence and a survey whipped up in an afternoon to sell biscuits or package holidays: it’s all described as “new research”. Politicians treat statistics not as the foundation of their argument but as decoration — “spray-on evidence” is the phrase used by jaded civil servants. But a freshly painted policy without foundations will not last long before the cracks show through.

“Politicians need to remember: there is a real world and you want to try to change it,” says Will Moy, the director of Full Fact. “At some stage you need to engage with the real world — and that is where the statistics come in handy.”

That should be no problem, because it has never been easier to gather and analyse informative statistics. Nightingale and Farr could not have imagined the data that modern medical researchers have at their fingertips. The gold standard of statistical evidence is the randomised controlled trial, because using a randomly chosen control group protects against biased or optimistic interpretations of the evidence. Hundreds of thousands of such trials have been published, most of them within the past 25 years. In non-medical areas such as education, development aid and prison reform, randomised trials are rapidly catching on: thousands have been conducted. The British government, too, has been supporting policy trials — for example, the Education Endowment Foundation, set up with £125m of government funds just five years ago, has already backed more than 100 evaluations of educational approaches in English schools. It favours randomised trials wherever possible.

The frustrating thing is that politicians seem quite happy to ignore evidence — even when they have helped to support the researchers who produced it. For example, when the chancellor George Osborne announced in his budget last month that all English schools were to become academies, making them independent of the local government, he did so on the basis of faith alone. The Sutton Trust, an educational charity which funds numerous research projects, warned that on the question of whether academies had fulfilled their original mission of improving failing schools in poorer areas, “our evidence suggests a mixed picture”. Researchers at the LSE’s Centre for Economic Performance had a blunter description of Osborne’s new policy: “a non-evidence based shot in the dark”.

This should be no surprise. Politicians typically use statistics like a stage magician uses smoke and mirrors. Over time, they can come to view numbers with contempt. Voters and journalists will do likewise. No wonder the Guardian gave up on the idea that political arguments might be settled by anything so mundane as evidence. The spin-doctors have poisoned the statistical well.

But despite all this despair, the facts still matter. There isn’t a policy question in the world that can be settled by statistics alone but, in almost every case, understanding the statistical background is a tremendous help. Hetan Shah, the executive director of the Royal Statistical Society, has lost count of the number of times someone has teased him with the old saying about “lies, damned lies and statistics”. He points out that while it’s easy to lie with statistics, it’s even easier to lie without them.

Perhaps the lies aren’t the real enemy here. Lies can be refuted; liars can be exposed. But bullshit? Bullshit is a stickier problem. Bullshit corrodes the very idea that the truth is out there, waiting to be discovered by a careful mind. It undermines the notion that the truth matters. As Harry Frankfurt himself wrote, the bullshitter “does not reject the authority of the truth, as the liar does, and oppose himself to it. He pays no attention to it at all. By virtue of this, bullshit is a greater enemy of the truth than lies are.”

 

Written for and first published in the FT Magazine


01 Jan 14:53

Multi-tasking: how to survive in the 21st century

by Tim Harford
Highlights

Modern life now forces us to do a multitude of things at once — but can we? Should we?

Forget invisibility or flight: the superpower we all want is the ability to do several things at once. Unlike other superpowers, however, being able to multitask is now widely regarded as a basic requirement for employability. Some of us sport computers with multiple screens, to allow tweeting while trading pork bellies and frozen orange juice. Others make do with reading a Kindle while poking at a smartphone and glancing at a television in the corner with its two rows of scrolling subtitles. We think nothing of sending an email to a colleague to suggest a quick coffee break, because we can feel confident that the email will be read within minutes.

All this is simply the way the modern world works. Multitasking is like being able to read or add up, so fundamental that it is taken for granted. Doing one thing at a time is for losers — recall Lyndon Johnson’s often bowdlerised dismissal of Gerald Ford: “He can’t fart and chew gum at the same time.”

The rise of multitasking is fuelled by technology, of course, and by social change as well. Husbands and wives no longer specialise as breadwinners and homemakers; each must now do both. Work and play blur. Your friends can reach you on your work email account at 10 o’clock in the morning, while your boss can reach you on your mobile phone at 10 o’clock at night. You can do your weekly shop sitting at your desk and you can handle a work query in the queue at the supermarket.

This is good news in many ways — how wonderful to be able to get things done in what would once have been wasted time! How delightful the variety of it all is! No longer must we live in a monotonous, Taylorist world where we must painstakingly focus on repetitive tasks until we lose our minds.

And yet we are starting to realise that the blessings of a multitasking life are mixed. We feel overwhelmed by the sheer number of things we might plausibly be doing at any one time, and by the feeling that we are on call at any moment.

And we fret about the unearthly appetite of our children to do everything at once, flipping through homework while chatting on WhatsApp, listening to music and watching Game of Thrones. (According to a recent study by Sabrina Pabilonia of the US Bureau of Labor Statistics, for over half the time that high-school students spend doing homework, they are also listening to music, watching TV or otherwise multitasking. That trend is on the increase.) Can they really handle all these inputs at once? They seem to think so, despite various studies suggesting otherwise.

And so a backlash against multitasking has begun — a kind of Luddite self-help campaign. The poster child for uni-tasking was launched on the crowdfunding website Kickstarter in December 2014. For $499 — substantially more than a multifunctional laptop — “The Hemingwrite” computer promised a nice keyboard, a small e-ink screen and an automatic cloud back-up. You couldn’t email on the Hemingwrite. You couldn’t fool around on YouTube, and you couldn’t read the news. All you could do was type. The Hemingwrite campaign raised over a third of a million dollars.

The Hemingwrite (now rebranded the Freewrite) represents an increasingly popular response to the multitasking problem: abstinence. Programs such as Freedom and Self-Control are now available to disable your browser for a preset period of time. The popular blogging platform WordPress offers “distraction-free writing”. The Villa Stéphanie, a hotel in Baden-Baden, offers what has been branded the “ultimate luxury”: a small silver switch beside the hotel bed that will activate a wireless blocker and keep the internet and all its temptations away.

The battle lines have been drawn. On one side: the culture of the modern workplace, which demands that most of us should be open to interruption at any time. On the other, the uni-tasking refuseniks who insist that multitaskers are deluding themselves, and that focus is essential. Who is right?

The ‘cognitive cost’

There is ample evidence in favour of the proposition that we should focus on one thing at a time. Consider a study led by David Strayer, a psychologist at the University of Utah. In 2006, Strayer and his colleagues used a high-fidelity driving simulator to compare the performance of drivers who were chatting on a mobile phone to drivers who had drunk enough alcohol to be at the legal blood-alcohol limit in the US. Chatting drivers didn’t adopt the aggressive, risk-taking style of drunk drivers but they were unsafe in other ways. They took much longer to respond to events outside the car, and they failed to notice a lot of the visual cues around them. Strayer’s famous conclusion: driving while using a mobile phone is as dangerous as driving while drunk.

Less famous was Strayer’s finding that it made no difference whether the driver was using a handheld or hands-free phone. The problem with talking while driving is not a shortage of hands. It is a shortage of mental bandwidth.

Yet this discovery has made little impression either on public opinion or on the law. In the United Kingdom, for example, it is an offence to use a hand-held phone while driving but perfectly legal if the phone is used hands-free. We’re happy to acknowledge that we only have two hands but refuse to admit that we only have one brain.

Another study by Strayer, David Sanbonmatsu and others, suggested that we are also poor judges of our ability to multitask. The subjects who reported doing a lot of multitasking were also the ones who performed poorly on tests of multitasking ability. They systematically overrated their ability to multitask and they displayed poor impulse control. In other words, wanting to multitask is a good sign that you should not be multitasking.

We may not immediately realise how multitasking is hampering us. The first time I took to Twitter to comment on a public event was during a televised prime-ministerial debate in 2010. The sense of buzz was fun; I could watch the candidates argue and the twitterati respond, compose my own 140-character profundities and see them being shared. I felt fully engaged with everything that was happening. Yet at the end of the debate I realised, to my surprise, that I couldn’t remember anything that Brown, Cameron and Clegg had said.

A study conducted at UCLA in 2006 suggests that my experience is not unusual. Three psychologists, Karin Foerde, Barbara Knowlton and Russell Poldrack, recruited students to look at a series of flashcards with symbols on them, and then to make predictions based on patterns they had recognised. Some of these prediction tasks were done in a multitasking environment, where the students also had to listen to low- and high-pitched tones and count the high-pitched ones. You might think that making predictions while also counting beeps was too much for the students to handle. It wasn’t. They were equally competent at spotting patterns with or without the note-counting task.

But here’s the catch: when the researchers then followed up by asking more abstract questions about the patterns, the cognitive cost of the multitasking became clear. The students struggled to answer questions about the predictions they’d made in the multitasking environment. They had successfully juggled both tasks in the moment — but they hadn’t learnt anything that they could apply in a different context.

That’s an unnerving discovery. When we are sending email in the middle of a tedious meeting, we may nevertheless feel that we’re taking in what is being said. A student may be confident that neither Snapchat nor the live football is preventing them taking in their revision notes. But the UCLA findings suggest that this feeling of understanding may be an illusion and that, later, we’ll find ourselves unable to remember much, or to apply our knowledge flexibly. So, multitasking can make us forgetful — one more way in which multitaskers are a little bit like drunks.

Early multitaskers

All this is unnerving, given that the modern world makes multitasking almost inescapable. But perhaps we shouldn’t worry too much. Long before multitasking became ubiquitous, it had a long and distinguished history.

In 1958, a young psychologist named Bernice Eiduson embarked on a long-term research project — so long-term, in fact, that Eiduson died before it was completed. Eiduson studied the working methods of 40 scientists, all men. She interviewed them periodically over two decades and put them through various psychological tests. Some of these scientists found their careers fizzling out, while others went on to great success. Four won Nobel Prizes and two others were widely regarded as serious Nobel contenders. Several more were invited to join the National Academy of Sciences.

After Eiduson died, some of her colleagues published an analysis of her work. These colleagues, Robert Root-Bernstein, Maurine Bernstein and Helen Garnier, wanted to understand what determined whether a scientist would enjoy a long and productive career: a combination of genius and longevity.

There was no clue in the interviews or the psychological tests. But looking at the early publication record of these scientists — their first 100 published research papers — the researchers discovered a pattern: the top scientists were constantly changing the focus of their research.

Over the course of these first 100 papers, the most productive scientists covered five different research areas and moved from one of these topics to another an average of 43 times. They would publish, and change the subject, publish again, and change the subject again. Since most scientific research takes an extended period of time, the subjects must have overlapped. The secret to a long and highly productive scientific career? It’s multitasking.

Charles Darwin thrived on spinning multiple plates. He began his first notebook on “transmutation of species” two decades before The Origin of Species was published. His A Biographical Sketch of an Infant was based on notes made after his son William was born; William was 37 by the time it was published. Darwin spent nearly 20 years working on climbing and insectivorous plants. And Darwin published a learned book on earthworms in 1881, just before his death. He had been working on it for 44 years. When two psychologists, Howard Gruber and Sara Davis, studied Darwin and other celebrated artists and scientists, they concluded that such overlapping interests were common.

Another team of psychologists, led by Mihaly Csikszentmihalyi, interviewed almost 100 exceptionally creative people, from the jazz pianist Oscar Peterson to the science writer Stephen Jay Gould to the double Nobel laureate physicist John Bardeen. Csikszentmihalyi is famous for developing the idea of “flow”, the blissful state of being so absorbed in a challenge that one loses track of time and sets all distractions to one side. Yet every one of Csikszentmihalyi’s interviewees made a practice of keeping several projects bubbling away simultaneously.

Just internet addiction?

If the word “multitasking” can apply to both Darwin and a teenager with a serious Instagram habit, there is probably some benefit in defining our terms. There are at least four different things we might mean when we talk about multitasking. One is genuine multitasking: patting your head while rubbing your stomach; playing the piano and singing; farting while chewing gum. Genuine multitasking is possible, but at least one of the tasks needs to be so practised as to be done without thinking.

Then there’s the challenge of creating a presentation for your boss while also fielding phone calls for your boss and keeping an eye on email in case your boss wants you. This isn’t multitasking in the same sense. A better term is task switching, as our attention flits between the presentation, the telephone and the inbox. A great deal of what we call multitasking is in fact rapid task switching.

Task switching is often confused with a third, quite different activity — the guilty pleasure of disappearing down an unending click-hole of celebrity gossip and social media updates. There is a difference between the person who reads half a page of a journal article, then stops to write some notes about a possible future project, then goes back to the article — and someone who reads half a page of a journal article before clicking on bikini pictures for the rest of the morning. “What we’re often calling multitasking is in fact internet addiction,” says Shelley Carson, a psychologist and author of Your Creative Brain. “It’s a compulsive act, not an act of multitasking.”

A final kind of multitasking isn’t a way of getting things done but simply the condition of having a lot of things to do. The car needs to be taken in for a service. Your tooth is hurting. The nanny can’t pick up the kids from school today. There’s a big sales meeting to prepare for tomorrow, and your tax return is due next week. There are so many things that have to be done, so many responsibilities to attend to. Having a lot of things to do is not the same as doing them all at once. It’s just life. And it is not necessarily a stumbling block to getting things done — as Bernice Eiduson discovered as she tracked scientists on their way to their Nobel Prizes.

The fight for focus

These four practices — multitasking, task switching, getting distracted and managing multiple projects — all fit under the label “multitasking”. This is not just because of a simple linguistic confusion. The versatile networked devices we use tend to blur the distinction, serving us as we move from task to task while also offering an unlimited buffet of distractions. But the different kinds of multitasking are linked in other ways too. In particular, the highly productive practice of having multiple projects invites the less-than-productive habit of rapid task switching.

To see why, consider a story that psychologists like to tell about a restaurant near Berlin University in the 1920s. (It is retold in Willpower, a book by Roy Baumeister and John Tierney.) The story has it that when a large group of academics descended upon the restaurant, the waiter stood and calmly nodded as each new item was added to their complicated order. He wrote nothing down, but when he returned with the food his memory had been flawless. The academics left, still talking about the prodigious feat; but when one of them hurried back to retrieve something he’d left behind, the waiter had no recollection of him. How could the waiter have suddenly become so absent-minded? “Very simple,” he said. “When the order has been completed, I forget it.”

One member of the Berlin school was a young experimental psychologist named Bluma Zeigarnik. Intrigued, she demonstrated that people have a better recollection of uncompleted tasks. This is called the “Zeigarnik effect”: when we leave things unfinished, we can’t quite let go of them mentally. Our subconscious keeps reminding us that the task needs attention.

The Zeigarnik effect may explain the connection between facing multiple responsibilities and indulging in rapid task switching. We flit from task to task to task because we can’t forget about all of the things that we haven’t yet finished. We flit from task to task to task because we’re trying to get the nagging voices in our head to shut up.

Of course, there is much to be said for “focus”. But there is much to be said for copperplate handwriting, too, and for having a butler. The world has moved on. There’s something appealing about the Hemingwrite and the hotel room that will make the internet go away, but also something futile.

It is probably not true that Facebook is all that stands between you and literary greatness. And in most office environments, the Hemingwrite is not the tool that will win you promotion. You are not Ernest Hemingway, and you do not get to simply ignore emails from your colleagues.

If focus is going to have a chance, it’s going to have to fight an asymmetric war. Focus can only survive if it can reach an accommodation with the demands of a multitasking world.

Loops and lists

The word “multitasking” wasn’t applied to humans until the 1990s, but it has been used to describe computers for half a century. According to the Oxford English Dictionary, it was first used in print in 1966, when the magazine Datamation described a computer capable of appearing to perform several operations at the same time.

Just as with humans, computers typically create the illusion of multitasking by switching tasks rapidly. Computers perform the switching more quickly, of course, and they don’t take 20 minutes to get back on track after an interruption.

Nor does a computer fret about what is not being done. While rotating a polygon and sending text to the printer, it feels no guilt that the mouse has been left unchecked for the past 16 milliseconds. The mouse’s time will come. Being a computer means never having to worry about the Zeigarnik effect.

Is there a lesson in this for distractible sacks of flesh like you and me? How can we keep a sense of control despite the incessant guilt of all the things we haven’t finished?

“Whenever you say to someone, ‘I’ll get back to you about that’, you just opened a loop in your brain,” says David Allen. Allen is the author of a cult productivity book called Getting Things Done. “That loop will keep spinning until you put a placeholder in a system you can trust.”

Modern life is always inviting us to open more of those loops. It isn’t necessarily that we have more work to do, but that we have more kinds of work that we ought to be doing at any given moment. Tasks now bleed into each other unforgivingly. Whatever we’re doing, we can’t escape the sense that perhaps we should be doing something else. It’s these overlapping possibilities that take the mental toll.

The principle behind Getting Things Done is simple: close the open loops. The details can become rather involved but the method is straightforward. For every single commitment you’ve made to yourself or to someone else, write down the very next thing you plan to do. Review your lists of next actions frequently enough to give you confidence that you won’t miss anything.

This method has a cult following, and practical experience suggests that many people find it enormously helpful — including me (see below). Only recently, however, did the psychologists E J Masicampo and Roy Baumeister find some academic evidence to explain why people find relief by using David Allen’s system. Masicampo and Baumeister found that you don’t need to complete a task to banish the Zeigarnik effect. Making a specific plan will do just as well. Write down your next action and you quiet that nagging voice at the back of your head. You are outsourcing your anxiety to a piece of paper.

A creative edge?

It is probably a wise idea to leave rapid task switching to the computers. Yet even frenetic flipping between Facebook, email and a document can have some benefits alongside the costs.

The psychologist Shelley Carson and her student Justin Moore recently recruited experimental subjects for a test of rapid task switching. Each subject was given a pair of tasks to do: crack a set of anagrams and read an article from an academic journal. These tasks were presented on a computer screen, and for half of the subjects they were presented sequentially — first solve the anagrams, then read the article. For the other half of the experimental group, the computer switched every two-and-a-half minutes between the anagrams and the journal article, forcing the subjects to change mental gears many times.

Unsurprisingly, task switching slowed the subjects down and scrambled their thinking. They solved fewer anagrams and performed poorly on a test of reading comprehension when forced to refocus every 150 seconds.

But the multitasking treatment did have a benefit. Subjects who had been task switching became more creative. To be specific, their scores on tests of “divergent” thinking improved. Such tests ask subjects to pour out multiple answers to odd questions. They might be asked to think of as many uses as possible for a rolling pin or to list all the consequences they could summon to mind of a world where everyone has three arms. Involuntary multitaskers produced a greater volume and variety of answers, and their answers were more original too.

“It seems that switching back and forth between tasks primed people for creativity,” says Carson, who is an adjunct professor at Harvard. The results of her work with Moore have not yet been published, and one might reasonably object that such tasks are trivial measures of creativity. Carson responds that scores on these laboratory tests of divergent thinking are correlated with substantial creative achievements such as publishing a novel, producing a professional stage show or creating an award-winning piece of visual art. Those who insist that great work can only be achieved through superhuman focus should think long and hard about this discovery.

Carson and colleagues have found an association between significant creative achievement and a trait psychologists term “low latent inhibition”. Latent inhibition is the filter that all mammals have that allows them to tune out apparently irrelevant stimuli. It would be crippling to listen to every conversation in the open-plan office and the hum of the air conditioning, while counting the number of people who walk past the office window. Latent inhibition is what saves us from having to do so. These subconscious filters let us walk through the world without being overwhelmed by all the different stimuli it hurls at us.

And yet people whose filters are a little bit porous have a big creative edge. Think on that, uni-taskers: while you busily try to focus on one thing at a time, the people who struggle to filter out the buzz of the world are being reviewed in The New Yorker.

“You’re letting more information into your cognitive workspace, and that information can be consciously or unconsciously combined,” says Carson. Two other psychologists, Holly White and Priti Shah, found a similar pattern for people suffering from attention deficit hyperactivity disorder (ADHD).

It would be wrong to romanticise potentially disabling conditions such as ADHD. All these studies were conducted on university students, people who had already demonstrated an ability to function well. But their conditions weren’t necessarily trivial — to participate in the White/Shah experiment, students had to have a clinical diagnosis of ADHD, meaning that their condition was troubling enough to prompt them to seek professional help.

It’s surprising to discover that being forced to switch tasks can make us more creative. It may be still more surprising to realise that in an age where we live under the threat of constant distraction, people who are particularly prone to being distracted are flourishing creatively.

Perhaps we shouldn’t be entirely surprised. It’s easier to think outside the box if the box is full of holes. And it’s also easier to think outside the box if you spend a lot of time clambering between different boxes. “The act of switching back and forth can grease the wheels of thought,” says John Kounios, a professor of psychology at Drexel University.

Kounios, who is co-author of The Eureka Factor, suggests that there are at least two other potentially creative mechanisms at play when we switch between tasks. One is that the new task can help us forget bad ideas. When solving a creative problem, it’s easy to become stuck because we think of an incorrect solution but simply can’t stop returning to it. Doing something totally new induces “fixation forgetting”, leaving us free to find the right answer.

Another is “opportunistic assimilation”. This is when the new task prompts us to think of a solution to the old one. The original Eureka moment is an example.

As the story has it, Archimedes was struggling with the task of determining whether a golden wreath truly was made of pure gold without damaging the ornate treasure. The solution was to determine whether the wreath had the same volume as a pure gold ingot with the same mass; this, in turn, could be done by submerging both the wreath and the ingot to see whether they displaced the same volume of water.

This insight, we are told, occurred to Archimedes while he was having a bath and watching the water level rise and fall as he lifted himself in and out. And if solving such a problem while having a bath isn’t multitasking, then what is?

Tim Harford is an FT columnist. His latest book is ‘The Undercover Economist Strikes Back’. Twitter: @TimHarford

Six ways to be a master of multitasking

1. Be mindful

“The ideal situation is to be able to multitask when multitasking is appropriate, and focus when focusing is important,” says psychologist Shelley Carson. Tom Chatfield, author of Live This Book, suggests making two lists, one for activities best done with internet access and one for activities best done offline. Connecting and disconnecting from the internet should be deliberate acts.

2. Write it down

The essence of David Allen’s Getting Things Done is to turn every vague guilty thought into a specific action, to write down all of the actions and to review them regularly. The point, says Allen, is to feel relaxed about what you’re doing — and about what you’ve decided not to do right now — confident that nothing will fall through the cracks.

3. Tame your smartphone

The smartphone is a great servant and a harsh master. Disable needless notifications — most people don’t need to know about incoming tweets and emails. Set up a filing system within your email so that when a message arrives that requires a proper keyboard to answer — ie 50 words or more — you can move that email out of your inbox and place it in a folder where it will be waiting for you when you fire up your computer.

4. Focus in short sprints

The “Pomodoro Technique” — named after a kitchen timer — alternates focusing for 25 minutes and breaking for five minutes, across two-hour sessions. Productivity guru Merlin Mann suggests an “email dash”, where you scan email and deal with urgent matters for a few minutes each hour. Such ideas let you focus intensely while also switching between projects several times a day.

5. Procrastinate to win

If you have several interesting projects on the go, you can procrastinate over one by working on another. (It worked for Charles Darwin.) A change is as good as a rest, they say — and as psychologist John Kounios explains, such task switching can also unlock new ideas.

6. Cross-fertilise

“Creative ideas come to people who are interdisciplinary, working across different organisational units or across many projects,” says author and research psychologist Keith Sawyer. (Appropriately, Sawyer is also a jazz pianist, a former management consultant and a sometime game designer for Atari.) Good ideas often come when your mind makes unexpected connections between different fields.

Tim Harford’s To-Do Lists

David Allen’s Getting Things Done system — or GTD — has reached the status of a religion among some productivity geeks. At its heart, it’s just a fancy to-do list, but it’s more powerful than a regular list because it’s comprehensive, specific and designed to prompt you when you need prompting. Here’s how I make the idea work for me.

Write everything down. I use Google Calendar for appointments and an electronic to-do list called Remember the Milk, plus an ad hoc daily list on paper. The details don’t matter. The principle is never to carry a mental commitment around in your head.

Make the list comprehensive. Mine currently has 151 items on it. (No, I don’t memorise the number. I just counted.)

Keep the list fresh. The system works its anxiety-reducing magic best if you trust your calendar and to-do list to remind you when you need reminding. I spend about 20 minutes once a week reviewing the list to note incoming deadlines and make sure the list is neither missing important commitments nor cluttered with stale projects. Review is vital — the more you trust your list, the more you use it. The more you use it, the more you trust it.

List by context as well as topic. It’s natural to list tasks by topic or project — everything associated with renovating the spare room, for instance, or next year’s annual away-day. I also list them by context (this is easy on an electronic list). Things I can do when on a plane; things I can only do when at the shops; things I need to talk about when I next see my boss.

Be specific about the next action. If you’re just writing down vague reminders, the to-do list will continue to provoke anxiety. Before you write down an ill-formed task, take the 15 seconds required to think about exactly what that task is.

Written for and first published at ft.com.

21 Sep 01:50

Let’s be blunt: criticism works

by Tim Harford
Undercover Economist

‘If Amazon encourages its staff to be straight with each other about what should be fixed, so much the better’

Last month’s Amazon exposé in The New York Times evidently touched a white-collar nerve. Jodi Kantor and David Streitfeld described what might euphemistically be called an “intense” culture at Amazon’s headquarters in a feature article that promptly became the most commented-on story in the history of the newspaper’s website. As Kantor and Streitfeld told it, Amazon reduces grown men to tears and comes down hard on staff whose performance is compromised by distractions such as stillborn children, dying parents or simply having a family. Not for the first time, The Onion was 15 years ahead of the story with a December 2000 headline that bleakly satirised a certain management style: “There’s No ‘My Kid Has Cancer’ In Team.”

Mixed in with the grim anecdotes was a tale of a bracingly honest culture of criticism and self-criticism. (Rival firms, we are told, have been hiring Amazon workers after they’ve quit in exasperation, but are worried that these new hires may have become such aggressive “Amholes” that they won’t fit in anywhere else.)

At Amazon, performance reviews seem alarmingly blunt. One worker’s boss reeled off a litany of unachieved goals and inadequate skills. As the stunned recipient steeled himself to be fired, he was astonished when his superior announced, “Congratulations, you’re being promoted,” and gave him a hug.

It is important to distinguish between a lack of compassion and a lack of tact. It’s astonishing how often we pass up the chance to give or receive useful advice. If Amazon encourages its staff to be straight with each other about what should be fixed, so much the better.

We call workplace comments “feedback”. This is an ironic word to borrow from engineering, because while feedback in a physical system is automatic, with a clear link between cause and effect, feedback in a corporate environment is fraught with emotion and there is rarely a clear link between what was done and what is said about it.

The story of the Amazon worker who thought he was about to be fired is instructive. A list of goals not yet accomplished and skills that need improving is actually useful. Yet we’re so accustomed to receiving uninformative compliments — well done, good job — that a specific list sounds like grounds for dismissal.

Consider the contrast between a corporate manager and a sports coach. The manager usually wants to placate workers and avoid awkward confrontations. As a result, comments will be pleasant but too woolly to be of much use. The sports coach is likely to be far more specific: maintain lane discipline; straighten your wrist; do fewer repetitions with heavier weights. Being positive or negative is beside the point. What matters is concrete advice about how to do better.

A similar problem besets meetings. On the surface these group discussions aim at reaching a good decision but people may care more about getting along. People who like each other may find it harder to have sensible conversations about hard topics.

In the mid-1990s, Brooke Harrington, a sociologist, made a study of Californian investment clubs, where people joined together to research possible stock-market investments, debate their merits and invest as a collective enterprise. (The results were published in a book, Pop Finance.) Harrington found a striking distinction between clubs that brought together friends and those with no such social ties.

The clubs made up of strangers made much better investment decisions and, as a fly on the wall, Harrington could see why. These clubs had open disagreements about which investments to make; tough decisions were put to a vote; people who did shoddy research were called on it. All rather Amazonian. The friendlier clubs had a very different dynamic, because here people were more concerned with staying friends than with making good investments. Making good decisions often requires social awkwardness. People who are confused must be corrected. People who are free-riding must be criticised. Disagreements must be hashed out. The friendly groups often simply postponed hard decisions or passed over good opportunities because they would require someone to say out loud that someone else was wrong.

None of this should be a blanket defence of Amazon’s workplace culture — which, if the New York Times exposé is to be believed, sounds dreadful. Nor does it excuse being rude. But the problem is that honest criticism is so rare that it is often misinterpreted as rudeness.

In some contexts, letting politeness trump criticism can be fatal. From the operating theatre to the aeroplane cockpit, skilled professionals are being taught techniques such as “graded assertiveness” — or how to gently but firmly make your boss realise he is about to kill someone by mistake.

Scientists have wrestled with a similar challenge. As the great statistician Ronald Fisher once drily commented, “A scientific career is peculiar . . . its raison d’être is the increase of natural knowledge. Occasionally, therefore, an increase of natural knowledge occurs. But this is tactless, and feelings are hurt . . . it is inevitable that views previously expounded are shown to be either obsolete or false . . . some undoubtedly take it hard.”

Nobody likes to be told that they are wrong. But if there’s one thing worse than someone telling you that you are wrong, it’s no one telling you that you are wrong.

Written for and first published at ft.com.

26 May 01:00

Go Barefoot at the Gym to Get More Out of These Exercises

by Beth Skwarecki on Vitals, shared by Whitson Gordon to Lifehacker

Running isn’t the only exercise you can do barefoot. Try these other exercises sans shoes for better balance and stronger feet.

Going without shoes lets you work the muscles of your feet and potentially take advantage of more natural movement patterns, although runners have been warned to tread carefully to avoid injury from doing too much barefoot work all at once. With gym exercises, starting out small is probably a good idea too.

We also don’t want you running afoul of gym rules, so either do these at home or make sure your gym is okay with bare feet. Men’s Health suggests doing these moves without shoes:

  • Pushups, to give you an extra stretch in your toes and the soles of your feet. (If you love that stretch, also try the toes pose in yoga.)
  • Deadlifts, to bring you a little bit closer to the ground. If you do lots of deadlifts, you might already own low-soled shoes, but if not, kicking off your sneakers is an easy way to achieve the same thing.
  • Lunges, so you can stabilize yourself with your foot muscles rather than relying on the shape of your shoe.

Read more at Men’s Health on the benefits of going barefoot for these and other exercises.

4 Exercises You Should Always Do Barefoot | Men’s Health


26 May 00:51

Egalite Anyone??!!

by Salvador Dali

Maybe we don't even realise this... but in the way most societies are structured, and the way they have evolved, the scales of justice and equality have shifted. Education used to be the best equaliser for the have-nots. Education 15-30 years ago was generally free and it was really up to you to make the best of it. Education was the best way out of poverty, the best way to refine your worth as a person, and to make a better life for yourself and your family.

Look at the state of education now. It's hardly 'equal' anymore. I am sure some of the kids from "government schools" can still make it... but seriously, the pendulum has swung substantially to one side. While we can harp on how inept the government has been in shaping our education policy, and while some of us can afford to send our kids to private schools... what else can we do to equalise the situation? Or do you feel the situation does not need to be addressed? If we do nothing, the gap between the haves and have-nots will just widen and widen, and what you get will be more animosity and latent, combustible class conflicts.


As a country, we all should want to 'grow up together' and try not to leave anyone behind. Hence, where we can, we should do our bit to somehow equalise the situation. There are plenty of things we can do: go to orphanages and see if they need clusters of computers and/or volunteer tutoring in classwork; go back to your alma mater and see if there is anything the "old boys" can do to better the conditions for students in government schools; if you are rich enough, do as Koon Yew Yin does and sponsor a hundred kids' education a year; etc...

25 Apr 13:42

Your Password is Too Damn Short

by Jeff Atwood

I'm a little tired of writing about passwords. But like taxes, email, and pinkeye, they're not going away any time soon. Here's what I know to be true, and backed up by plenty of empirical data:

  • No matter what you tell them, users will always choose simple passwords.

  • No matter what you tell them, users will re-use the same password over and over on multiple devices, apps, and websites. If you are lucky they might use a couple passwords instead of the same one.

What can we do about this as developers?

  • Stop requiring passwords altogether, and let people log in with Google, Facebook, Twitter, Yahoo, or any other valid form of Internet driver's license that you're comfortable supporting. The best password is one you don't have to store.

  • Urge browsers to support automatic, built-in password generation and management. Ideally supported by the OS as well, but this requires cloud storage and everyone on the same page, and that seems most likely to me per-browser. Chrome, at least, is moving in this direction.

  • Nag users at the time of signup when they enter passwords that are …

    • Too short: UY7dFd

    • Lack sufficient entropy: aaaaaaaaa

    • Match common dictionary words: anteaters1

This is commonly done with an ambient password strength meter, which provides real time feedback as you type.

If you can't avoid storing the password – the first two items I listed above are both about avoiding the need for the user to select a 'new' password altogether – then showing an estimation of password strength as the user types is about as good as it gets.
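
As a rough illustration of that signup-time nagging (my sketch, not Atwood's), the three checks above might look something like this in Python. The 12-character floor anticipates the advice later in the post, and the entropy threshold and tiny word list are invented placeholders for whatever a real meter such as zxcvbn would use:

    import math
    import string

    # Placeholder list standing in for a real dictionary of common words and passwords.
    COMMON_WORDS = {"password", "letmein", "qwerty", "anteaters", "dragon"}

    def estimate_entropy_bits(password):
        """Crude estimate: length times log2 of the character pool implied by the
        character classes used. Repetition is not detected."""
        pool = 0
        if any(c.islower() for c in password):
            pool += 26
        if any(c.isupper() for c in password):
            pool += 26
        if any(c.isdigit() for c in password):
            pool += 10
        if any(c in string.punctuation for c in password):
            pool += len(string.punctuation)
        return len(password) * math.log2(pool) if pool else 0.0

    def signup_warnings(password):
        """Return the nag messages a signup form might show."""
        warnings = []
        if len(password) < 12:                    # the 12-character floor argued for below
            warnings.append("Too short: use at least 12 characters.")
        if estimate_entropy_bits(password) < 40:  # arbitrary illustrative threshold
            warnings.append("Lacks sufficient entropy.")
        if any(word in password.lower() for word in COMMON_WORDS):
            warnings.append("Matches a common dictionary word.")
        return warnings

    print(signup_warnings("UY7dFd"))      # too short, and low on entropy
    print(signup_warnings("aaaaaaaaa"))   # too short (this crude estimate misses the repetition)
    print(signup_warnings("anteaters1"))  # too short, and matches a dictionary word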

The easiest way to build a safe password is to make it long. All other things being equal, the law of exponential growth means a longer password is a better password. That's why I was always a fan of passphrases, though they are exceptionally painful to enter via touchscreen in our brave new world of mobile – and that is an increasingly critical flaw. But how short is too short?
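
To make the exponential point concrete (my arithmetic, not the post's): each extra character multiplies the search space by the size of the alphabet, so moving from 8 to 12 characters over a 62-character alphabet of upper case, lower case and digits multiplies the number of possibilities by 62^4, roughly 15 million.

    import math

    ALPHABET = 26 + 26 + 10   # uppercase + lowercase + digits

    for length in (8, 10, 12):
        keyspace = ALPHABET ** length
        print(f"{length} characters: {keyspace:.2e} possibilities (~2^{math.log2(keyspace):.0f})")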

When we built Discourse, I had to select an absolute minimum password length that we would accept. I chose a default of 8, based on what I knew from my speed hashing research. An eight character password isn't great, but as long as you use a reasonable variety of characters, it should be sufficiently resistant to attack.

By attack, I don't mean an attacker automating a web page or app to repeatedly enter passwords. There is some of this, for extremely common passwords, but that's unlikely to be a practical attack on many sites or apps, as they tend to have rate limits on how often and how rapidly you can try different passwords.

What I mean by attack is a high speed offline attack on the hash of your password, where an attacker gains access to a database of leaked user data. This kind of leak happens all the time. And it will continue to happen forever.

If you're really unlucky, the developers behind that app, service, or website stored the password in plain text. This thankfully doesn't happen too often any more, thanks to education efforts. Progress! But even if the developers did properly store a hash of your password instead of the actual password, you better pray they used a really slow, complex, memory hungry hash algorithm, like bcrypt. And that they selected a high number of iterations. Oops, sorry, that was written in the dark ages of 2010 and is now out of date. I meant to say scrypt. Yeah, scrypt, that's the ticket.

Then we're safe? Right? Let's see.

You might read this and think that a massive cracking array is something that's hard to achieve. I regret to inform you that building an array of, say, 24 consumer grade GPUs that are optimized for speed hashing, is well within the reach of the average law enforcement agency and pretty much any small business that can afford a $40k equipment charge. No need to buy when you can rent – plenty of GPU equipped cloud servers these days. Beyond that, imagine what a motivated nation-state could bring to bear. The mind boggles.

Even if you don't believe me (but you should), the offline fast attack scenario, which is much easier to achieve, was hardly any better at 37 minutes.

Perhaps you're a skeptic. That's great, me too. What happens when we try a longer random.org password on the massive cracking array?

  • 9 characters: 2 minutes
  • 10 characters: 2 hours
  • 11 characters: 6 days
  • 12 characters: 1 year
  • 13 characters: 64 years

The random.org generator is "only" uppercase, lowercase, and numbers. What if we add special characters, to keep Q*Bert happy?

  • 8 characters: 1 minute
  • 9 characters: 2 hours
  • 10 characters: 1 week
  • 11 characters: 2 years
  • 12 characters: 2 centuries

That's a bit better, but you can't really feel safe until the 12 character mark even with a full complement of uppercase, lowercase, numbers, and special characters.

It's unlikely that massive cracking scenarios will get any slower. While there is definitely a password length where all cracking attempts fall off an exponential cliff that is effectively unsurmountable, these numbers will only get worse over time, not better.

So after all that, here's what I came to tell you, the poor, beleaguered user:

Unless your password is at least 12 characters, you are vulnerable.

That should be the minimum password size you use on any service. Generate your password with some kind of offline generator, with diceware, or your own home-grown method of adding words and numbers and characters together – whatever it takes, but make sure your passwords are all at least 12 characters.

Now, to be fair, as I alluded to earlier all of this does depend heavily on the hashing algorithm that was selected. But you have to assume that every password you use will be hashed with the lamest, fastest hash out there. One that is easy for GPUs to calculate. There's a lot of old software and systems out there, and will be for a long, long time.

And for developers:

  1. Pick your new password hash algorithms carefully, and move all your old password hashing systems to much harder to calculate hashes. You need hashes that are specifically designed to be hard to calculate on GPUs, like scrypt.

  2. Even if you pick the "right" hash, you may be vulnerable if your work factor isn't high enough. Matasano recommends the following:

    • scrypt: N=2^14, r=8, p=1

    • bcrypt: cost=11

    • PBKDF2 with SHA256: iterations=86,000

    But those are just guidelines; you have to scale the hashing work to what's available and reasonable on your servers or devices. For example, we had a minor denial of service bug in Discourse where we allowed people to enter up to 20,000 character passwords in the login form, and calculating the hash on that took, uh … several seconds.
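
As a minimal sketch of what those guideline numbers look like in code (mine, not from the post), here is the scrypt variant using Python's hashlib.scrypt with N=2^14, r=8, p=1. It assumes a Python build linked against OpenSSL 1.1 or later; a real system would also record the work factors alongside the salt and hash so they can be raised later without breaking old accounts.

    import hashlib
    import hmac
    import os

    def hash_password(password, salt=None):
        """Derive a password hash with scrypt using the guideline work factors
        (N=2^14, r=8, p=1). Returns (salt, derived_key)."""
        if salt is None:
            salt = os.urandom(16)        # fresh random salt per password
        key = hashlib.scrypt(
            password.encode("utf-8"),
            salt=salt,
            n=2**14, r=8, p=1,
            maxmem=64 * 1024 * 1024,     # headroom for the ~16 MiB this scrypt setting needs
            dklen=32,
        )
        return salt, key

    def verify_password(password, salt, expected_key):
        """Recompute the hash and compare in constant time."""
        _, key = hash_password(password, salt)
        return hmac.compare_digest(key, expected_key)

    salt, stored = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, stored))   # True
    print(verify_password("tr0ub4dor&3", salt, stored))                    # False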

Now if you'll excuse me, I need to go change my PayPal password.

20 Apr 01:04

Cigarettes, damn cigarettes and statistics

by Tim Harford

Correlation is not causation. Storks do not deliver children but larger houses have more room both for children and for storks... We should respect correlation but it is a clue to a deeper truth, not the end of our investigations.

Undercover Economist

We cannot rely on correlation alone. But insisting on absolute proof of causation is too exacting a standard

It is said that there is a correlation between the number of storks’ nests found on Danish houses and the number of children born in those houses. Could the old story about babies being delivered by storks really be true? No. Correlation is not causation. Storks do not deliver children but larger houses have more room both for children and for storks.

This much-loved statistical anecdote seems less amusing when you consider how it was used in a US Senate committee hearing in 1965. The expert witness giving testimony was arguing that while smoking may be correlated with lung cancer, a causal relationship was unproven and implausible. Pressed on the statistical parallels between storks and cigarettes, he replied that they “seem to me the same”.

The witness’s name was Darrell Huff, a freelance journalist beloved by generations of geeks for his wonderful and hugely successful 1954 book How to Lie with Statistics. His reputation today might be rather different had the proposed sequel made it to print. How to Lie with Smoking Statistics used a variety of stork-style arguments to throw doubt on the connection between smoking and cancer, and it was supported by a grant from the Tobacco Institute. It was never published, for reasons that remain unclear. (The story of Huff’s career as a tobacco consultant was brought to the attention of statisticians in articles by Andrew Gelman in Chance in 2012 and by Alex Reinhart in Significance in 2014.)

Indisputably, smoking causes lung cancer and various other deadly conditions. But the problematic relationship between correlation and causation in general remains an active area of debate and confusion. The “spurious correlations” compiled by Harvard law student Tyler Vigen and displayed on his website (tylervigen.com) should be a warning. Did you realise that consumption of margarine is strongly correlated with the divorce rate in Maine?

We cannot rely on correlation alone, then. But insisting on absolute proof of causation is too exacting a standard (arguably, an impossible one). Between those two extremes, where does the right balance lie between trusting correlations and looking for evidence of causation?

Scientists, economists and statisticians have tended to demand causal explanations for the patterns they see. It’s not enough to know that college graduates earn more money — we want to know whether the college education boosted their earnings, or if they were smart people who would have done well anyway. Merely looking for correlations was not the stuff of rigorous science.

But with the advent of “big data” this argument has started to shift. Large data sets can throw up intriguing correlations that may be good enough for some purposes. (Who cares why price cuts are most effective on a Tuesday? If it’s Tuesday, cut the price.) Andy Haldane, chief economist of the Bank of England, recently argued that economists might want to take mere correlations more seriously. He is not the first big-data enthusiast to say so.

This brings us back to smoking and cancer. When the British epidemiologist Richard Doll first began to suspect the link in the late 1940s, his analysis was based on a mere correlation. The causal mechanism was unclear, as most of the carcinogens in tobacco had not been identified; Doll himself suspected that lung cancer was caused by fumes from tarmac roads, or possibly cars themselves.

Doll’s early work on smoking and cancer with Austin Bradford Hill, published in 1950, was duly criticised in its day as nothing more than a correlation. The great statistician Ronald Fisher repeatedly weighed into the argument in the 1950s, pointing out that it was quite possible that cancer caused smoking — after all, precancerous growths irritated the lung. People might smoke to soothe that irritation. Fisher also observed that some genetic predisposition might cause both lung cancer and a tendency to smoke. (Another statistician, Joseph Berkson, observed that people who were tough enough to resist adverts and peer pressure were also tough enough to resist lung cancer.)

Hill and Doll showed us that correlation should not be dismissed too easily. But they also showed that we shouldn’t give up on the search for causal explanations. The pair painstakingly continued their research, and evidence of a causal association soon mounted.

Hill and Doll took a pragmatic approach in the search for causation. For example, is there a dose-response relationship? Yes: heavy smokers are more likely to suffer from lung cancer. Does the timing make sense? Again, yes: smokers develop cancer long after they begin to smoke. This contradicts Fisher’s alternative hypothesis that people self-medicate with cigarettes in the early stages of lung cancer. Do multiple sources of evidence add up to a coherent picture? Yes: when doctors heard about what Hill and Doll were finding, many of them quit smoking, and it became possible to see that the quitters were at lower risk of lung cancer. We should respect correlation but it is a clue to a deeper truth, not the end of our investigations.

It’s not clear why Huff and Fisher were so fixated on the idea that the growing evidence on smoking was a mere correlation. Both of them were paid as consultants by the tobacco industry and some will believe that the consulting fees caused their scepticism. It seems just as likely that their scepticism caused the consulting fees. We may never know.

Written for and first published at ft.com.

19 Apr 07:06

Must Watch This - Little Big Master

by Salvador Dali
I did not even plan to see this movie, but there were a lot of dogs at the pet shop and I had to drop my dog off for grooming, so I got a couple of hours to kill. I have always liked Miriam Yeung and Louis Koo, but the synopsis indicated that it was a heartfelt small movie about 5 kids and a kindergarten about to be shut down. Wasn't expecting much, but no choice ...

This movie was simply spectacularly written, tight and no-nonsense. It was crafted so well that you felt deeply for each character by the time it was only halfway through. But boy, you do cry ... yet those were somehow "good tears": tears of empathy, hope and deep human kindness, all wrapped in a rare level of honesty not seen in most movies.

It has to be the most meaningful movie I have seen in a very, very long time. Everyone from 6 to 106 should watch this. It brings out the best in us; it explores how what we all need is to open up and lend a helping hand, and how everyone, no matter how small, has their own story to tell.


Amazingly, the wonderful storyline was based closely on real-life events. You will see pictures of all the "real people" with the actors playing them during the credits. The kindergarten is still going today and its enrolment has risen from 5 to nearly 60.

Young ones will benefit enormously from watching the film, as they will be asking questions which all parents will gladly answer, since they all point towards the goodness in humankind.

A+


08 Apr 01:53

Lobsters, the FTC and the consequence of silence

by Dan McCrum

If it wants to, the Federal Government will throw you in jail. Thousands of statutes exist, unknowable yet capriciously enforceable by prosecutors armed with discretion and minimum sentencing guidelines. Years of prison time for packing lobsters in the wrong sort of box is just one of the most famous cautionary tales of such power. - https://ideatransfuser.wordpress.com/2013/04/25/from-lobster-importing-to-prison-a-story-of-violation/

If it wants to, the Federal Government will throw you in jail. Thousands of statutes exist, unknowable yet capriciously enforceable by prosecutors armed with discretion and minimum sentencing guidelines. Years of prison time for packing lobsters in the wrong sort of box is just one of the most famous cautionary tales of such power.

Yet, in half a century since pyramid schemes were identified as a menace, the US government has neglected to pass a specific anti-pyramid scheme statute. If the Federal Trade Commission wants to shut down a company, it has to build a civil case from scratch to show both pyramid behaviour and consumer harm.

So how, why and if the FTC chooses to build such a case will effectively demarcate the law. Which is why, once the Commission’s ongoing investigation of Herbalife is concluded, it is essential the government says something about what does, and does not, constitute a pyramid scheme.

Continue reading: Lobsters, the FTC and the consequence of silence
05 Apr 12:58

Highs and lows of the minimum wage

by Tim Harford

As with the Equal Pay Act, economically literate commentators feared trouble, and for much the same reason: the minimum wage would destroy jobs and harm those it was intended to help. We would face the tragic situation of employers who would only wish to hire at a low wage, workers who would rather have poorly paid work than no work at all, and the government outlawing the whole affair.

And yet, the minimum wage does not seem to have destroyed many jobs...One explanation of the puzzle is that higher wages may attract more committed workers, with higher morale, better attendance and lower turnover. On this view, the minimum wage pushed employers into doing something they might have been wise to do anyway. To the extent that it imposed net costs on employers, they were small enough to make little difference to their appetite for hiring.

Undercover Economist

‘The lesson of all this is that the economy is complicated and textbook economic logic alone will get us only so far’

In 1970, Labour’s employment secretary Barbara Castle shepherded the Equal Pay Act through parliament, with the promise that women would be paid as much as men when doing equivalent jobs. The political spark for the Act came from a famous strike by women at Ford’s Dagenham plant, and the moral case is self-evident.

The economics, however, looked worrisome. The Financial Times wrote a series of editorials praising “the principle” of equality but nervous about the practicalities. In September 1969, for example, an FT editorial observed that “if the principle of equal pay were enforced too rigorously, employers might often prefer to employ men”; and the day after the Act came into force on December 29 1975, the paper noted a new era “which many women may come to regret”.

The economic logic for these concerns is straightforward. Whether because of prejudice or some real difference in productivity, employers were willing to pay more for men than for women. That inevitably meant that if a new law artificially raised women’s salaries, women would struggle to find work at those higher salaries.

The law certainly did raise women’s salaries. Looking at the simple headline measure of hourly wages, women’s pay has gradually risen over the decades as a percentage of men’s, although it remains lower. Typically, this process of catch-up has been gradual but, between 1970 and 1975, the years when the Equal Pay Act was being introduced, the gap narrowed sharply.

Did this legal push to women’s pay cause joblessness, as some feared? No. Women have steadily made up a larger and larger proportion of working people in the UK, and the Equal Pay Act seems to have had no impact on that trend whatsoever. If any effect can be discerned, it is that the proportion of women in the workforce increased slightly faster as the Act was being introduced; perhaps they were attracted by the higher salaries?

The lesson of all this is that the economy is complicated and textbook economic logic alone will get us only so far. The economist Alan Manning recently gave a public lecture at the London School of Economics, where he drew parallels between the Equal Pay Act and the minimum wage, pointing out that in both cases theoretical concerns were later dispelled by events.

The UK minimum wage took effect 16 years ago this week, on April 1 1999. As with the Equal Pay Act, economically literate commentators feared trouble, and for much the same reason: the minimum wage would destroy jobs and harm those it was intended to help. We would face the tragic situation of employers who would only wish to hire at a low wage, workers who would rather have poorly paid work than no work at all, and the government outlawing the whole affair.

And yet, the minimum wage does not seem to have destroyed many jobs — or at least, not in a way that can be discerned by slicing up the aggregate data. (One exception: there is some evidence that in care homes, where large numbers of people are paid the minimum wage, employment has been dented.)

The general trend seems a puzzling suspension of the law of supply and demand. One explanation of the puzzle is that higher wages may attract more committed workers, with higher morale, better attendance and lower turnover. On this view, the minimum wage pushed employers into doing something they might have been wise to do anyway. To the extent that it imposed net costs on employers, they were small enough to make little difference to their appetite for hiring.

An alternative response is that the data are noisy and don’t tell us much, so we should stick to basic economic reasoning. But do we give the data a fair hearing?

A fascinating survey reported in the most recent World Development Report showed World Bank staff some numbers and asked for an interpretation. In some cases, the staff were told that the data referred to the effectiveness of a skin cream; in other cases, they were told that the data were about whether minimum wages reduced poverty.

The same numbers should lead to the same conclusions but the World Bank staff had much more trouble drawing the statistically correct inference when they had been told the data were about minimum wages. It can be hard to set aside our preconceptions.

The principle of the minimum wage, like the principle of equal pay for women, is no longer widely questioned. But the appropriate level of the minimum wage needs to be the subject of continued research. In the UK, the minimum wage is set with advice from the Low Pay Commission, and it has risen faster than both prices and average wages. A recently announced rise, due in October, is well above the rate of inflation. There must be a level that would be counterproductively high; the question is what that level is.

And we should remember that ideological biases affect both sides of the political divide. In response to Alan Manning’s lecture, Nicola Smith of the Trades Union Congress looked forward to more ambition from the Low Pay Commission in raising the minimum wage “in advance of the evidence”, or using “the evidence more creatively”. I think British politics already has more than enough creativity with the evidence.

Written for and first published at ft.com.

08 Feb 12:55

Making a lottery out of the law

by Tim Harford
Undercover Economist

‘The cure for “bad statistics” isn’t “no statistics” — it’s using statistical tools properly’

The chances of winning the UK’s National Lottery are absurdly low — almost 14 million to one against. When you next read that somebody has won the jackpot, should you conclude that he tampered with the draw? Surely not. Yet this line of obviously fallacious reasoning has led to so many shaky convictions that it has acquired a forensic nickname: “the prosecutor’s fallacy”.

Consider the awful case of Sally Clark. After her two sons each died in infancy, she was accused of their murder. The jury was told by an expert witness that the chance of both children in the same family dying of natural causes was 73 million to one against. That number may have weighed heavily on the jury when it convicted Clark in 1999.

As the Royal Statistical Society pointed out after the conviction, a tragic coincidence may well be far more likely than that. The figure of 73 million to one assumes that cot deaths are independent events. Since siblings share genes, and bedrooms too, it is quite possible that both children may be at risk of death for the same (unknown) reason.

A second issue is that probabilities may be sliced up in all sorts of ways. Clark’s sons were said to be at lower risk of cot death because she was a middle-class non-smoker; this factor went into the 73-million-to-one calculation. But they were at higher risk because they were male, and this factor was omitted. Which factors should be included and which should be left out?

The most fundamental error would be to conclude that if the chance of two cot deaths in one household is 73 million to one against, then the probability of Clark’s innocence was also 73 million to one against. The same reasoning could jail every National Lottery winner for fraud.

Lottery wins are rare but they happen, because lots of people play the lottery. Lots of people have babies too, which means that unusual, awful things will sometimes happen to those babies. The court’s job is to weigh up the competing explanations, rather than musing in isolation that one explanation is unlikely. Clark served three years for murder before eventually being acquitted on appeal; she drank herself to death at the age of 42.

Given this dreadful case, one might hope that the legal system would school itself on solid statistical reasoning. Not all judges seem to agree: in 2010, the UK Court of Appeal ruled against the use of Bayes’ Theorem as a tool for evaluating how to put together a collage of evidence.

As an example of Bayes’ Theorem, consider a local man who is stopped at random because he is wearing a distinctive hat beloved of the neighbourhood gang of drug dealers. Ninety-eight per cent of the gang wear the hat but only 5 per cent of the local population do. Only one in 1,000 locals is in the gang. Given only this information, how likely is the man to be a member of the gang? The answer is about 2 per cent. If you randomly stop 1,000 people, you would (on average) stop one gang member and 50 hat-wearing innocents.
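
For readers who want to see the arithmetic, that 2 per cent falls straight out of Bayes' theorem applied to the numbers in the example; the following is a quick sketch, not part of the original column.

    p_gang = 1 / 1000         # prior: one local in a thousand is in the gang
    p_hat_given_gang = 0.98   # 98 per cent of gang members wear the hat
    p_hat_given_other = 0.05  # 5 per cent of everyone else wears it too

    # Total probability of seeing the hat, then Bayes' theorem for the posterior.
    p_hat = p_hat_given_gang * p_gang + p_hat_given_other * (1 - p_gang)
    p_gang_given_hat = p_hat_given_gang * p_gang / p_hat

    print(round(p_gang_given_hat, 3))   # 0.019, roughly 2 per cent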

We should ask some searching questions about the numbers in my example. Who says that 5 per cent of the local population wear the special hat? What does it really mean to say that the man was stopped “at random”, and do we believe that? The Court of Appeal may have felt it was spurious to put numbers on inherently imprecise judgments; numbers can be deceptive, after all. But the cure for “bad statistics” isn’t “no statistics” — it’s using statistical tools properly.

Professor Colin Aitken, the Royal Statistical Society’s lead man on statistics and the law, comments that Bayes’ Theorem “is just a statement of logic. It’s irrefutable.” It makes as much sense to forbid it as it does to forbid arithmetic.

 . . . 

These statistical missteps aren’t a uniquely British problem. Lucia de Berk, a paediatric nurse, was thought to be the most prolific serial killer in the history of the Netherlands after a cluster of deaths occurred while she was on duty. The court was told that the chance this was a coincidence was 342 million to one against. That’s wrong: statistically, there seems to be nothing conclusive at all about this cluster. (The death toll at the unit in question was actually higher before de Berk started working there.)

De Berk was eventually cleared on appeal after six years behind bars; Richard Gill, a British statistician based in the Netherlands, took a prominent role in the campaign for her release. Professor Gill has now turned his attention to the case of Ben Geen, a British nurse currently serving a 30-year sentence for murdering patients in Banbury, Oxfordshire. In his view, Geen’s case is a “carbon copy” of the de Berk one.

Of course, it is the controversial cases that grab everyone’s attention, so it is difficult to know whether statistical blunders in the courtroom are commonplace or rare, and whether they are decisive or merely part of the cut and thrust of legal argument. But I have some confidence in the following statement: a little bit of statistical education for the legal profession would go a long way.

Written for and first published at ft.com.

04 Feb 06:31

Michael Masters on speculation, oil, and investment

by Izabella Kaminska

Back in May 2008, nobody — especially regulators — had a clue about what was causing crude oil prices to spike to $100-per-barrel levels, and almost everyone was inclined to blame either “China” or “speculators” or some combination of the two.

But Michael Masters, a portfolio manager at Masters Capital Management, had a simple proposition. In the Senate committee hearings organised to figure out exactly what was going on, Masters testified that it was his belief that a new class of investor — one he dubbed the passive “index speculator” — had bulldozed its way into the market and distorted the usual price discovery process.

Continue reading: Michael Masters on speculation, oil, and investment
02 Feb 14:49

Four Basic Writing Principles You Can Use in Everyday Life

by Herbert Lui

Show, Don't Tell - use techniques like description and dialogue. Similarly, when you plan to share an idea, thought, or feeling with someone, think about how you can show it to them.

Simplicity Is Better than Flowery - You'd be more effective communicating your ideas if you present them clearly, one at a time

Read a Lot, Learn from Everything - You can acquire information from the events unfolding in front of you, from conversations, from podcasts, and many other sources. So "read" those carefully as well.

Focus by Learning What Not to Do - Trying to do everything is futile. Instead, you have to learn which parts of your life are essential, and which ones you don't have to put as much time or energy into. Whenever you say yes to something, you're also implicitly saying no to something else—either in the present or in the future.

Show other people your ideas to make stronger impressions. Avoid dressing up your language, and write like how you would talk. Learn from everything and expand your mind. Most importantly, buckle down and focus to get more important things done.

Writing starts way before you put letters to a page. It involves processes like critical thinking, communication, and creativity. Even if writing feels like pulling teeth, you can apply the principles of writing to many facets of your day-to-day life. Here's how.

Show, Don't Tell

Good writers use techniques like description and dialogue to show the reader what characters are thinking and feeling. For example, instead of telling the reader, "Jim was sad," a writer might describe how "Jenny saw Jim crying in the bathroom" or how "Jim walked with his shoulders slouched and head bowed" (Sidebar: I am clearly not a professional novelist).

Similarly, when you plan to share an idea, thought, or feeling with someone, think about how you can show it to them. For example, if you're about to thank someone, show them your gratitude by writing a letter, or a card, or expressing yourself through a gift, in addition to saying, "Thank you."

If you're trying to convince someone of something, even if you can't complete an entire task to express yourself, do a little bit of the work to get started so as to make a stronger impression on them. People will take your message more seriously when the evidence is right in front of them.

Simplicity Is Better than Flowery


Put the thesaurus away. Stop looking for synonyms in your word processor. Contrary to what you might think, longer words don't make you sound or look smarter. Author Stephen King writes in his memoir On Writing:

One of the really bad things you can do to your writing is to dress up the vocabulary, looking for long words because you're maybe a little bit ashamed of your short ones. This is like dressing up a household pet in evening clothes. The pet is embarrassed and the person who committed this act of premeditated cuteness should be even more embarrassed.

You've probably read someone's essay before where you cringed or frowned at their excessive language. No matter how well you think you integrate your fancy words into your essay or emails, other people will have similar reactions when you go thesaurus diving.

Keep this in mind as you move through life as well. You'd be more effective communicating your ideas if you present them clearly, one at a time—even for ones that seem more overwhelming, like relationship questions, job interview questions, or anything else. You don't need to reinvent the wheel as you approach each problem.

Read a Lot, Learn from Everything


You can learn something from everyone and everything. Sometimes, it's what not to do (and unfortunately, sometimes you're learning those lessons from yourself). Writer and Nobel Prize laureate William Faulkner suggests:

Read, read, read. Read everything—trash, classics, good and bad, and see how they do it. Just like a carpenter who works as an apprentice and studies the master. Read! You'll absorb it.

Read and explore all types of media in order to refine your perspective. This wider range of ideas also exposes your brain, a connection machine, to new nodes and types of information. You'll become more creative.

You can take this advice both literally and figuratively, as it expands beyond books. Reading is primarily about acquiring information, but you don't just acquire information from books and writing. You can acquire information from the events unfolding in front of you, from conversations, from podcasts, and many other sources. So "read" those carefully as well. Don't avoid small talk and light banter. Talk to as many different people as you like. Invest your time and money in seeing the world.

I used to only read cheerful books, and watch fun TV shows and movies. I didn't think it made sense to invest hours of my life into media that would leave me feeling glum. However, sadness, rage, and anxiety are all parts of the human experience. The events that cause these feelings could happen in your real life at any time. Experiencing these emotions is crucial to getting a better grasp on joy, peace, and excitement.

Focus by Learning What Not to Do


A large part of writing involves learning what not to say, or what to remove instead of what to include. Trying to do everything is futile. Instead, you have to learn which parts of your life are essential, and which ones you don't have to put as much time or energy into. Whenever you say yes to something, you're also implicitly saying no to something else—either in the present or in the future. Make sure that what you're giving up is worth it. Writer and filmmaker Susan Sontag writes in one of her diaries:

There is a great deal that either has to be given up or be taken away from you if you are going to succeed in writing a body of work.

This advice also applies at an individual productivity level—say no to other tasks and focus on one. Author Henry Miller understood the importance of singletasking, and recommends that writers "Work on one thing at a time until finished."

The thought of your team or office reading your work could paralyze you as you draft up an important email or memo. You're a busy person. Break your hesitation by channeling your inner Kurt Vonnegut, who advises: "Write to please just one person. If you open a window and make love to the world, so to speak, your story will get pneumonia." In a writing context, pretend you're writing to your best friend at work (but be sure to clean up anything too honest when you edit).


Show other people your ideas to make stronger impressions. Avoid dressing up your language, and write the way you would talk. Learn from everything and expand your mind. Most importantly, buckle down and focus to get more important things done.

Photos by Jaime González, Eric Danley, Pedro Ribeiro Simões, Magdalena Roeseler, and Chris Dlugosz.

02 Feb 14:02

This Tutorial Teaches You How to Properly Do a Handstand

by Herbert Lui


Some of us already have enough trouble moving around on our own two feet (don't sweat it, I've got two left feet as well). Nonetheless, if you want to master handstands, learning from this comprehensive guide will accelerate your progress and increase your likelihood of success.

As it turns out, doing a handstand isn't as straightforward as swinging up onto your hands and trying to balance. There are several elements you can work on, including wrist warm-ups, core strengthening, and rebalancing drills (among many others). The guide will also be useful for amateur dancers or gymnasts practicing exercises like handstands.

The Most Comprehensive Handstand Tutorial | Antranik.org via Reddit

Photo by Gareth Williams.

31 Jan 15:19

The power of saying no

by Tim Harford

It’s hard to say “no”. One reason is “hyperbolic discounting”. What this means is that the present moment is exaggerated in our thoughts. To say “yes” is to warm ourselves in a brief glow of immediate gratitude, heedless of the later cost.

A psychological tactic to get around this problem is to try to feel the pain of “yes” immediately, rather than at some point to be specified later. “If I had to do this today, would I agree to it?” It’s not a bad rule of thumb, since any future commitment, no matter how far away it might be, will eventually become an imminent problem.

There is a far broader economic principle at work in the judicious use of the word “no”. It’s the idea that everything has an opportunity cost. The insight here is that every time we say “yes” to a request, we are also saying “no” to anything else we might accomplish with the time. All those lessons about opportunity cost have taught me that every “no” to a request from an acquaintance is also a “yes” to my family.

Undercover Economist

‘Every time we say yes to a request, we are also saying no to anything else we might accomplish with the time’

Every year I seem to have the same resolution: say “no” more often. Despite my black belt in economics-fu, it’s an endless challenge. But economics does tell us a little about why “no” is such a difficult word, why it’s so important — and how to become better at saying it.

Let’s start with why it’s hard to say “no”. One reason is something we economists, with our love of simple, intuitive language, call “hyperbolic discounting”. What this means is that the present moment is exaggerated in our thoughts. When somebody asks, “Will you volunteer to be school governor?” it is momentarily uncomfortable to refuse, even if it will save much more trouble later. To say “yes” is to warm ourselves in a brief glow of immediate gratitude, heedless of the later cost.
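To put rough numbers on that intuition, here is a minimal sketch (my own illustration, not Harford's; the parameter values are arbitrary) of how hyperbolic discounting shrinks a far-off cost compared with standard exponential discounting:

    # Compare how a fixed future cost "feels" under exponential vs hyperbolic discounting.
    # Illustrative parameters only; the point is the shape, not the exact numbers.

    def exponential_discount(cost, weeks, rate=0.02):
        # Constant proportional discount per week.
        return cost / (1 + rate) ** weeks

    def hyperbolic_discount(cost, weeks, k=0.5):
        # Steep early discounting, long flat tail: distant costs feel almost free.
        return cost / (1 + k * weeks)

    full_cost = 10.0  # say, ten hours of unwelcome work
    for weeks in (0, 1, 4, 26):
        print(f"{weeks:>2} weeks away: exponential {exponential_discount(full_cost, weeks):.1f}h, "
              f"hyperbolic {hyperbolic_discount(full_cost, weeks):.1f}h")

Under the hyperbolic curve, a commitment six months away feels like a fraction of an hour's work, which is exactly why it is so easy to say "yes" to it today.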

A psychological tactic to get around this problem is to try to feel the pain of “yes” immediately, rather than at some point to be specified later. If only we could feel instantly and viscerally our eventual annoyance at having to keep our promises, we might make fewer foolish promises in the first place.

One trick is to ask, “If I had to do this today, would I agree to it?” It’s not a bad rule of thumb, since any future commitment, no matter how far away it might be, will eventually become an imminent problem.

Here’s a more extreme version of the same principle. Adopt a rule that no new task can be deferred: if accepted, it must be the new priority. Last come, first served. The immediate consequence is that no project may be taken on unless it’s worth dropping everything to work on it.

This is, of course, absurd. Yet there is a bit of mad genius in it, if I do say so myself. Anyone who sticks to the “last come, first served” rule will find their task list bracingly brief and focused.

There is a far broader economic principle at work in the judicious use of the word “no”. It’s the idea that everything has an opportunity cost. The opportunity cost of anything is whatever you had to give up to get it. Opportunity cost is one of those concepts in economics that seem simple but confuse everyone, including trained economists.

Consider the following puzzle, a variant of which was set by Paul J Ferraro and Laura O Taylor to economists at a major academic conference back in 2005. Imagine that you have a free ticket (which you cannot resell) to see Radiohead performing. But, by a staggering coincidence, you could also go to see Lady Gaga — there are tickets on sale for £40. You’d be willing to pay £50 to see Lady Gaga on any given night, and her concert is the best alternative to seeing Radiohead. Assume there are no other costs of seeing either gig. What is the opportunity cost of seeing Radiohead? (a) £0, (b) £10, (c) £40 or (d) £50.

If you’re not sure of your answer, never fear: the correct answer (below) was also the one least favoured by the economists.

However dizzying the idea of opportunity cost may be, it’s something we must wrap our heads around. Will I write a book review? Will I chair a panel discussion on a topic of interest? Will I give a talk to some students? In isolation, these are perfectly reasonable requests. But viewing them in isolation is a mistake: it is only when viewed through the lens of opportunity cost that the stakes become clearer.

Will I write a book review and thus not write a chapter of my own book? Will I give a talk to some students, and therefore not read a bedtime story to my son? Will I participate in the panel discussion instead of having a conversation over dinner with my wife?

The insight here is that every time we say “yes” to a request, we are also saying “no” to anything else we might accomplish with the time. It pays to take a moment to think about what those things might be.

Saying “no” is still awkward and takes some determination. Nobody wants to turn down requests for help. But there is one final trick that those of us with family commitments can try. All those lessons about opportunity cost have taught me that every “no” to a request from an acquaintance is also a “yes” to my family. Yes, I will be home for bedtime. Yes, I will switch off my computer at the weekend.

And so from time to time, as I compose my apologetic “sorry, no”, I type my wife’s email address in the “bcc” field. The awkward email to the stranger is also a tiny little love letter to her.

Answer: Going to see Lady Gaga would cost £40 but you’re willing to pay £50 any time to see her; therefore the net benefit of seeing Gaga is £10. If you use your free Radiohead ticket instead, you’re giving up that benefit, so the opportunity cost of seeing Radiohead is £10.
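Spelled out as a toy calculation of my own, with the numbers from the puzzle:

    # Opportunity cost of using the free Radiohead ticket.
    gaga_price = 40                 # pounds you would have to pay at the door
    gaga_willingness_to_pay = 50    # the most you would pay to see Lady Gaga

    net_benefit_of_gaga = gaga_willingness_to_pay - gaga_price   # 10
    # Seeing Radiohead means forgoing that net benefit:
    opportunity_cost_of_radiohead = net_benefit_of_gaga
    print(opportunity_cost_of_radiohead)  # 10 -- answer (b)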

Written for and first published at ft.com.

20 Jan 10:39

Not your usual oil-price decline effect

by Izabella Kaminska

The drop in commodities may be supply/demand agnostic and more reflective of a global dollar liquidity shortage than anything else. Which means the only way the problem can be alleviated is if the US continues to subsidise global growth by providing it with the cheap liquidity it needs to keep growing. The alternative, of course, is for commodity producers to start accepting a currency that these emerging markets have easy access to.

Yup. Analysts and economists still can’t decide whether the fall in oil prices is net positive or net negative for the global economy.

Unfortunately for the net positive camp, it looks increasingly like global demand and growth figures are beginning to side with the negativity team.

Indeed, the longer the oil price stays low, the more it looks like global stimulus hopes were overdone due to poor understanding of financial feedback loops in the commodity space.

So what’s behind the anomaly? How did a whole school of economists get this potentially so wrong?

Continue reading: Not your usual oil-price decline effect
19 Jan 01:01

How much is a (micro)life worth?

by Tim Harford

what monetary value can we put on human life? Courts have assigned damages after fatal accidents by looking at the economic output the dead person would otherwise have produced. But this suggests that the life of a retired person has no value. It captures the loss of livelihood, not the loss of life. Schelling and Carlson asked what we are willing to pay to reduce the risk of premature death by spending money on road safety or hospitals. The value of a life was replaced with the value of a statistical life.

Undercover Economist

‘Travelling 28 miles on a motorbike is four micromorts; cycling the same distance is just over one micromort’

The Rand Corporation was established in 1948 as an independent research arm of the US Air Force, itself newly independent and in its pomp as the wielder of the US nuclear arsenal. Rand’s early years were spent wrestling with the mathematics of Armageddon, and it has long struggled to shake off its reputation as the inspiration for Dr Strangelove.

Yet Rand’s most controversial research topic was its very first study — and its crime was to offend not the public but the top brass at the Air Force. Edwin Paxson, one of its top mathematicians, had been asked by the Air Force to think about the problem of an optimal first strike against the Soviet Union. How could the United States annihilate the Soviet Union for the smallest possible expenditure?

Paxson’s research was technically impressive, using cutting-edge analytical techniques. (The project is described in histories by David Jardini and by Fred Kaplan, and in a new article in the Journal of Economic Perspectives by Spencer Banzhaf.) His conclusion, published in 1950, was that rather than commissioning expensive turbojet bombers, the US should build large numbers of cheap propeller aircraft, most of which would carry no nuclear payload and would be little more than decoys. Not knowing which planes held atomic weapons, the Soviet defences would be overwhelmed by sheer numbers.

This conclusion infuriated the Air Force. No doubt this was partly because they viewed old-fashioned propeller aircraft as beneath their dignity. But the key offence was this: Paxson’s cost-benefit analysis gave no weight to the lives of air crew. Ten thousand pilots could be wiped out and it would make no difference to Paxson’s arithmetic. Under fire from senior officers, who had been wartime pilots themselves, Rand quickly adopted a more humble tone. It also diversified its funding by researching non-military topics.

Yet Paxson’s omission is understandable. A sensible strategist must weigh the costs and benefits of different tactics — but once one accepts the need for value for money in military strategy, what monetary value can we put on human life?

One possible approach to the problem is to value people according to some economic proxy — for example, the Air Force might value the cost of training new pilots. Courts have assigned damages after fatal accidents by looking at the economic output the dead person would otherwise have produced. But this suggests that the life of a retired person has no value. It captures the loss of livelihood, not the loss of life.

In the 1960s, a new approach emerged, most famously in Thomas Schelling’s 1968 essay “The Life You Save May Be Your Own”. Schelling, who much later won the Nobel Memorial Prize in Economic Sciences, had spent some productive time working at Rand. His student Jack Carlson was a former Air Force pilot. Carlson and Schelling found a way to finesse the treacherous question. As Schelling wrote: “It is not the worth of human life that I shall discuss but of ‘life-saving’, of preventing death. And it is not a particular death, but a statistical death.”

Rather than asking “What is the value of human life?” Schelling and Carlson asked what we are willing to pay to reduce the risk of premature death by spending money on road safety or hospitals. The value of a life was replaced with the value of a statistical life.

There is good sense in this bait-and-switch. The life of a named individual defies monetary valuation. It is special. Yet the prospect of spending money to widen and straighten a road and therefore fractionally reduce the chance that any one of thousands of road users will die — that feels like a more legitimate field for economic exploration.

For those who have not read Schelling’s elegant essay, simply inserting a qualifier into the phrase “the value of a [statistical] life” will not persuade. This presents a serious public-relations problem. From time to time it emerges that government bureaucrats have been valuing human life — outrageous! (The going rate for an individual life in the US is about $7m.)

As Trudy Ann Cameron, a professor of economics at the University of Oregon, comments, it would be helpful for economists to be able to report their research on the benefits of environmental or health policies “in a way that neither confuses nor offends non-economists”.

Here’s a possible solution: use microlives. A microlife is one millionth of an adult lifespan — about half an hour — and a micromort is a one-in-a-million chance of dying.

Sir David Spiegelhalter, my favourite risk communication expert, reckons that going under general anaesthetic is 10 micromorts. Travelling 28 miles on a motorbike is four micromorts; cycling the same distance is just over one micromort. The National Health Service in the UK uses analysis that prices a microlife at around £1.70; the UK Department for Transport will spend £1.60 to prevent a micromort. In a world where life-and-death trade-offs must be made, and should be faced squarely, this is a less horrible way to think about it all. A human life is a special thing; a microlife, not so much.
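To make the units concrete, here is a small sketch using the figures quoted above (the risk values are those in the column; the rest is my own back-of-the-envelope arithmetic):

    # Micromort arithmetic using the figures quoted in the column.
    MICROMORT = 1e-6                 # a one-in-a-million chance of dying
    DFT_SPEND_PER_MICROMORT = 1.60   # pounds the Department for Transport will spend to prevent one

    activities = {
        "general anaesthetic": 10,     # micromorts
        "28 miles by motorbike": 4,
        "28 miles by bicycle": 1,      # "just over one" in the column
    }

    for activity, micromorts in activities.items():
        probability_of_death = micromorts * MICROMORT
        implied_prevention_spend = micromorts * DFT_SPEND_PER_MICROMORT
        print(f"{activity}: {probability_of_death:.6f} chance of death, "
              f"worth about £{implied_prevention_spend:.2f} of prevention spending")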

As Ronald Howard, the decision analysis expert who invented the micromort, put it back in 1984: “Although this change is cosmetic only, we should remember the size of the cosmetic industry.”

Written for and first published at ft.com.

13 Jan 01:31

A capital contango, and why oil storage economics may be dead

by Izabella Kaminska

$80 oil, $70 oil, $60 oil, $50 oil and counting… If you suspect the structure of the oil market has fundamentally changed, you may be on to something.

There was a time when all you needed to balance oversupply in the oil market was the ability, and the will, to store oil when no-one else wanted to.

That ability, undoubtedly, was linked to capital access. For a bank, it meant being able to pass the cost of storing surplus stock over to commodity-oriented passive investors and institutions happy to fund the exposure. For a trading intermediary, that generally meant having good relations with a bank which could provide the capital and financing to store oil, something the bank would do (for a fee) because of its ability to access institutional capital markets and its reluctance to physically store oil itself.

Continue reading: A capital contango, and why oil storage economics may be dead
12 Jan 01:10

Why more and more means less

by Tim Harford

The first mistake was simple status quo bias — a tendency to let things stay the way they are. When you’re trying to clear stuff out of the house, it’s natural to think about whether to throw something away. Perhaps that’s the wrong question, because it places too high a barrier on disposal. Status quo bias means that most of your stuff stays because you can’t think of a good reason to get rid of it.

Kondo turns things around. For her, the status quo is that every item you own will be thrown away unless you can think of a compelling reason why it should stay. This mental reversal turns status quo bias, paradoxically, into a force for change.

My second error was a failure to appreciate the logic of diminishing returns. The first pair of trousers is essential; the second is enormously useful. It is not at all clear why anyone would want a 10th or 11th pair. It’s good to have a saucepan but the fifth saucepan will rarely be used. I love books but I already own more than I will be able to read for the rest of my life, so some of them can surely go.

The trick to appreciating diminishing returns is to gather all the similar stuff together at once. Once every single book I owned was sitting in a colossal pile on my living-room floor, the absurdity of retaining them all became far easier to appreciate.


A third mistake was not fully appreciating the opportunity cost of possessions. There’s a financial cost in paying for a storage locker or buying a larger house or simply buying more bookshelves. But there’s also the cost of being unable to appreciate what you have because it’s stuck at the bottom of a crate underneath a bunch of other things you have.

Undercover Economist

Status quo bias means that most of your stuff stays because you can’t think of a good reason to get rid of it

Now that Christmas is a brandy-soaked memory, there comes the difficult business of packing away all the gifts that haven’t been broken or eaten. Double-stack the books on the bookshelf, squeeze the woolly sweater into the sock drawer and don’t even think about trying to keep the toys tidy.

Before you blame the decadence of western civilisation for the difficulty of this seasonal clutter, note that the boom in consumer spending each December isn’t new. According to Joel Waldfogel’s brief yet comprehensive book Scroogenomics, Americans were happily splurging at Christmas three or four generations ago with much the same vigour as they do today, relative to the size of the US economy. Nor are Americans exceptional in the lavishness of their Christmas celebrations: Waldfogel reveals that many European countries enjoy an even greater blowout.

No, the trouble with all this clutter, it seems to me, is simple economics. We can afford to buy more and more Christmassy stuff — clothes in particular have been getting cheaper for many years — but the one thing that isn’t getting any cheaper is room to put all the extra stuff in. This is true for most people and it is particularly true for FT readers, who are more likely than most to live in places — New York, London, Hong Kong — where the price of a home can make almost anything look cheap in comparison.

I used to think I knew the answer to this problem: cleverly designed storage space. I am no longer so convinced. Elegantly organised cupboards simply postpone the day of reckoning. The house looks neat for a while but eventually the sheer volume of possessions can overwhelm any storage solution.

Perhaps it’s time for a new approach, and the book that’s been rocking my world for the past few weeks is Marie Kondo’s The Life-Changing Magic of Tidying. If you follow Kondo’s ideas faithfully you’re likely to be driving three-quarters of your possessions to landfill, at which point most of your storage problems should evaporate. The Harford family have all succumbed to the Marie Kondo bug in a big way and are suddenly seeing areas of the floor in our house that we had quite forgotten existed.

Kondo espouses some strange ideas. She unpacks her handbag and tidies away the contents every evening when she returns home. She firmly believes that socks need rest. She advocates saying “thank you” to possessions that are about to be discarded.

Yet despite all this oddness, what really struck me is that Kondo is an intuitive economist. I realised that I had been committing some cognitive blunders, enough to embarrass any self-respecting economist. These errors explained why my house was so full of possessions.

The first mistake was simple status quo bias — a tendency to let things stay the way they are. When you’re trying to clear stuff out of the house, it’s natural to think about whether to throw something away. Perhaps that’s the wrong question, because it places too high a barrier on disposal. Status quo bias means that most of your stuff stays because you can’t think of a good reason to get rid of it.

Kondo turns things around. For her, the status quo is that every item you own will be thrown away unless you can think of a compelling reason why it should stay. This mental reversal turns status quo bias, paradoxically, into a force for change.

My second error was a failure to appreciate the logic of diminishing returns. The first pair of trousers is essential; the second is enormously useful. It is not at all clear why anyone would want a 10th or 11th pair. It’s good to have a saucepan but the fifth saucepan will rarely be used. I love books but I already own more than I will be able to read for the rest of my life, so some of them can surely go.

The trick to appreciating diminishing returns is to gather all the similar stuff together at once. Once every single book I owned was sitting in a colossal pile on my living-room floor, the absurdity of retaining them all became far easier to appreciate.

 . . . 

A third mistake was not fully appreciating the opportunity cost of possessions. There’s a financial cost in paying for a storage locker or buying a larger house or simply buying more bookshelves. But there’s also the cost of being unable to appreciate what you have because it’s stuck at the bottom of a crate underneath a bunch of other things you have.

All this may seem strange talk from an economist, because economics is often associated with a kind of crass materialism. The field no doubt has flaws but materialism isn’t one of them. If anything, the blind spot in classical economics is that the last word on what consumers value is what consumers choose.

Behavioural economics, famously, has a different view — it says that we do make systematic mistakes. But until reading Kondo’s book, I didn’t realise that in every overstuffed bookshelf and cluttered cupboard those mistakes were manifesting themselves.

Written for and first published at ft.com.

08 Dec 13:47

Learn from the losers

by Tim Harford

It reminds us that what we see around us is not representative of the world; it is biased in systematic ways. Normally, when we talk of bias we think of a conscious ideological slant. But many biases are simple and unconscious. It’s natural to look at life’s winners – often they become winners in the first place because they’re interesting to look at. If we don’t look at life’s losers too, we may end up putting our time, money, attention or even armour plating in entirely the wrong place.

Undercover Economist

Kickended is important. It reminds us that the world is biased in systematic ways

Can there be an easier way to raise some cash than through Kickstarter? The crowdfunding website enjoyed a breakthrough moment in 2012 when the Pebble, an early smartwatch, raised over $10m. But then a few months ago, a mere picnic cooler raised an extraordinary $13m. Admittedly, the Coolest cooler is the Swiss army knife of cool boxes. It has a built-in USB charger, cocktail blender and loudspeakers. The thundering herd of financial backers for this project made it the biggest Kickstarter campaign to date, as well as being a sure sign that end times are upon us.

And who could forget this summer’s Kickstarter appeal from a fellow by the name of Zack “Danger” Brown? Brown turned to Kickstarter for $10 to make some potato salad; and he raised $55,492 in what must be one of history’s most lucrative expressions of hipster irony.

I’m sure I’m not the only person to ponder launching an exciting project on Kickstarter before settling back to count the money. Dean Augustin may have had the same idea back in 2011; he sought $12,000 to produce a documentary about John F Kennedy. Jonathan Reiter’s “BizzFit” looked to raise $35,000 to create an algorithmic matching service for employers and employees. This October, two brothers in Syracuse, New York, launched a Kickstarter campaign in the hope of being paid $400 to film themselves terrifying their neighbours at Halloween. These disparate campaigns have one thing in common: they received not a single penny of support. Not one of these people was able to persuade friends, colleagues or even their parents to kick in so much as a cent.

My inspiration for these tales of Kickstarter failure is Silvio Lorusso, an artist and designer based in Venice. Lorusso’s website, Kickended, searches Kickstarter for all the projects that have received absolutely no funding. (There are plenty: about 10 per cent of Kickstarter projects go nowhere at all, and only 40 per cent raise enough money to hit their funding targets.)

Kickended performs an important service. It reminds us that what we see around us is not representative of the world; it is biased in systematic ways. Normally, when we talk of bias we think of a conscious ideological slant. But many biases are simple and unconscious. I have never read a media report or blog post about a typical, representative Kickstarter campaign – but I heard a lot about the Pebble watch, the Coolest cooler and potato salad. If I didn’t know better, I might form unrealistic expectations about what running a Kickstarter campaign might achieve.

This isn’t just about Kickstarter. Such bias is everywhere. Most of the books people read are bestsellers – but most books are not bestsellers. And most book projects do not become books at all. There’s a similar story to tell about music, films and business ventures in general.

Academic papers are more likely to be published if they find new, interesting and positive results. If an individual researcher retained only the striking data points, we would call it fraud. But when an academic community as a whole retains only the striking results, we call it “publication bias” and we have tremendous difficulty in preventing it. Its impact on our understanding of the truth may be no less serious.

Now let’s think about the fact that the average London bus has only 17 people riding on it. How could that possibly be? Whenever I get on a bus, it’s packed. But consider a bus that runs into London with 68 people at rush hour, then makes three journeys empty. Every single passenger has witnessed a crammed bus, but the average occupancy was 17. Nobody has ever been a passenger on a bus with no passengers but such buses exist. Most people ride the trains when they are full and go to the shops when they are busy. A restaurant may seem popular to its typical customers because it is buzzing when they are there; to the owners and staff, things may look very different.
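The arithmetic behind the bus example is a version of what statisticians call the inspection paradox, and it is worth seeing explicitly (a small sketch of my own):

    # One journey carries 68 passengers at rush hour; three more run empty.
    passengers_per_journey = [68, 0, 0, 0]

    # The operator's view: average occupancy per journey.
    journey_average = sum(passengers_per_journey) / len(passengers_per_journey)    # 17.0

    # The passenger's view: weight each journey by how many people were on it to witness it.
    total_passengers = sum(passengers_per_journey)
    passenger_average = sum(n * n for n in passengers_per_journey) / total_passengers  # 68.0

    print(journey_average, passenger_average)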

. . .

In 1943, the American statistician Abraham Wald was asked to advise the US air force on how to reinforce their planes. Only a limited weight of armour plating was feasible, and the proposal on the table was to reinforce the wings, the centre of the fuselage, and the tail. Why? Because bombers were returning from missions riddled with bullet holes in those areas.

Wald explained that this would be a mistake. What the air force had discovered was that when planes were hit in the wings, tail or central fuselage, they made it home. Where, asked Wald, were the planes that had been hit in other areas? They never returned. Wald suggested reinforcing the planes wherever the surviving planes had been unscathed instead.

It’s natural to look at life’s winners – often they become winners in the first place because they’re interesting to look at. That’s why Kickended gives us an important lesson. If we don’t look at life’s losers too, we may end up putting our time, money, attention or even armour plating in entirely the wrong place.

Written for and first published at ft.com.

02 Dec 05:59

Nine hard-won lessons about money and investing

by maoxian

You are probably a bad stock picker
No one cares about your money as much as you do
Wall Street is not your friend
Think about working for equity vs. salary
If you’re investing, prefer index funds
Prefer Vanguard over almost anyone else
You probably don’t need an “assets under management” financial advisor

1) You are probably a bad stock picker [BOUGHT CSCO IN 2000]
2) No one cares about your money as much as you do [EXCEPT THE SKIMMERS]
3) Wall Street is not your friend [USED STOCK SALESMEN]
4) Think about working for equity vs. salary [LOTTERY TICKETS]
5) If you're investing, prefer index funds [VANGUARD]
6) Prefer credit unions over banks [FEWER FEES]
7) Prefer Vanguard over almost anyone else [SEE ABOVE]
8) You probably don't need an "assets under management" financial advisor [NOT PROBABLY DON'T, DEFINITELY DON'T]
9) Consider municipal bonds [ESP. WHEN YOU HAVE LOTS OF MONEY]

Bonus tip: tax loss harvesting
Bonus tip: prefer donor-advised funds to foundations
02 Dec 05:58

Nine surprisingly simple tips from 20 experts about how to lose weight and keep it off - Vox

by maoxian

1) There really, truly is no one "best diet"
2) People who lose weight are good at tracking — what they eat and how much they weigh
3) People who lose weight identify their barriers and motivations
4) Diets often fail because of unreasonable expectations
5) People who lose weight know how many calories they're consuming — and burning
6) There are ways to hack your environment for health
7) Exercise is surprisingly unhelpful for weight loss
8) Weight loss medications aren't very useful. Neither are "metabolism boosting" supplements.
9) Forget about "the last 10 pounds"

1) There really, truly is no one "best diet" [CORRECT, EVERYONE IS DIFFERENT]
2) People who lose weight are good at tracking — what they eat and how much they weigh
3) People who lose weight identify their barriers and motivations
4) Diets often fail because of unreasonable expectations
5) People who lose weight know how many calories they're consuming — and burning
6) There are ways to hack your environment for health
7) Exercise is surprisingly unhelpful for weight loss
8) Weight loss medications aren't very useful. Neither are "metabolism boosting" supplements.
9) Forget about "the last 10 pounds"
01 Dec 00:45

Why a house-price bubble means trouble

by Tim Harford

To see the problem, contrast today’s low-inflation economies with the high inflation of the 1970s and 1980s. Back then, paying off your mortgage was a sprint: a few years during which prices and wages were increasing in double digits, while you struggled with mortgage rates of 10 per cent and more. After five years of that, inflation had eroded the value of the debt and mortgage repayments shrank dramatically in real terms.

Today, a mortgage is a marathon. Interest rates are low, so repayments seem affordable. Yet with inflation low and wages stagnant, they’ll never become more affordable. Low inflation means that a 30-year mortgage really is a 30-year mortgage rather than five years of hell followed by an extended payment holiday. The previous generation’s rules of thumb no longer apply.

Undercover Economist

A housing boom is the economic equivalent of a tapeworm infection

Buying a house is not just a big deal, it’s the biggest. Marriage and children may bring more happiness – or misery, if you’re unlucky – but few of us will ever sign a bigger cheque than the one that buys that big pile of bricks, mortar and dry rot.

It would be nice to report that buyers and sellers are paragons of rationality, and the housing market itself a well-oiled machine that makes a sterling contribution to the working of the broader economy. None of that is true. House buyers are delusional, the housing market is broken and a housing boom is the economic equivalent of a tapeworm infection.

As a sample of the madness, consider the popular concept of “affordability”. This idea is pushed by the UK’s Financial Conduct Authority and seems simple common sense: affordability asks whether potential buyers have enough income to meet their mortgage repayments. That question is reasonable, of course – but it is only a first step, because it ignores inflation.

To see the problem, contrast today’s low-inflation economies with the high inflation of the 1970s and 1980s. Back then, paying off your mortgage was a sprint: a few years during which prices and wages were increasing in double digits, while you struggled with mortgage rates of 10 per cent and more. After five years of that, inflation had eroded the value of the debt and mortgage repayments shrank dramatically in real terms.

Today, a mortgage is a marathon. Interest rates are low, so repayments seem affordable. Yet with inflation low and wages stagnant, they’ll never become more affordable. Low inflation means that a 30-year mortgage really is a 30-year mortgage rather than five years of hell followed by an extended payment holiday. The previous generation’s rules of thumb no longer apply.
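A rough sketch of my own (the numbers are purely illustrative, and it assumes wages roughly track inflation) shows how the real burden of a fixed repayment behaves in the two worlds:

    # Real burden of a fixed £1,000 monthly repayment, in today's money,
    # under 1970s-style 10% inflation and today's roughly 2% inflation.
    # Assumes wages keep pace with inflation, so "real" also means "relative to pay".

    def real_repayment(nominal_monthly, inflation, years):
        return nominal_monthly / (1 + inflation) ** years

    for years in (0, 5, 15, 30):
        high_inflation = real_repayment(1000, 0.10, years)
        low_inflation = real_repayment(1000, 0.02, years)
        print(f"year {years:>2}: £{high_inflation:6.0f} at 10% inflation, "
              f"£{low_inflation:6.0f} at 2% inflation")

At 10 per cent inflation the repayment has more than halved in real terms within eight years; at 2 per cent it barely shrinks. The sprint versus the marathon.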

Because you are a sophisticated reader of the Financial Times you have, no doubt, figured all this out for yourself. Most house buyers have not. Nor are they being warned. I checked a couple of the most prominent online “affordability” calculators. Inflation simply wasn’t mentioned, even though in the long run it will affect affordability more than anything else.

This isn’t the only behavioural oddity when it comes to housing markets. Another problem is what psychologists call “loss aversion” – a disproportionate anxiety about losing money relative to an arbitrary baseline. I’ve written before about a study of the Boston housing crash two decades ago, conducted by David Genesove and Christopher Mayer. They found that people who bought early and saw prices rise and then fall were realistic in the price they demanded when selling up. People who had bought late and risked losing money tended to make aggressive price demands and failed to find buyers. Rather than feeling they had lost the game, they preferred not to play at all.

The housing market also interacts with the wider economy in strange ways. A study by Indraneel Chakraborty, Itay Goldstein and Andrew MacKinlay concludes that booming housing markets attract bankers like jam attracts flies, sucking money away from commercial and industrial loans. Why back a company when you can lend somebody half a million to buy a house that is rapidly appreciating in value? Housing booms therefore mean less investment by companies.

. . .

House prices have even driven the most famous economic finding of recent years: Thomas Piketty’s conclusion (in joint work with Gabriel Zucman) that “capital is back” in developed economies. Piketty and Zucman have found that relative to income, the total value of capital such as farmland, factories, office buildings and housing is returning to the dizzy levels of the late 19th century.

But as Piketty and Zucman point out, this trend is almost entirely thanks to a boom in the price of houses. Much depends, then, on whether the boom in house prices is a sentiment-driven bubble or reflects some real shift in value. One way to shed light on this question is to ask whether rents in developed countries have boomed in the same way as prices. They haven’t: research by Etienne Wasmer and three of his colleagues at Sciences Po shows that if we measure the value of houses using rents, there’s no boom in the capital stock.

The housing market then, is prone to bubbles and bouts of greed and denial, is shaped by financial rules of thumb that no longer apply, and sucks the life out of the economy. It even muddies the waters of the great economic debate of our time, about the economic significance of capital.

One final question, then: is it all a bubble? That is too deep a question for me but there is an intriguing new study by three German economists, Katharina Knoll, Moritz Schularick and Thomas Steger. They have constructed house-price indices over 14 developed economies since 1870. The pattern is striking: about 50 years ago, real prices started to climb inexorably and at an increasing rate. If this is a bubble, it’s been inflating for two generations.

At least dinner-party guests across London will continue to have something to bore each other about. Not that anybody will be able to afford a dining room.

Written for and first published at ft.com.

26 Nov 06:14

Baltia: the oldest, newest, airline that isn’t

by Dan McCrum

In the first of what may be an occasional series on the wonderful world of stock promotion, meet Baltia, which describes itself as “America’s newest airline”.

Airline may be stretching the term, however. The company has one aircraft, an ageing Boeing 747 sitting in a Michigan hangar.

The newness is also suspect, given that the company has been in an almost perpetual state of preparation to fly for 25 years. It was first organised in New York on “August 24, 1989 to provide air transportation to Russia and, the then, Soviet Union countries”. It hasn’t sold a ticket since, but it has sold a lot of stock.

Continue reading: Baltia: the oldest, newest, airline that isn’t
17 Nov 02:38

How to see into the future

by Tim Harford

First, some basic training in probabilistic reasoning helps to produce better forecasts. Second, teams of good forecasters produce better results than good forecasters working alone. Third, actively open-minded people prosper as forecasters. Our predictions are about the future only in the most superficial way. They are really advertisements, conversation pieces, declarations of tribal loyalty or statements of profound conviction about the logical structure of the world. “When my information changes, I alter my conclusions. What do you do, sir?”

Highlights

Billions of dollars are spent on experts who claim they can forecast what’s around the corner, in business, finance and economics. Most of them get it wrong. Now a groundbreaking study has unlocked the secret: it IS possible to predict the future – and a new breed of ‘superforecasters’ knows how to do it

Irving Fisher was once the most famous economist in the world. Some would say he was the greatest economist who ever lived. “Anywhere from a decade to two generations ahead of his time,” opined the first Nobel laureate economist Ragnar Frisch, in the late 1940s, more than half a century after Fisher’s genius first lit up his subject. But while Fisher’s approach to economics is firmly embedded in the modern discipline, many of those who remember him now know just one thing about him: that two weeks before the great Wall Street crash of 1929, Fisher announced, “Stocks have reached what looks like a permanently high plateau.”

In the 1920s, Fisher had two great rivals. One was a British academic: John Maynard Keynes, a rising star and Fisher’s equal as an economic theorist and policy adviser. The other was a commercial competitor, an American like Fisher. Roger Babson was a serial entrepreneur with no serious academic credentials, inspired to sell economic forecasts by the banking crisis of 1907. As Babson and Fisher locked horns over the following quarter-century, they laid the foundations of the modern economic forecasting industry.

Fisher’s rivals fared better than he did. Babson foretold the crash and made a fortune, enough to endow the well-respected Babson College. Keynes was caught out by the crisis but recovered and became rich anyway. Fisher died in poverty, ruined by the failure of his forecasts.

If Fisher and Babson could see the modern forecasting industry, it would have astonished them in its scale, range and hyperactivity. In his acerbic book The Fortune Sellers, former consultant William Sherden reckoned in 1998 that forecasting was a $200bn industry – $300bn in today’s terms – and the bulk of the money was being made in business, economic and financial forecasting.

It is true that forecasting now seems ubiquitous. Data analysts forecast demand for new products, or the impact of a discount or special offer; scenario planners (I used to be one) produce broad-based narratives with the aim of provoking fresh thinking; nowcasters look at Twitter or Google to track epidemics, actual or metaphorical, in real time; intelligence agencies look for clues about where the next geopolitical crisis will emerge; and banks, finance ministries, consultants and international agencies release regular prophecies covering dozens, even hundreds, of macroeconomic variables.

Real breakthroughs have been achieved in certain areas, especially where rich datasets have become available – for example, weather forecasting, online retailing and supply-chain management. Yet when it comes to the headline-grabbing business of geopolitical or macroeconomic forecasting, it is not clear that we are any better at the fundamental task that the industry claims to fulfil – seeing into the future.

So why is forecasting so difficult – and is there hope for improvement? And why did Babson and Keynes prosper while Fisher suffered? What did they understand that Fisher, for all his prodigious talents, did not?

In 1987, a young Canadian-born psychologist, Philip Tetlock, planted a time bomb under the forecasting industry that would not explode for 18 years. Tetlock had been trying to figure out what, if anything, the social sciences could contribute to the fundamental problem of the day, which was preventing a nuclear apocalypse. He soon found himself frustrated: frustrated by the fact that the leading political scientists, Sovietologists, historians and policy wonks took such contradictory positions about the state of the cold war; frustrated by their refusal to change their minds in the face of contradictory evidence; and frustrated by the many ways in which even failed forecasts could be justified. “I was nearly right but fortunately it was Gorbachev rather than some neo-Stalinist who took over the reins.” “I made the right mistake: far more dangerous to underestimate the Soviet threat than overestimate it.” Or, of course, the get-out for all failed stock market forecasts, “Only my timing was wrong.”

Tetlock’s response was patient, painstaking and quietly brilliant. He began to collect forecasts from almost 300 experts, eventually accumulating 27,500. The main focus was on politics and geopolitics, with a selection of questions from other areas such as economics thrown in. Tetlock sought clearly defined questions, enabling him with the benefit of hindsight to pronounce each forecast right or wrong. Then Tetlock simply waited while the results rolled in – for 18 years.

Tetlock published his conclusions in 2005, in a subtle and scholarly book, Expert Political Judgment. He found that his experts were terrible forecasters. This was true in both the simple sense that the forecasts failed to materialise and in the deeper sense that the experts had little idea of how confident they should be in making forecasts in different contexts. It is easier to make forecasts about the territorial integrity of Canada than about the territorial integrity of Syria but, beyond the most obvious cases, the experts Tetlock consulted failed to distinguish the Canadas from the Syrias.

Adding to the appeal of this tale of expert hubris, Tetlock found that the most famous experts fared somewhat worse than those outside the media spotlight. Other than that, the humiliation was evenly distributed. Regardless of political ideology, profession and academic training, experts failed to see into the future.

Most people, hearing about Tetlock’s research, simply conclude that either the world is too complex to forecast, or that experts are too stupid to forecast it, or both. Tetlock himself refused to embrace cynicism so easily. He wanted to leave open the possibility that even for these intractable human questions of macroeconomics and geopolitics, a forecasting approach might exist that would bear fruit.

. . .

In 2013, on the auspicious date of April 1, I received an email from Tetlock inviting me to join what he described as “a major new research programme funded in part by Intelligence Advanced Research Projects Activity, an agency within the US intelligence community.”

The core of the programme, which had been running since 2011, was a collection of quantifiable forecasts much like Tetlock’s long-running study. The forecasts would be of economic and geopolitical events, “real and pressing matters of the sort that concern the intelligence community – whether Greece will default, whether there will be a military strike on Iran, etc”. These forecasts took the form of a tournament with thousands of contestants; it is now at the start of its fourth and final annual season.

“You would simply log on to a website,” Tetlock’s email continued, “give your best judgment about matters you may be following anyway, and update that judgment if and when you feel it should be. When time passes and forecasts are judged, you could compare your results with those of others.”

I elected not to participate but 20,000 others have embraced the idea. Some could reasonably be described as having some professional standing, with experience in intelligence analysis, think-tanks or academia. Others are pure amateurs. Tetlock and two other psychologists, Don Moore and Barbara Mellers, have been running experiments with the co-operation of this army of volunteers. (Mellers and Tetlock are married.) Some were given training in how to turn knowledge about the world into a probabilistic forecast; some were assembled into teams; some were given information about other forecasts while others operated in isolation. The entire exercise was given the name of the Good Judgment Project, and the aim was to find better ways to see into the future.

The early years of the forecasting tournament have, wrote Tetlock, “already yielded exciting results”.

A first insight is that even brief training works: a 20-minute course about how to put a probability on a forecast, correcting for well-known biases, provides lasting improvements to performance. This might seem extraordinary – and the benefits were surprisingly large – but even experienced geopolitical seers tend to have expertise in a subject, such as Europe’s economies or Chinese foreign policy, rather than training in the task of forecasting itself.

“For people with the right talents or the right tactics, it is possible to see into the future after all”

A second insight is that teamwork helps. When the project assembled the most successful forecasters into teams who were able to discuss and argue, they produced better predictions.

But ultimately one might expect the same basic finding as always: that forecasting events is basically impossible. Wrong. To connoisseurs of the frailties of futurology, the results of the Good Judgment Project are quite astonishing. Forecasting is possible, and some people – call them “superforecasters” – can predict geopolitical events with an accuracy far outstripping chance. The superforecasters have been able to sustain and even improve their performance.

The cynics were too hasty: for people with the right talents or the right tactics, it is possible to see into the future after all.

Roger Babson, Irving Fisher’s competitor, would always have claimed as much. A serial entrepreneur, Babson made his fortune selling economic forecasts alongside information about business conditions. In 1920, the Babson Statistical Organization had 12,000 subscribers and revenue of $1.35m – almost $16m in today’s money.

“After Babson, the forecaster was an instantly recognisable figure in American business,” writes Walter Friedman, the author of Fortune Tellers, a history of Babson, Fisher and other early economic forecasters. Babson certainly understood how to sell himself and his services. He advertised heavily and wrote prolifically. He gave a complimentary subscription to Thomas Edison, hoping for a celebrity endorsement. After contracting tuberculosis, Babson turned his management of the disease into an inspirational business story. He even employed stonecutters to carve inspirational slogans into large rocks in Massachusetts (the “Babson Boulders” are still there).

On September 5 1929, Babson made a speech at a business conference in Wellesley, Massachusetts. He predicted trouble: “Sooner or later a crash is coming which will take in the leading stocks and cause a decline of from 60 to 80 points in the Dow-Jones barometer.” This would have been a fall of around 20 per cent.

So famous had Babson become that his warning was briefly a self-fulfilling prophecy. When the news tickers of New York reported Babson’s comments at around 2pm, the markets erupted into what The New York Times described as “a storm of selling”. Shares lurched down by 3 per cent. This became known as the “Babson break”.

The next day, shares bounced back and Babson, for a few weeks, appeared ridiculous. On October 29, the great crash began, and within a fortnight the market had fallen almost 50 per cent. By then, Babson had an advertisement in the New York Times pointing out, reasonably, that “Babson clients were prepared”. Subway cars were decorated with the slogan, “Be Right with Babson”. For Babson, his forecasting triumph was a great opportunity to sell more subscriptions.

But his true skill was marketing, not forecasting. His key product, the “Babson chart”, looked scientific and was inspired by the discoveries of Isaac Newton, his idol. The Babson chart operated on the Newtonian assumption that any economic expansion would be matched by an equal and opposite contraction. But for all its apparent sophistication, the Babson chart offered a simple and usually contrarian message.

“Babson offered an up-arrow or a down-arrow. People loved that,” says Walter Friedman. Whether or not Babson’s forecasts were accurate was not a matter that seemed to concern many people. When he was right, he advertised the fact heavily. When he was wrong, few noticed. And Babson had indeed been wrong for many years during the long boom of the 1920s. People taking his advice would have missed out on lucrative opportunities to invest. That simply didn’t matter: his services were popular, and his most spectacularly successful prophecy was also his most famous.

Babson’s triumph suggests an important lesson: commercial success as a forecaster has little to do with whether you are any good at seeing into the future. No doubt it helped his case when his forecasts were correct but nobody gathered systematic information about how accurate he was. The Babson Statistical Organization compiled business and economic indicators that were, in all probability, of substantial value in their own right. Babson’s prognostications were the peacock’s plumage; their effect was simply to attract attention to the services his company provided.

. . .

When Barbara Mellers, Don Moore and Philip Tetlock established the Good Judgment Project, the basic principle was to collect specific predictions about the future and then check to see if they came true. That is not the world Roger Babson inhabited and neither does it describe the task of modern pundits.

When we talk about the future, we often aren’t talking about the future at all but about the problems of today. A newspaper columnist who offers a view on the future of North Korea, or the European Union, is trying to catch the eye, support an argument, or convey in a couple of sentences a worldview that would otherwise be impossibly unwieldy to explain. A talking head in a TV studio offers predictions by way of making conversation. A government analyst or corporate planner may be trying to justify earlier decisions, engaging in bureaucratic defensiveness. And many election forecasts are simple acts of cheerleading for one side or the other.

“Some people – call them ‘superforecasters’ – can predict geopolitical events with an accuracy far outstripping chance”

Unlike the predictions collected by the Good Judgment Project, many forecasts are vague enough in their details to allow the mistaken seer off the hook. Even if it was possible to pronounce that a forecast had come true or not, only in a few hotly disputed cases would anybody bother to check.

All this suggests that among the various strategies employed by the superforecasters of the Good Judgment Project, the most basic explanation of their success is that they have the single uncompromised objective of seeing into the future – and this is rare. They receive continual feedback about the success and failure of every forecast, and there are no points for radicalism, originality, boldness, conventional pieties, contrarianism or wit. The project manager of the Good Judgment Project, Terry Murray, says simply, “The only thing that matters is the right answer.”

I asked Murray for her tips on how to be a good forecaster. Her reply was, “Keep score.”
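
What keeping score might look like in practice can be made concrete with a minimal sketch – assuming forecasts are recorded as probabilities and later resolved as yes/no outcomes, with the example forecasts below invented purely for illustration. One standard way to do the bookkeeping is the Brier score, the squared gap between a forecast and what actually happened:

```python
# A minimal score-keeping sketch. Each forecast is a probability; once the
# question resolves, the Brier score is the squared gap between the forecast
# and the 0/1 outcome. Lower is better; always saying 50% scores 0.25.
# The forecasts below are invented for illustration.

def brier_score(probability: float, outcome: int) -> float:
    """Squared error between a probabilistic forecast and a 0/1 outcome."""
    return (probability - outcome) ** 2

forecasts = [
    # (question, forecast probability, what happened: 1 = yes, 0 = no)
    ("Event A happens by year end", 0.80, 1),
    ("Event B happens by year end", 0.30, 0),
    ("Event C happens by year end", 0.90, 0),  # a confident miss is punished hardest
]

scores = [brier_score(p, outcome) for _, p, outcome in forecasts]
print(f"Mean Brier score: {sum(scores) / len(scores):.3f}")
```

The useful property is that a vague pronouncement cannot be scored at all: only a dated, probabilistic forecast produces a number that can be tracked and, with feedback, improved.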

. . .

An intriguing footnote to Philip Tetlock’s original humbling of the experts was that the forecasters who did best were what Tetlock calls “foxes” rather than “hedgehogs”. He used the term to refer to a particular style of thinking: broad rather than deep, intuitive rather than logical, self-critical rather than assured, and ad hoc rather than systematic. The “foxy” thinking style is now much in vogue. Nate Silver, the data journalist most famous for his successful forecasts of US elections, adopted the fox as the mascot of his website as a symbol of “a pluralistic approach”.

The trouble is that Tetlock’s original foxes weren’t actually very good at forecasting. They were merely less awful than the hedgehogs, who deployed a methodical, logical train of thought that proved useless for predicting world affairs. That world, apparently, is too complex for any single logical framework to encompass.

More recent research by the Good Judgment Project investigators leaves foxes and hedgehogs behind but develops this idea that personality matters. Barbara Mellers told me that the thinking style most associated with making better forecasts was something psychologists call “actively open-minded thinking”. A questionnaire to diagnose this trait invites people to rate their agreement or disagreement with statements such as, “Changing your mind is a sign of weakness.” The project found that successful forecasters aren’t afraid to change their minds, are happy to seek out conflicting views and are comfortable with the notion that fresh evidence might force them to abandon an old view of the world and embrace something new.

Which brings us to the strange, sad story of Irving Fisher and John Maynard Keynes. The two men had much in common: both giants in the field of economics; both best-selling authors; both, alas, enthusiastic and prominent eugenicists. Both had immense charisma as public speakers.

Fisher and Keynes also shared a fascination with financial markets, and a conviction that their expertise in macroeconomics and in economic statistics should lead to success as an investor. Both of them, ultimately, were wrong about this. The stock market crashes of 1929 – in September in the UK and late October in the US – caught each of them by surprise, and both lost heavily.

Yet Keynes is remembered today as a successful investor. This is not unreasonable. A study by David Chambers and Elroy Dimson, two financial economists, concluded that Keynes’s track record over a quarter century running the discretionary portfolio of King’s College Cambridge was excellent, outperforming market benchmarks by an average of six percentage points a year, an impressive margin.
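
To get a feel for how large that margin is once compounded over a quarter of a century, here is a rough sketch; the 5 per cent benchmark return is an assumption made purely for illustration – only the six-point gap and the roughly 25-year horizon come from the study:

```python
# Rough illustration of compounding a six-percentage-point annual edge over
# 25 years. The 5% benchmark return is an assumed figure, for illustration only.

years = 25
benchmark_return = 0.05                        # assumed
outperformer_return = benchmark_return + 0.06  # the six-point margin from the text

benchmark_growth = (1 + benchmark_return) ** years
outperformer_growth = (1 + outperformer_return) ** years

print(f"Benchmark turns £1 into about £{benchmark_growth:.2f}")        # ~£3.39
print(f"Outperformer turns £1 into about £{outperformer_growth:.2f}")  # ~£13.59
print(f"Relative advantage after {years} years: {outperformer_growth / benchmark_growth:.1f}x")
```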

This wasn’t because Keynes was a great economic forecaster. His original approach had been predicated on timing the business cycle, moving into and out of different investment classes depending on which way the economy itself was moving. This investment strategy was not a success, and after several years Keynes’s portfolio was almost 20 per cent behind the market as a whole.

The secret to Keynes’s eventual profits is that he changed his approach. He abandoned macroeconomic forecasting entirely. Instead, he sought out well-managed companies with strong dividend yields, and held on to them for the long term. This approach is now associated with Warren Buffett, who quotes Keynes’s investment maxims with approval. But the key insight is that the strategy does not require macroeconomic predictions. Keynes, the most influential macroeconomist in history, realised not only that such forecasts were beyond his skill but that they were unnecessary.

Irving Fisher’s mistake was not that his forecasts were any worse than Keynes’s but that he depended on them to be right, and they weren’t. Fisher’s investments were leveraged by the use of borrowed money. This magnified his gains during the boom, his confidence, and then his losses in the crash.

But there is more to Fisher’s undoing than leverage. His pre-crash gains were large enough that he could easily have cut his losses and lived comfortably. Instead, he was convinced the market would turn again. He made several comments about how the crash was “largely psychological”, or “panic”, and how recovery was imminent. It was not.

One of Fisher’s major investments was in Remington Rand – he was on the office-equipment company’s board, having sold it his “Index Visible” invention, a precursor of the Rolodex. The share price tells the story: $58 before the crash, $28 by 1930. Fisher topped up his investments – and the price soon dropped to $1.

Fisher fell deeper and deeper into debt to the taxman and to his brokers. Towards the end of his life, he was a marginalised figure living alone in modest circumstances, an easy target for scam artists. Sylvia Nasar writes in Grand Pursuit, a history of economic thought, “His optimism, overconfidence and stubbornness betrayed him.”

. . .

So what is the secret of looking into the future? Initial results from the Good Judgment Project suggest the following approaches. First, some basic training in probabilistic reasoning helps to produce better forecasts. Second, teams of good forecasters produce better results than good forecasters working alone. Third, actively open-minded people prosper as forecasters.
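
One reason pooled judgments help is purely statistical: by the convexity of squared error, the average of several probability forecasts can never score worse, on the Brier measure, than the forecasters score on average. A toy sketch, with invented numbers and plain averaging used purely for illustration – not as a description of the project's actual aggregation method:

```python
# Toy illustration: pool several forecasters' probabilities for one question
# by simple averaging. All numbers are invented for illustration.

individual_forecasts = [0.55, 0.70, 0.62, 0.48, 0.66]
outcome = 1  # suppose the event happened

pooled = sum(individual_forecasts) / len(individual_forecasts)

individual_scores = [(p - outcome) ** 2 for p in individual_forecasts]
pooled_score = (pooled - outcome) ** 2

print(f"Pooled forecast: {pooled:.2f}")
print(f"Average individual Brier score: {sum(individual_scores) / len(individual_scores):.3f}")
print(f"Brier score of the pooled forecast: {pooled_score:.3f}")  # never worse than the average above
```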

But the Good Judgment Project also hints at why so many experts are such terrible forecasters. It’s not so much that they lack training, teamwork and open-mindedness – although some of these qualities are in shorter supply than others. It’s that most forecasters aren’t actually seriously and single-mindedly trying to see into the future. If they were, they’d keep score and try to improve their predictions based on past errors. They don’t.

“Successful forecasters aren’t afraid to change their minds and are comfortable with the notion that fresh evidence might mean abandoning an old view”

This is because our predictions are about the future only in the most superficial way. They are really advertisements, conversation pieces, declarations of tribal loyalty – or, as with Irving Fisher, statements of profound conviction about the logical structure of the world. As Roger Babson explained, not without sympathy, Fisher had failed because “he thinks the world is ruled by figures instead of feelings, or by theories instead of styles”.

Poor Fisher was trapped by his own logic, his unrelenting optimism and his repeated public declarations that stocks would recover. And he was bankrupted by an investment strategy in which he could not afford to be wrong.

Babson was perhaps wrong as often as he was right – nobody was keeping track closely enough to be sure either way – but that did not stop him making a fortune. And Keynes prospered when he moved to an investment strategy in which forecasts simply did not matter much.

Fisher once declared that “the sagacious businessman is constantly forecasting”. But Keynes famously wrote of long-term forecasts, “About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know.”

Perhaps even more famous is a remark often attributed to Keynes. “When my information changes, I alter my conclusions. What do you do, sir?”

If only he had taught that lesson to Irving Fisher.

Also published at ft.com.

17 Nov 02:36

A passport to privilege

by Tim Harford

Undercover Economist

Class matters far less than it did in the 19th century. Citizenship matters far more

I’ve been a lucky boy. I could start with the “boy” fact. We men enjoy all sorts of privileges, many of them quite subtle these days, but well worth having. I’m white. I’m an Oxford graduate and I am the son of Oxbridge graduates. All those are things that I have in common with my fellow columnist Simon Kuper, who recently admitted that he didn’t feel he’d earned his vantage point “on the lower slopes of the establishment”.

I don’t feel able to comment objectively on that, although we could ask another colleague, Gillian Tett. She’s female and – in a particularly cruel twist – she wasn’t educated at Oxford but at Cambridge. That’s real diversity right there.

All these accidents of birth are important. But there’s a more important one: citizenship. Gillian, Simon and I are all British citizens. Financially speaking, this is a greater privilege than all the others combined.

Imagine lining up everyone in the world from the poorest to the richest, each standing beside a pile of money that represents his or her annual income. The world is a very unequal place: those in the top 1 per cent have vastly more than those in the bottom 1 per cent – you need about $35,000 after taxes to make that cut-off and be one of the 70 million richest people in the world. If that seems low, it’s $140,000 after taxes for a family of four – and it is also about 100 times more than the world’s poorest people have.
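
A quick back-of-envelope sketch makes the arithmetic in that line-up explicit, using only the figures quoted above:

```python
# Back-of-envelope arithmetic using only the figures in the paragraph above.

per_person_cutoff = 35_000                        # after-tax income to join the global top 1%
family_of_four_cutoff = 4 * per_person_cutoff
implied_poorest_income = per_person_cutoff / 100  # "about 100 times more than the world's poorest"

print(f"Family-of-four cutoff: ${family_of_four_cutoff:,}")                           # $140,000
print(f"Implied income of the poorest: about ${implied_poorest_income:,.0f} a year")  # roughly $350
```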

What determines who is at the richer end of that curve is, mostly, living in a rich country. Branko Milanovic, a visiting presidential professor at the City University of New York and author of The Haves and the Have-Nots, calculates that about 80 per cent of global inequality is the result of inequality between rich nations and poor nations. Only 20 per cent is the result of inequality between rich and poor within nations. The Oxford thing matters, of course. But what matters much more is that I was born in England rather than Bangladesh or Uganda. (Just to complicate matters, Simon Kuper was born in Uganda. He may refer to himself as "default man" but his life defies easy categorisation.)

That might seem obvious but it's often ignored in the conversations we have about inequality. And things used to be very different. In 1820, the UK had about three times the per capita income of countries such as China and India, and perhaps four times that of the poorest countries. The gap between rich countries and the rest has since grown. Today the US has about five times the per capita income of China, 10 times that of India and 50 times that of the poorest countries. (These gaps could be made to look even bigger by not adjusting for lower prices in China and India.) Being a citizen of the US, the EU or Japan is an extraordinary economic privilege, on a dramatically different scale from anything available in the 19th century.

Privilege back then used to be far more about class than nationality. Consider the early 19th century world of Jane Austen’s Pride and Prejudice. Elizabeth Bennet’s financial future depends totally on her social position and, therefore, if and whom she marries. Elizabeth’s family’s income is £430 per capita. She can increase that more than tenfold by marrying Mr Darcy and snagging half of his £10,000 a year (this income, by the way, put Mr Darcy in the top 0.1 per cent of earners). But if her father dies before she marries, Elizabeth may end up with £40 a year, still twice the average income in England.

Milanovic shows that when we swap in data from 2004, all the gaps shrink dramatically. Mr Darcy’s income as one of the 0.1 per cent is £400,000; Elizabeth Bennet’s fallback is £23,000 a year. Marriage in the early 19th century would have increased her income more than 100 times; in the early 21st century, the ratio has shrunk to 17 times.

This is a curious state of affairs. Class matters far less than it did in the 19th century. Citizenship matters far more. Yet when we worry about inequality, it's not citizenship that obsesses us. Thomas Piketty's famous book, Capital in the Twenty-First Century, consciously echoes Karl Marx. Click over to the "Top Incomes Database", a wonderful resource produced by Piketty, Tony Atkinson and others, and you'll need to specify which country you'd like to analyse. The entire project accepts the nation state as the unit of analysis.

Meanwhile, many people want to limit migration – the single easiest way for poor people to improve their life chances – and view growth in India and China not as dramatic progress in reducing both poverty and global inequality, but as a sinister development.

It would be unfair to say that Simon Kuper and Thomas Piketty have missed the point. Domestic inequality does matter. It matters because we have political institutions capable of addressing it. It matters because it’s obvious from day to day. And it matters because over the past few decades domestic inequality has started to grow again, just as global inequality has started to shrink.

But as I check off my list of privileges, I won’t forget the biggest of them all: my passport.

Also published at ft.com.

24 Oct 10:59

The Benefits of Pessimism

by Thorin Klosowski

Generally speaking, most of us try to be optimistic whenever possible, because life is a little easier to cope with if we assume everything will work out. Pessimism has its benefits, though, and The Wall Street Journal highlights a few of them.

There's more than just one type of optimist or pessimist. Here's a quick breakdown of a few different types, keeping in mind that most of us do a little of each:

  • Explanatory optimism: This style of optimism links negative events to external causes that will get better over time.
  • Explanatory pessimism: This style of pessimism links bad events to one's own faults or to external causes that will never change.
  • Strategic optimism: This style of optimism doesn't worry about a potentially stressful event and assumes things will just work out; if they don't, the optimist doesn't feel at fault.
  • Strategic pessimism: This style of pessimism uses strategies to lower expectations and decrease anxiety by thinking through all the negative outcomes and planning for them.
  • Optimism bias: We've talked about the optimism bias before. It's essentially the tendency to think that you're better at something than everyone else and that good things are more likely to happen to you.
  • Pessimistic bias: This is the tendency to think that you're worse at things than other people and to expect few good things to happen to you.

Pessimism sounds like a bummer, but it has its own unique set of benefits. The Wall Street Journal explains:

"Those who are defensively pessimistic about their future may be more likely to invest in preparatory or precautionary measures, whereas we expect that optimists will not be thinking about those things," said Dr. Lang, who noted the study controlled for factors such as health and finances, but didn't prove causality...

A study, published last year in the Journal of Neuropsychiatry & Clinical Neurosciences, evaluated the brain response of 16 older adults when processing fearful faces. People with greater optimism had reduced activity in the parts of the brain that process emotional stimuli. "Being less bothered by stresses can help in coping," said Dr. Jeste, who led the study. "On the other hand, a nonchalant attitude to dangers can leave the person poorly prepared to deal with a risky situation when it arises..."

Optimism can be a disadvantage in stressful conditions. A 2011 study of 250 couples, published in the Journal of Personality and Social Psychology, found that overly optimistic people coped worse with stress.

Most of the above linked studies show correlation rather than causation, but the point is less about the stats and more about how the line between pessimism and optimism isn't as simple as we'd like to think. It's good to strike a balance between the two, but being pessimistic now and again certainly isn't as bad a thing as people make it out to be. Just don't go too far.

The Perfect Dose of Pessimism | The Wall Street Journal

Photo by Kalyan Chakravarthy.

24 Oct 10:50

Set Meaningful Goals with These Five Principles

by Patrick Allan

Sometimes the only thing you need to get motivated is a clear, suitably challenging goal. Mind Tools explains how to set these kinds of goals with five basic principles.

These five principles, developed from Dr. Edwin Locke and Dr. Gary Latham's theory of task motivation and incentives, make your goals feel compelling and worthwhile:

  1. Clarity: Your goals need to be explicitly clear so you know exactly what you're trying to achieve, and know how to measure your progress.
  2. Challenge: Make sure your goal is challenging enough to keep you interested, but not so challenging you lose confidence.
  3. Commitment: Your goal should feel achievable so you know you won't give up, and find ways to remind yourself why you should work hard to keep moving forward.
  4. Feedback: Find ways to receive feedback on what you've done, and analyze your progress and accomplishments so you can adjust the difficulty, if need be.
  5. Task Complexity: If your goal is stressing you out because it's too complex, break your big goal down into smaller sub-goals.

With these simple principles you'll keep yourself in check while you move forward. It's easier to overwhelm yourself than you think, but you also need to strike a balance and make sure your goals are challenging enough to keep you hooked.

Locke's Goal-Setting Theory | Mind Tools