Articles published in 2015

We should decide what the state should provide and how generously, writes Tim Harford

It has never been easier to find little jobs for little payments. If you are being paid through Amazon’s Mechanical Turk to tag people’s photos or being hired to put up shelves via TaskRabbit, who needs a real job? For enthusiasts, these micro-jobs mean sticking two fingers up to the Man and rejecting wage-slavery in favour of freedom. For pessimists, they are precarious ways to earn a living that offer no pension or health insurance. Welcome to the “gig economy”, a phrase that evokes both the romantic ideals and the grinding poverty of life as a journeyman musician.

The app-based gig economy is still small. Perhaps one in 200 American workers rely on it for their main source of income; nobody is really sure. Yet it seems likely to grow, and, as it grows, so will a question: does the way we link social protections to jobs make sense?

Details vary but most advanced countries have a list of goodies that must be provided by employers rather than the government or the individual. In the UK a full-time worker is entitled to 28 days of paid leave. In the US the default provider of health insurance is your employer. In many countries, employees cannot be sacked without long notice periods and a decent pension is the preserve of people with a decent job. As for freelancers, they may enjoy flexibility and independence and sometimes even a good living — but as far as social protections go, they are on their own.

It is easy to understand the politics of this: pensions, healthcare and paid holidays are expensive, and asking employers to pick up the bill obscures their true cost. But the emergence of companies such as Uber is changing the calculus. Are Uber drivers employees or not?

Uber maintains that they are not. That seems defensible: a driver can switch the app on or off at any time, or work for a competitor such as Lyft on a whim. Few employees who acted in this way would be employed for very long.

Then again, does a driver who puts in 60 or 70 hours a week providing Uber-assigned rides according to Uber-determined rules and rates not deserve some sort of security? Some authorities think so: the company has lost a number of rulings in California as judges and arbitrators have found that, in certain cases, Uber drivers are employees.

Such judgments are likely to vary from case to case and place to place, and the uncertainty helps nobody bar the lawyers. Alan Krueger, former chairman of President Barack Obama’s Council of Economic Advisers, draws a parallel with the emergence of the workers’ compensation system a century ago. Sensible rules were agreed, he says, once lawsuits over industrial accidents became expensive and unpredictable.

But what should the new rules be? Mr Krueger’s approach is to adapt the status quo by extending some employment benefits to gig economy workers. He and his co-author, Seth Harris, recently proposed a third category of “independent workers”, neither pure freelancers nor pure employees. They would receive “all the benefits that employees get”, Mr Krueger told me, “except for the ones that don’t make sense”.


For example, if Uber drivers enjoyed the status of independent workers, they could form or join a union, and be protected under anti-discrimination laws. Uber, for its part, might offer pensions, health insurance and other products that its drivers could find attractive without fear this would lead the courts to rule that it was an employer. But independent workers would not receive paid holiday or protection from dismissal.

The Harris-Krueger proposal is based on the idea that the current package of employment rights in the US is attractive, and that America would be a better place if it were available as widely as possible. In the eurozone, where double-digit unemployment seems to be customary, it is hard to see how most protections could be applied to independent workers — and harder still to see why that would be a progressive step.

So here is a far more radical approach: we should end the policy of trying to offload the welfare state to corporations. It is a policy that hides the costs of these benefits, and ensures that they are unevenly distributed. Instead we should take a hard look at that list of goodies: healthcare, pensions, income for people who are not working. Then we should decide what the state should provide and how generously. To my mind, there is a strong argument that the state should provide all of these things, to everyone, at a very basic level. What the state will not provide, individuals must pay for themselves — or seek employers who provide these benefits as an attraction rather than a legal obligation. Call it libertarianism with a safety net.

No doubt this is just an economists’ pipe dream. Even the far tamer Harris-Krueger ideas seem unlikely to gain political traction any time soon. That is a shame. While traditional jobs suit most of us, the gig economy is perfect for some people and some circumstances. It would be a shame if our welfare state and labour laws failed to catch up.

‘Scrooge didn’t waste his money on extravagances for people whose desires he didn’t really understand’

Ebenezer Scrooge is underrated. Literature’s most notorious misanthrope gets no respect from anyone. He’s a miser, a bully and a sociopath. Only with the most strenuous pleading from three supernatural mentors does he embrace the spirit of Christmas and, in so doing, join the human race. Dickens’s story is viewed as a journey of redemption; I am not so sure.

In his original, miserly form, Scrooge actually gives us much to admire. He was a model of inadvertent benevolence. He earned vast sums and avoided spending so much as a farthing if he could help it. The economic implication of this? Regardless of Scrooge’s motives, because he spent little, everyone else enjoyed more, as surely as if Scrooge had divided his fortune and sent a few coins to everyone in the country. As the economist Steven Landsburg once wrote: “There is nobody more generous than the miser — the man who could deplete the world’s resources but chooses not to.”

This isn’t an intuitive proposition but it is true. Scrooge reminds me of Bill Drummond and Jimmy Cauty, formerly of successful dance band The KLF, who in the summer of 1994 filmed themselves burning 20,000 £50 notes — £1m — on an island in the Inner Hebrides. People who wouldn’t have batted an eyelid if Drummond and Cauty had blown the cash on fast cars and drugs were outraged at the waste. As the Ghost of Christmas Yet To Come might have pointed out, the money could have been spent on a worthy cause. On a chat show, in front of a jeering audience, Drummond explained that “burning that money doesn’t mean there’s any less loaves of bread in the world, any less apples, any less anything. The only thing that’s less, is a pile of paper.”

Drummond was quite right. He had a claim on £1m worth of goods and services and by burning the money, he didn’t destroy those goods and services — he merely relinquished his claim and let others enjoy them instead. The likely economic effect is that everything in the country became a tiny bit cheaper. If the Bank of England had worried about the (minuscule) fall in the money supply, it could have printed replacement banknotes for a couple of grand.

Scrooge’s self-denial had a similar effect: he could have splashed his money around buying houses and sweets and anything else Victorian London might offer but, in doing so, he would have denied those pleasures to others. In a deep recession, one might be concerned that Scrooge was failing to support aggregate demand but in normal economic times the effect of his skinflintery was to ensure that everyone else was able to enjoy a little more.

Ah, but we might claim: it is the thought that counts, and Scrooge thought of nothing but workhouses and humbug and himself. True. But are the rest of us really any better? We engage in a seasonal fit of generosity but generosity is not the same as empathy — as thinking deeply about what someone else might want.

Loyal readers will know that 22 years ago, Joel Waldfogel, an economist, wrote an article titled The Deadweight Loss of Christmas, in which he quantified something we all instinctively know: a lot of the presents that people give and receive aren’t terribly well chosen. We spend money on things that people don’t really like — and thereby we waste energy, material resources and labour that could have been far better deployed making something people did want.

More recent research by psychologists — notably Gabrielle Adams and Francis Flynn of Stanford, and Harvard’s Francesca Gino — has revealed a startling lack of self-awareness in our gift giving. A few of their results:

• Gift givers believe that spontaneous gifts are as welcome as those on a wish list, while wish list gifts seem charmless and impersonal. Recipients feel otherwise — they have no problem being given something from a list, and often lament the poor choices when people venture away from it.

• Gift givers think more expensive presents are appreciated more, yet gift recipients don’t care about the expense either way.

It is hard for us to grasp the discrepancy between how we see the world when giving gifts and when receiving them. Recipients may appreciate cash or presents from a list and not fuss too much about expensive gifts; gift givers, in contrast, imagine that the ideal present is an expensive surprise. It isn’t. All this suggests we should probably be spending less on presents, and thinking a lot more about the presents we do buy.

Which brings us back to Scrooge himself. When he finally did decide to embrace the conventional spirit of Christmas, he didn’t waste his money on demonstrative extravagances for people whose desires he didn’t really understand. Instead, he gave three superb gifts. First, a prize turkey that he knew — thanks to a ghostly premonition — was much needed by the Cratchit family. Second, the gift of his time and attention, playing games and making merry with his nephew. Finally, he gave Bob Cratchit the greatest Christmas gift of all: a pay rise.

‘There are two or three male undergraduate economists for every female undergraduate economist in the US. That is not good’

The world’s most powerful economist is a woman — but chair of the Federal Reserve Janet Yellen doesn’t have a great deal of female company at the economic top table. Christine Lagarde runs the International Monetary Fund, it is true — although she is a lawyer. But the president of the European Central Bank has never been a woman, nor the president of the German Bundesbank, nor the governor of the Bank of England. The US Treasury secretary has never been a woman, nor has the UK chancellor of the exchequer, nor the president of the World Bank.

To some extent this reflects the well-known fact that women are under-represented in positions of power. But beyond that it suggests that economics itself is a curiously male-dominated discipline. There has still only been one female winner of the Nobel Memorial Prize in Economic Sciences: Elinor Ostrom in 2009 (she shared the prize with Oliver E Williamson), and she was not a mainstream economist herself.

Quite why economics is so testosterone-laden is unclear. The situation at the top of the profession now is partly a reflection of the state of economics education decades ago. Ostrom’s desire to study economics was set back by the fact that she was discouraged from studying mathematics at school because she was a girl. We cannot travel back in time to give her a more enlightened maths teacher.

In the younger echelons of the profession there is more cause for cheer. The John Bates Clark medal, for example, is awarded to economists based in America under the age of 40; it is a prestigious award and in many cases a precursor to the Nobel Prize. Until 2007 it had been an exclusively male preserve but it has since been won by three women.

Another hopeful development is the blossoming of psychological realism in economics, in the form of behavioural economics. Behavioural economics doesn’t have all the answers but it is certainly asking some good questions. Since psychology is as popular among female undergraduates as economics is unpopular, it seems plausible that behavioural economics will make the dismal science more appealing to women.

Perhaps we should simply wait, then, and the women will break through? That seems doubtful. The American Economic Association publishes a regular newsletter from the “Committee on the Status of Women in the Economics Profession”, and two summers ago the newsletter pondered a depressing trend — or rather a depressing lack of a trend.

“The fraction of all bachelor of arts candidates majoring in economics has not budged much over the past decade,” wrote Cecilia Conrad of the MacArthur Foundation, co-editor of the newsletter. There are two or three male undergraduate economists for every female undergraduate economist. That is not good — but at least the ratio isn’t getting worse. In the UK, the ratio is similar but has shown a marked decline between 2002 and 2013. The basic explanation is lack of demand: too few women wish to study economics.

Tempting as it might be to blame the financial crisis for this trend, the sharpest movement occurred a decade ago, before the crisis hit. The downward trend in the percentage of women enrolled in UK undergraduate economics courses has continued since then but much more slowly.

One possible explanation is that unconscious sexism is depriving young women of role models in the profession. Justin Wolfers, an economist and New York Times contributor, recently complained that when journalists wrote about economics research with both male and female authors, journalists routinely quoted the man or cited the paper as though the man were the senior author.

(My own confession: I once committed the same sin in demoting Wolfers’ co-author Betsey Stevenson when she should have been cited first. And another confession: one indignant reviewer on Amazon pointed out that my first book, The Undercover Economist, habitually uses “he”, “his” and “him” in situations where female pronouns would have been perfectly proper. These days I try to do better.)

Unconscious sexism is hardly the exclusive preserve of economics, though, so what else is going on? Diane Coyle, an economist and FT contributor, wonders if mathematics is the problem, since girls are less likely than boys to study mathematics at A-level. She recently sounded a call to arms on her blog: “Girls, women, brush up on the maths a bit if you need to, but above all come and study economics!”

I can’t disagree with that but we cannot just blame imbalances in mathematics. A recent study by Mirco Tonin and Jackie Wahba of the University of Southampton found that even among A-level maths students, girls were less likely than boys to choose economics. And in the United States, where economics degrees are just as male-dominated as in the UK, girls have long since achieved gender parity in advanced high-school maths.

The sad truth is that economics just does not seem to be terribly attractive to young women. That is a shame, and it has financial consequences, since economics graduates tend to be well paid. But the real loss is to economics. If we cannot make the subject relevant to half the world, we have a problem.

‘People respond in profound ways to tax incentives. They adjust their behaviour to avoid tax’

“The adage ‘free as air’ has become obsolete by Act of Parliament,” thundered Charles Dickens in 1850. “Neither air nor light have been free since the imposition of the window tax. We are obliged to pay for what nature lavishly supplies to all, at so much per window per year; and the poor who cannot afford the expense are stinted in two of the most urgent necessities of life.”

Dickens prevailed: the window tax, which had been levied in England since 1696, was abolished within a year. But the curious story of the tax, explored recently by Wallace Oates and Robert Schwab in the Journal of Economic Perspectives, holds lessons for us today.

The details of the tax varied across the centuries but with the broad theme that the more windows your house had, the more tax you had to pay. At first glance, the tax seems clever, even brilliant. Rich people had larger houses, and so paid more tax. Windows are easy to count from outside the premises, so the tax was easy to assess. Poor people didn’t own large houses, so they weren’t affected by the tax. And the number of windows in a house doesn’t change, so the tax was impossible to avoid.

Wrong, wrong, wrong.

The tax was probably progressive but not nearly as progressive as it might seem. Many poor people did live in large houses — as servants or in tenement blocks. They suffered from the tax, as we shall see. Adam Smith himself, in The Wealth of Nations (1776), nailed the other problem with the idea that the tax was paid only by the rich: “A house of £10 rent in the country may have more windows than a house of £500 rent in London.”

A more fundamental error is the idea that architecture doesn’t respond to tax incentives. When William Pitt tripled the tax in 1797, thousands of windows were bricked or boarded up almost overnight. Later, the president of the society of carpenters in London told Parliament that almost every homeowner on Compton Street had approached him to reduce the number of windows. A new apartment building in Edinburgh was designed with an entire second floor filled with windowless bedrooms.

When Dickens complained that the poor were being denied light and air, he wasn’t speaking figuratively. Poor people did not have to pay the tax out of their own pockets but their landlords did, and the poor dwelt in stuffy darkness as a result.

After 1747, the window tax followed a strange structure. Houses with fewer than 10 windows paid no window tax; those with 10-14 windows paid six pence per window per year. As a result, the cost of having a 10th window was that you also had to pay tax on the other nine. Tax wonks call such discontinuities “notches”, and there were further notches at 15 and 20 windows.
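The notch arithmetic above can be sketched in a few lines of code. This is an illustrative toy, not a reconstruction of the real schedule: only the two bands quoted above are modelled, since the rates for the 15- and 20-window bands are not given here.

```python
# Window tax, simplified from the post-1747 schedule described above:
# fewer than 10 windows -> no tax; 10-14 windows -> six pence per window.
# Rates above 14 windows are not quoted here, so the sketch stops there.

def window_tax_pence(windows: int) -> int:
    """Annual tax in pence for a house with up to 14 windows."""
    if windows < 10:
        return 0
    if windows <= 14:
        # the rate applies to every window, not just the marginal one
        return 6 * windows
    raise ValueError("rates above 14 windows not modelled")

# The "notch": the 10th window costs not 6d but 60d a year,
# because it drags the other nine into the taxed band.
print(window_tax_pence(9))   # 0
print(window_tax_pence(10))  # 60
```

Bricking up one window to bring a 10-window house down to nine saves the full 60 pence a year, which is why blocking up windows was such an attractive response.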

If these notches seem absurd to you, modern governments don’t seem to have a problem with them. Stamp duty, a tax on property transactions in England and Wales, contained notches until last year, and the UK income tax system recently acquired a new notch: a transferable tax allowance for married couples, worth more than £200, evaporates abruptly if one of the couple strays into a higher tax bracket, even by a single pound. All of this distorts our behaviour.

Nonsensical as the notches are, they help economists to see the effect of the tax. In the recent study, Oates and Schwab combed through tax records from the mid-1700s. They found that nearly half of all houses had the tax-efficient totals of nine, 14 or 19 windows — an intelligent response to a foolish tax. People who bricked up a couple of windows to bring the total down to nine were inconvenienced by the tax, yet the Treasury earned no revenue from them. Most taxes will produce some of this sort of waste but the window tax was particularly egregious.

If it seems strange that tax policy could shape the architecture of a country, consider New Orleans’s distinctive camelback houses, one storey high at the front (the part of the home that’s taxable) but with two storeys at the back — a tax-efficient architectural style.

And ponder the research of economists Joshua Gans and Andrew Leigh, who noted that twice as many births were recorded in Australia on July 1 2004 as on June 30 2004. Why? The July babies were eligible for a “baby bonus” of A$3,000 and the June babies were not. Gans and Leigh even found that many Australians delayed their deaths — or perhaps the moment their deaths were recorded — long enough to escape inheritance tax when it was abolished on July 1 1979. If our births and deaths respond to tax incentives, it shouldn’t be surprising that a few windows might be bricked up.

There is a useful lesson to be learnt from the window tax: it is that people will respond in quite profound ways to tax incentives. That is why economists often call, more in hope than expectation, for a tax on carbon emissions. People would adjust their behaviour to avoid the tax, which is exactly what we need.

But perhaps a more realistic lesson is this: it’s perfectly possible for a bad tax to last for 155 years.

‘Coming up with something new is for suckers; smart people sit back and rip off the idea later’

In 1737, a self-taught clockmaker from Yorkshire astonished the great scientists of London by solving the most pressing technological problem of the day: how to determine the longitude of a ship at sea. The conventional wisdom was that some kind of astronomical method would be needed. Other inventors suggested crackpot schemes that involved casting magic spells or ringing the world with a circle of outposts that would mark the time with cannon fire.

John Harrison’s solution — simple in principle, fiendishly hard to execute — was to build an accurate clock, one that despite fluctuating temperatures and rolling ocean swells, could show the time at Greenwich while anywhere in the world. Harrison and countless other creative minds were focused on the longitude problem by a £20,000 prize for the person who solved it, several million pounds in today’s money.

Why was the prize necessary? Because ideas are hard to develop and easy to imitate. Harrison’s clocks could, with effort, have been reverse engineered. An astronomical method for finding longitude could have been copied with ease. Inventing something new is for suckers; smart people sit back and rip off the idea later. One way to give non-suckers an incentive to research new ideas, then, is an innovation prize — that is, a substantial cash reward for solving a well-defined problem. (Retrospective awards such as the Nobel Prize are different.)

For decades after Harrison’s triumph, prizes were a well-established approach to the problem of encouraging innovation. Then they fell out of favour, with policymakers instead encouraging innovation with a mix of upfront research grants and patent protection. Now, however, prizes are making a comeback. The most eye-catching examples have been in the private sector: the $1m Netflix prize for improved personalisation of film recommendations or the $10m Ansari X prize for private space flight. Last year Nesta, a UK-based charity for the promotion of innovation, launched a “new longitude prize” of £10m for an improved test for bacterial infections, marking the anniversary of the original prize’s founding in 1714.

But the big money potential is in the public sector. In 2007, several governments (and the Gates Foundation) promised a $1.5bn prize for a vaccine for pneumococcal meningitis. The prize, called an “advance market commitment”, is structured as a dose-by-dose subsidy rather than one giant cheque. It is being paid out and millions of children have already been vaccinated. Much bigger commitments are possible: before US senator Bernie Sanders began his run for the presidency, he introduced two Senate bills that would have provided almost $100bn a year as medical innovation prizes.

But why are innovation prizes attractive, when the existing system of grants and patents seems to have served us reasonably well so far?

Research grants may be too conservative, favouring establishment figures working on unambitious projects, and rewarding process rather than results. Such conservatism is not inevitable but it goes with the territory. An innovation prize seems more meritocratic and, since it pays only for results, the prizes can set radical goals.

Patents are particularly problematic, since they encourage the development of something that anyone can use — a new idea — with the perverse reward of restricting access to that idea. That is a trade-off that is easily bungled, with patents that last too long, are too broad, too easy to secure and too difficult to challenge.

Even a well-crafted patent system depends on there being a ready market for the innovation in question. Few people will pay much for a malaria vaccine but it would be socially very valuable, as would a new class of antibiotics. A prize can easily reward long-term social priorities such as these; a patent cannot.

But there is a danger of expecting too much from prizes. If we are to scrap patents entirely, prizes would be far too narrow a replacement. (Who would have sponsored a prize “for inventing the internet”? Not all innovations exist to solve precooked problems such as finding longitude.) If we use patents and prizes in parallel, however, there’s a self-selection problem: inventors with truly valuable ideas apply for patents, while those with dross apply for prizes. A new working paper from economic historian Zorina Khan points out that Royal Society of Arts prizes in the 19th century suffered from exactly such adverse selection.

Khan also observes that many celebrated historical innovation prizes were actually mired in controversy, with prizes awarded for unoriginal or ineffective ideas, or denied to the deserving. It’s easy to point to a few success stories but there are plenty of those for patents and grants too.

For my money the patent system urgently needs reform, with patents that are harder to earn and easier to challenge. Innovation prizes definitely have their place, especially where markets for a socially valuable innovation may not exist. But we do a good idea no favours by overselling it. We should also probably stop going on about the Longitude Prize, or at least we should admit what Nesta’s new prize website does not: that Harrison’s invention was rewarded with decades of suspicion and controversy. The Board of Longitude, the government body set up to administer the prize, questioned both the accuracy of his clocks and whether they could be replicated. Harrison did receive numerous payments for his efforts — but neither he nor anyone else ever won the Longitude Prize.

‘The Budget bears not even a passing resemblance to how we economists were taught that taxes should work’

I sometimes feel that seeing the world through the eyes of an economist is like seeing the world through the ears of a bat. We notice a lot that others miss, and we miss a lot that others notice.

The annual rituals of the chancellor’s Budget, and next week’s Autumn Statement — focal points of the British political calendar — are good examples. Tax thresholds are nudged up and down, allowances introduced and withdrawn, and much attention is given to the tax on a pint of beer. None of this performance bears even a passing resemblance to how we economists were taught that taxes should work.

So how would the Budget look if designed by economists? Economists’ ideas about taxation are based on three pillars. The first, developed by Cambridge economist Arthur Pigou in 1920, is that we should tax things that have unpleasant spillover effects on bystanders — “externalities”. The classic example is to tax activities that produce pollution. A few Pigouvian taxes exist in the UK but they are patchy: the tax on petrol is implausibly high, for example, while that on domestic fuel is strangely low.

The second pillar was erected in 1927 by another Cambridge man, Frank Ramsey, shortly before his death at the age of 26. Ramsey showed that taxes should be focused on products that aren’t very responsive to price. This is because a tax on a price-sensitive good will simply destroy demand. The consumer won’t buy the good, the retailer can’t sell the good, and the taxman doesn’t collect any tax. Everyone loses out.

Ramsey’s ideas, too, are patchily implemented. Basic foodstuffs such as rice and bread might look like excellent candidates for high taxes in the pages of a learned journal but less so on the front page of a newspaper.

The third pillar was unveiled in 1971 by James Mirrlees — now a Nobel laureate — who tried to figure out what could be said about optimal income taxation. One of his conclusions, surprising to him as much as anyone else, was that an optimal income tax might impose flat or even falling marginal tax rates.

This counter-intuitive idea requires us to see the difference between the marginal rate of tax — the headline rate, paid on each extra pound earned — and the average rate of tax that an individual pays. The two can be very different: if everyone pays a marginal tax rate of 50 per cent, with a £10,000 allowance, then someone with an income of £10,000 pays no tax; an income of £20,000 will attract a 25 per cent average rate; an income of £1m will attract a 49.5 per cent average rate. Yet while the tax burden is progressive, everyone must give half of any extra earnings to the taxman.
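The arithmetic in that example is easy to check. The sketch below assumes the same hypothetical schedule, a flat 50 per cent marginal rate above a £10,000 allowance, and says nothing about any real tax system:

```python
# Hypothetical schedule from the example above: a flat 50 per cent
# marginal rate on everything earned beyond a £10,000 allowance.

def tax_due(income: float, allowance: float = 10_000, rate: float = 0.5) -> float:
    """Tax owed under the flat-rate-above-allowance schedule."""
    return max(0.0, income - allowance) * rate

def average_rate(income: float) -> float:
    """Total tax as a fraction of total income."""
    return tax_due(income) / income

print(average_rate(10_000))     # 0.0   -> no tax at all
print(average_rate(20_000))     # 0.25  -> 25 per cent average rate
print(average_rate(1_000_000))  # 0.495 -> 49.5 per cent average rate
```

The average rate climbs towards 50 per cent as income rises even though the marginal rate never changes: a progressive burden from a flat headline rate.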

Why did Mirrlees argue that the best marginal rate might be falling or flat, as in the example above? The answer is that high marginal rates on top incomes are almost a pure discouragement for the rich to earn money. But high marginal rates on lower incomes will raise money from lots of people, without discouraging work. If you earn £40,000 and the chancellor raises income tax in the £20,000 to £30,000 band, that should encourage you to work harder.

This isn’t conclusive proof that marginal tax rates should fall rather than rise — there are lots of other factors at play — but it was a surprising and powerful argument, and one of the few that politicians did seem to absorb.

These three pillars have been standing for a while. So what’s new in the economics of taxation?

. . .

One answer: better data. The next generation of economists, people such as Raj Chetty and Amy Finkelstein, are drawing on data that the likes of Pigou and Ramsey could hardly have imagined. As a result they are able to blend ideas from mainstream economic theory with the psychological insights beloved of behavioural economists. It’s a pragmatic approach, depending on the problem at hand and what the data tell us.

A few years ago, Finkelstein looked at what happens when tollbooths offer electronic toll collection, allowing drivers to breeze through without fiddling for change. She found persuasive evidence that the electronic toll weighed less heavily in people’s minds — they forgot exactly what the price was and began to ignore it. Toll collectors, quite rationally, respond by raising the toll.

More recently, Chetty and co-authors tried to estimate whether the earned income tax credit (EITC) in the United States, a work-related subsidy paid to parents, encouraged people to work more. With a truly mind-boggling dataset boasting 78 million taxpayers and 1.1 billion income statements, they found that the EITC can work very well — if people know about it. In areas with lots of claimants, new parents tended to be well-informed and to respond to the EITC. In other areas, the EITC was not widely understood, and it was less effective.

Perhaps this new data-driven, psychologically realistic approach to tax will win political support. After all, Finkelstein discovered a tax that works best when concealed, while Chetty found a benefit payment that works best when widely trumpeted. Boasting about the good news and hiding the bad? That is the kind of economic theory that any politician can love.

‘It is not clear that the US economy has suffered much from terrorism, even from the enormity of 9/11’

This article was published before the November 13 terrorist attacks in Paris.

On a long-haul flight recently, I was jerked from the usual concerns over legroom and a power socket by a memory. I recalled the flight I had taken a few weeks after watching the Twin Towers of the World Trade Center collapse on television. It was an eerily quiet journey from London to Cape Town. I was in a state of mortal fear. But despite occasional grim reminders that terrorists can kill, my dread then seems foolish to me now.

Every violent death is an awful thing but there are many other ways to die a violent death, even in a rich country. Each year, one in every 8,000 Americans dies by suicide, an American citizen has a one in 9,000 chance of dying in a motor vehicle accident, and a one in 20,000 chance of being a victim of murder or manslaughter. Even in 2001, the chance of an American being killed by a terrorist was less than one in 100,000. In more typical years the figure is one in 10 million. For Americans, terrorists are about as dangerous as lightning strikes.
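The arithmetic behind these comparisons is straightforward. A minimal sketch, using the "1 in N" odds quoted above (the figures are the article's, not fresh data):

```python
# Annual per-capita risks for Americans, expressed as "1 in N" odds,
# as cited in the text.
risks = {
    "suicide": 8_000,
    "motor vehicle accident": 9_000,
    "murder or manslaughter": 20_000,
    "terrorism (2001, an exceptional year)": 100_000,
    "terrorism (typical year)": 10_000_000,
}

# Rank from most to least likely and show the implied annual probability.
for cause, n in sorted(risks.items(), key=lambda kv: kv[1]):
    print(f"{cause}: 1 in {n:,} (p = {1 / n:.7f})")

# Even against road deaths alone, a typical year's terrorism risk is
# more than a thousand times smaller.
ratio = risks["terrorism (typical year)"] / risks["motor vehicle accident"]
print(f"ratio: {ratio:.0f}x")
```

Ranked this way, terrorism in a typical year sits at the bottom of the list by three orders of magnitude, which is the point the comparison with lightning strikes is making.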

These dry statistics do not diminish the anguish of those who have lost a loved one to a terrorist attack. Terrorism is no trivial thing; but losing a daughter to suicide or a son in a motorcycle accident is not trivial either, and it is something many more people must endure.

There are other costs to terrorism, deftly surveyed in Alan Krueger’s 2007 book, What Makes a Terrorist. In 2003, economists Alberto Abadie and Javier Gardeazabal published an estimate of what Eta’s terrorist campaign — which at the time had killed 800 people — might be doing to the economy of the Basque Country. Abadie and Gardeazabal estimated that the attacks had, over time, reduced the gross domestic product of the region by 10 per cent. A year later, Zvi Eckstein and Daniel Tsiddon applied a different method to a different country — Israel — but produced the same estimate of the costs: GDP down by 10 per cent because of terrorist attacks. If correct, these are very large costs. (Even the suspicion of an attack on a Russian passenger plane over Egypt — still unconfirmed as I write — is damaging the tourism industry in Sharm el-Sheikh.)

But it is less clear that the US economy has suffered much from terrorism, even from the enormity of 9/11. Official estimates were that the attack on Manhattan destroyed more than $13bn of office space and damaged almost $17bn more. Perhaps 75,000-100,000 jobs were lost in the immediate wake of the attack, particularly in travel and tourism. Yet the received wisdom — summarised in a 2005 book, Resilient City — is that New York bounced back rapidly, recovering the obvious economic losses within about a year. Rebuilding physical infrastructure took longer but in a city such as New York, buildings are demolished and replaced all the time. In the interim, people squeezed into tighter spaces, or companies rented space in suddenly empty hotels while things were sorted out. New York adapted.

This is encouraging and should not be entirely surprising. Natural disasters such as earthquakes can do far more damage, and economies recover from them, too. The classic study here is economist George Horwich’s analysis of the impact of the earthquake that devastated Kobe, Japan, in 1995. The earthquake destroyed 100,000 homes and made 300,000 people homeless. Yet 15 months after the disaster, Kobe’s manufacturing output was back to 98 per cent of pre-quake levels.

The recovery was not complete: there was no serious effort to resurrect industries that were already under pressure from foreign competition, such as the plastic shoe business. But many of the industries that were flourishing before the disaster were flourishing again in time.

Perhaps the true impact of terrorism is psychological — the clue is in the name. A few months after 9/11, a small plane flew into the Pirelli Tower in Milan. The news that this was not a terrorist attack provoked widespread relief. That relief (which I shared) is strange. The Pirelli crash killed three people; knowing that the crash was an accident does not make them any less dead. But it makes their deaths less unsettling.

. . .

There have been attempts to measure the psychological impact of terrorism. One plausible finding, from a team led by psychologist Roxane Cohen Silver, is that 60 per cent of Americans suffered some symptoms of anxiety in the weeks immediately following the 9/11 attacks, but that the figure ebbed to 30 per cent within two months and to 10 per cent within half a year. The attack seems to have had the same effect on the American psyche as it did on the New York economy: a severe but transitory impact.

Despite all the evidence that even the most grotesque acts of terrorism have a transitory effect, it remains a popular tactic. The reason for that is perhaps best summarised in Eric Frank Russell’s 1957 novel, Wasp, about a terrorist. The title refers to the tale of a tiny wasp, armed with a sting it does not even use, causing the deaths of four people. They’re in a car; the driver, agitated by the wasp, crashes and kills them all.

The terrorists’ best hope lies in provoking an overreaction. Too often, they succeed.

‘The simplest explanation for lengthy disputes? That people misperceive their chances of winning’

From a purely rational perspective, costly arguments are puzzling. A divorce case that goes to court, an industrial dispute that leads to a strike, even the extreme case of a war — all these things are, to put it in the mildest possible terms, a waste.

Of course, there will always be conflicts — but logical people should resolve them quickly. Consider a simplified model of this process. Two people, Amy and Ben, are arguing over how to divide a baked Alaska, the centre of which is gradually melting. First Amy makes an offer. Ben may accept it or propose a counter-offer. Amy, in turn, may accept that or make a counter-counter-offer. Each time an offer is rejected, the delicious dessert shrinks by 10 per cent. Amy and Ben have opposing interests because each would prefer to have the entire dessert to eat alone. But they also have one thing that they can agree on: given the situation, both would be wise to shake hands on a deal promptly and start eating.
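This melting-dessert game is the classic alternating-offers bargaining model. Under the standard analysis, if each rejection shrinks the pie by 10 per cent (a discount factor of 0.9), the first proposer should offer the responder just enough to make waiting pointless, and the offer is accepted at once. A minimal sketch of that textbook solution (the function name is mine, not the article's):

```python
def equilibrium_split(delta: float) -> tuple[float, float]:
    """Subgame-perfect shares in the alternating-offers game where the
    pie shrinks by factor `delta` after each rejected offer.

    The proposer keeps 1/(1+delta); the responder accepts the remaining
    delta/(1+delta) immediately, so nothing ever melts."""
    proposer = 1 / (1 + delta)
    return proposer, 1 - proposer

# With 10 per cent melting per round (delta = 0.9):
amy, ben = equilibrium_split(0.9)
print(f"Amy (first mover): {amy:.3f}, Ben: {ben:.3f}")
# → Amy (first mover): 0.526, Ben: 0.474
```

Note that the theory predicts agreement in the very first round: the entire cost of delay is anticipated, so it is never incurred. That is what makes real-world strikes and court battles so puzzling.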

A similar logic suggests that an industrial dispute should never lead to a strike. Instead, employers and unions should see that a strike will cost them both dearly, and find some way to resolve their differences. Civil litigants should always agree a settlement before having to go through costly legal proceedings. Often, this is what happens — but not always. Why?

Economists have a few ideas. One cynical suggestion is that some people are playing a different game. Perhaps a belligerent politician or union leader would find his or her position strengthened by a strike. A general might desire a war. Lawyers might profit from urging their clients to go to court.

Another possibility is that people need to signal their willingness to fight in battle after battle. Imagine a large company is being sued by a small competitor for some transgression. If it settles out of court, other competitors will scent blood and dart in like piranhas, so it fights a costly case to scare other would-be litigants away.

But the simplest explanation is that people misperceive what is fair and also their chances of winning.

Consider that melting baked Alaska again. The obvious division is a 50/50 split, and in laboratory experiments that is usually what Amy and Ben will agree, and quickly. (The sophisticated equilibrium of this game is not quite 50/50 because Amy has the advantage of moving first, but it is close.) But what if the disagreement were more complex? For example, what if Amy preferred the meringue topping, which was not melting, and Ben preferred the ice-cream centre, which was? This complication introduces doubt as to what the intuitive split should be. Ben may still see 50/50 as the logical split, while Amy may feel that she is in a stronger position and should get more. Each side may believe that the division that happens to suit them is objectively fairer, a self-serving bias. And indeed, in laboratory experiments players usually fail to reach a swift agreement in such circumstances.

In a more realistic setting, such as an industrial dispute or a legal case, there will typically be several ways of seeing the problem and several different settlements that could be justified as fair. When the disputants fixate on different settlements, agreement may be derailed.

To test this idea, Linda Babcock and George Loewenstein, behavioural economists at Carnegie Mellon University, once asked experimental subjects to ponder the facts of a real tort case from Texas. A motorcyclist had been injured after a collision with a car and sued the driver of that car for $100,000. The subjects were randomly given the role of the motorcyclist or the driver and asked to role-play negotiating a pretrial settlement, with the case proceeding to court if no settlement could be agreed. The experimental pay-offs mimicked the structure of the real case, including a reward for reaching a pretrial deal.

The subjects were also asked to make a guess as to what damages the judge awarded in the real case — with a cash bonus if their guess was accurate. Despite this bonus for accuracy, the “motorcyclists” guessed that the judge had awarded almost $15,000 more than the “drivers” guessed. Their entire view of the case had been biased by their own self-interest. No wonder that plaintiffs and defendants sometimes fail to reach a settlement.

Wasteful conflicts may also occur because of wishful thinking about the outcome. Strikers may assume that an employer will soon cave in to pressure. Litigants may overrate the strength of their case and the competence of their lawyer.

A few years ago, Guy Mayraz, who is now a behavioural economist at the University of Melbourne, conducted a test of wishful thinking. He divided experimental subjects into “farmers”, who benefited from high wheat prices, and “bakers”, who profited when wheat was cheap. Then he showed them historical charts of wheat prices and asked them to make forecasts. Mayraz paid a bonus for accuracy, yet the farmers systematically predicted higher prices than the bakers. This is wishful thinking in its purest form. Whether engaged in a tough negotiation, or simply trying to predict the future, we find it hard to distinguish between what is true and what we wish were true.

I’ve just been told that the BBC World Service has won this year’s Association for International Broadcasting Radio Journalism award for its coverage of the Ebola outbreak in West Africa. A live special I presented with Solomon Mugera, featuring Hans Rosling and Margaret Lamunu and produced by Ruth Alexander, was singled out for praise. The World Service richly deserves the award and I’m delighted to have made a contribution.