Tuesday, 30 June 2009

Region: With Gary Becker, you developed a theory of “rational addiction.” Could you give us a description of what seems, on its surface, a very counterintuitive concept?

Murphy: OK. Let’s take that rational addiction framework. I guess I’ll tie together—and I think this is what’s important really—the predictions of the theory along with the mechanics of the theory.

We laid out in our analysis how someone would behave who was a perfectly rational individual faced with the notion that if he starts, say, smoking cigarettes, that that will have an effect on his desire to smoke cigarettes in the future—that is, our perfectly rational individual realizes that smoking today raises his demand for smoking in the future. And he takes that into account in his decision-making.

He also takes account of the impact of smoking today on other things in the future, like his future health—smoking today means he’s more likely to get lung cancer or cardiovascular disease.

That theory has some pretty simple implications. One is, if I learn today that smoking is going to harm me in the future, then I will smoke less—that is, people will respond to information about the future.

People will also respond to future prices. If they think cigarettes are going to be more expensive in the future, developing a taste for cigarettes is a more expensive habit, and they will have an incentive to avoid building up a smoking habit.

A major implication that we tried to test in the data was, do anticipated increases in the future price of cigarettes impact smoking today? And what we found when we went to the data was yes, there’s a pretty strong pattern saying that anticipated future changes in the price of cigarettes actually show up as less smoking today.

Now, what’s interesting is you can compare that with what we call a naïve or myopic model. In a myopic model, people don’t look forward and, therefore, they only decide whether to smoke based on the current price of cigarettes. They don’t care about the future price. And the data actually reject that simple myopic model in favor of the rational addiction framework.

So I think the empirical evidence that we found was consistent with the rational addiction model. It was that evidence that convinced us, more than anything, that we were on to something. We wrote down the theory because we wanted to understand, what does the theory have to say? We then took it to the data to say, well, do the data bear out this theory or do they bear out a more traditional theory, that addicts are somehow completely irrational? And we found that the data say, well, people seem to respond at least somewhat in the direction of being rational.

You don’t want to overstate it though. Our data don’t say people are completely rational. It looks like they’re mostly rational is the way I would interpret our data.

Region: Bounded?

Murphy: Well, I don’t know if it’s the same as bounded rationality, but they take account of future prices but not quite as much as the theory would say they should. The myopic theory says there should be a zero. Let’s say as a normalization, the rational addiction framework says you’d get a one; you actually kind of get a number like 0.7 or 0.75. So it’s closer to the rational model than the myopic model, but it’s not a 100 percent victory. It’s a 75 percent victory for the rational model. So it comes out to be a useful model for understanding behavior, but not a perfect model.

Subsequently, others have gone out and modified the model and tried to make it consistent with bounded rationality and hyperbolic discounting and all kinds of other things, so I think there’s been a lot of work that’s built on our model, that tries to help explain that last 25 percent that we missed. But I take it as saying that, look, the model is a very useful model for thinking about the world.

And I don’t think it’s that surprising to people. One of the things that comes into people’s minds when they smoke is, they think about the future, they think about should I really be smoking, it’s bad for me. Most people who quit smoking don’t quit smoking because they don’t enjoy it. Right? There’s nobody out there who said, you know, I quit smoking because I didn’t enjoy smoking. You ever meet anybody who said, I quit because I didn’t enjoy it?

No, people say, I quit because I worried about my health, worried about my children, it costs too much. But very few people stop smoking because they don’t enjoy it. And that tells you immediately that there’s an element of rationality to their decision-making. Maybe not as much as there should be, in some people’s minds, but there’s certainly an element of rationality in the smoker’s mind.

If you ask people who don’t smoke why they don’t smoke, there’s an element of rationality too. They say, well, I don’t want to smoke because I don’t want to get addicted and I don’t want the bad health consequences. So I don’t find it surprising that a model that says that people look forward has some predictive power. I think a lot more people would smoke if they didn’t worry about the future.

This bit is interesting,

Most people who quit smoking don’t quit smoking because they don’t enjoy it. Right? There’s nobody out there who said, you know, I quit smoking because I didn’t enjoy smoking. You ever meet anybody who said, I quit because I didn’t enjoy it?

So people get benefits from smoking; they actually enjoy doing it. But what about alcohol? Any benefits from drinking alcohol? Maybe we should commission a report to find out.

Saturday, 27 June 2009

Deepak Lal is the James S. Coleman Professor of International Development Studies at the University of California at Los Angeles, Professor Emeritus of Political Economy at University College London, President of the Mont Pelerin Society and a Senior Fellow of the Adam Smith Institute.

He was a member of the Indian Foreign Service (1963-66) and has served as a consultant to the Indian Planning Commission, the World Bank, the Organization for Economic Cooperation and Development, various UN agencies, South Korea, and Sri Lanka. From 1984 to 1987 he was research administrator at the World Bank.

Lal is the author of a number of books, including The Poverty of Development Economics; The Hindu Equilibrium; Against Dirigisme; The Political Economy of Poverty, Equity and Growth; Unintended Consequences: The Impact of Factor Endowments, Culture, and Politics on Long-Run Economic Performance; and Reviving the Invisible Hand: The Case for Classical Liberalism in the 21st Century.

Friday, 26 June 2009

As Eric Crampton has noted over at Offsetting Behaviour, Treasury has weighed in on the BERL report into the social costs of alcohol and drugs and they don't seem too pleased with either the BERL report or the use of it by the Law Commission. See the NBR article here.

On the Burgess and Crampton response to BERL, the Deputy Secretary of the Treasury Peter Bushnell says,

“I think the points they’re making are sound about adding the costs of production into the cost of it, and not counting any benefits. In a market if you’re selling something that people are prepared to pay for, then they’ve at least got that much benefit, otherwise they wouldn’t have bought the stuff. So if you exclude the benefits then you’re clearly only looking at one side of the story.”

“I can see the point being made in the article – it looks pretty shonky” said Dr Bushnell. “I think the fact that some work’s done that academic review says is pretty shonky is a problem by itself.

"We don't know what those dudes at BERL were smoking when they wrote this, but we're upset they've been holding out on us. If they just hand over the drugs, we at least can promise not to push fuzzy-headed economics and public policy while using them."

Bushnell goes on to say,

[...] the onus should be on the Law Commission to be rigorous, Dr Bushnell said.

“Geoffrey’s reputation is reduced [if] he’s putting weight on something that actually doesn’t stack up. So the Law Commission ought to ... build in processes that give adequate QA and so on.

“What we’re saying is it’s your reputation that’s at risk here. It doesn’t reflect well on the Law Commission if it ... backs [work], that doesn’t have a sound basis.”

The interesting thing here is that this is a very strong statement coming from a very senior member of the Treasury. It is unusual to see such statements. Treasury cannot be happy.

The NBR also says

Sir Geoffrey was overseas when contacted by NBR, and has declined to comment on the matter thus far.

Is he running for cover? It will be interesting to see what he says, if anything, on the matter when he returns from overseas.

Meanwhile, BERL are yet to comment on Treasury’s bollocking of their work. At this point, the last word from “BERL Chief Economist Ganesh Nana” is that “BERL stands by its report.” If that’s still the case, I’d suggest you start discounting everything they say.

Thursday, 25 June 2009

Hundreds of New York City public school teachers accused of offenses ranging from insubordination to sexual misconduct are being paid their full salaries to sit around all day playing Scrabble, surfing the Internet or just staring at the wall, if that's what they want to do.

Because their union contract makes it extremely difficult to fire them, the teachers have been banished by the school system to its "rubber rooms" — off-campus office space where they wait months, even years, for their disciplinary hearings.

The 700 or so teachers can practice yoga, work on their novels, paint portraits of their colleagues — pretty much anything but school work....Because the teachers collect their full salaries of $70,000 or more, the city Department of Education estimates the practice costs the taxpayers $65 million a year.

Exit barriers are in effect entry barriers. Why would you employ anyone as a teacher if it is this difficult to get rid of those who turn out not to be up to it? The hiring process must be hell, since you just can't take a chance on picking the wrong teacher.

Wednesday, 24 June 2009

One extremely popular class would be “Maximising Your CEO Pay”. This columnist once heard Mr Welch tell a chief executives’ boot-camp that the key was to have the compensation committee chaired by someone older and richer than you, who would not be threatened by the idea of your getting rich too. Under no circumstances, he said (the very thought clearly evoking feelings of disgust), should the committee be chaired by “anyone from the public sector or a professor”.

In their book "Pay Without Performance: The Unfulfilled Promise of Executive Compensation", Lucian Bebchuk and Jesse Fried argue that executive compensation is set by managers themselves to maximise their own pay, rather than by boards looking after the interests of shareholders. Some commentators have gone so far as to argue that executives’ pay schemes were major contributors to the financial crisis, encouraging them to take on too much risk and manage their company for short-term profit.

In a column at VoxEU.org, Alex Edmans and Xavier Gabaix propose a solution to address the economic issues that are at the heart of the current crisis and to prevent future value destruction. Edmans and Gabaix argue that existing pay schemes have two major problems,

First, stock and options typically have short vesting periods, allowing executives to “cash out” early. For example, Angelo Mozilo, the former CEO of Countrywide Financial, made $129 million from stock sales in the twelve months prior to the start of the subprime crisis. This encourages managers to pump up the short-term stock price at the expense of long-run value – for instance by originating risky loans, scrapping investment projects, or manipulating earnings – because they can liquidate their holdings before the long-run damage appears. Long-term incentives must be provided for the manager to maximise long-term value, which we call the “long-horizon principle.”

Second, current schemes fail to keep pace with a firm’s changing conditions. If a company’s stock price plummets, stock options are close to worthless and have little incentive effect – precisely at the time when managerial effort is particularly critical. This problem may still exist even if the executive has only shares and no options. Consider a CEO who is paid $4 million in cash and $6 million in stock. If the share price halves, his stock is now worth $3 million. Exerting effort to improve firm value by 1% now increases his pay by only $30,000 rather than $60,000 and may provide insufficient motivation. To maintain incentives, the CEO must be forced to hold more shares after firm value declines. Our research has shown that, to motivate a manager, a given percentage increase in firm value (say 10%) must generate a sufficiently high percentage increase in pay (say 6%). In the above example, this is achieved by ensuring that, at all times, 60% of the manager’s pay is stock. We call this the “constant percentage principle.” The appropriate proportion will vary across firms depending on their industry and life cycle, but we estimate 60% as a ballpark number for the average firm.
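The arithmetic behind the "constant percentage principle" example above is easy to check with a short sketch (the function below is my own illustration, not from the article):

```python
def pay_increase(cash, stock_value, improvement=0.01):
    """Dollar pay increase the CEO sees from improving firm value by
    `improvement`. Only the stock component moves with firm value;
    the cash component is fixed."""
    return stock_value * improvement

before = pay_increase(4e6, 6e6)  # $6m of stock: $60,000 for a 1% improvement
after = pay_increase(4e6, 3e6)   # stock halves to $3m: only $30,000
print(before, after)
```

The incentive from a 1% improvement halves along with the share price, which is exactly why the authors argue the stock share of pay must be topped back up.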

The "long-horizon principle" and the "constant percentage principle" can be achieved by giving the executive a scheme Edmans and Gabaix call an "Incentive Account". Their scheme

[...] contains two critical features – rebalancing to address the constant percentage principle and gradual vesting to satisfy the long-horizon principle. Each year, the manager’s annual pay is escrowed in a portfolio to which he has no immediate access. In the above example, 60% of the portfolio is invested in the firm’s stock and the remainder in cash. As time passes and the firm’s value changes, this portfolio is rebalanced monthly so that 60% of the account remains invested in stock at all times. In our example, after the stock price halves, the Incentive Account is now worth $7 million ($4 million cash and $3 million of stock). This requires the CEO to hold $4.2 million of equity, which is achieved by using $1.2 million of cash to buy stock. This satisfies the “constant percentage principle” and maintains the manager’s incentives after firm value has declined. Importantly, the additional stock is accompanied by a reduction in cash – it is not given for free. This addresses a major concern with repricing stock options after the share price falls – the CEO is rewarded for failure.
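The rebalancing step can be sketched in a few lines using the figures from the passage above (the function itself is my own illustration):

```python
def rebalance(cash, stock, target=0.6):
    """Rebalance an Incentive Account so `target` of its value is in stock.

    The extra stock is bought with escrowed cash, not granted for free,
    which is what distinguishes this from repricing underwater options."""
    total = cash + stock
    new_stock = target * total
    trade = new_stock - stock  # cash spent buying (or raised selling) stock
    return cash - trade, new_stock

# Start with $4m cash + $6m stock; the share price then halves,
# leaving $4m cash + $3m stock, a $7m account.
cash, stock = rebalance(4e6, 3e6)
print(cash, stock)  # $2.8m cash and $4.2m stock: $1.2m of cash bought stock
```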

Each month, a fixed fraction of the Incentive Account vests and is paid to the executive. Even when the manager leaves, he does not receive the entire value of the Incentive Account immediately. Instead, it continues to vest gradually; full vesting will occur only after several years. By then, most manipulation or hidden risk will have become public information and affected the stock price and thus the account’s value. Since the manager has significant wealth tied in the firm even after his departure, he has fewer incentives to manipulate earnings in the short term.
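A minimal sketch of the gradual-vesting mechanics (the 3% monthly rate is my illustrative assumption; the article does not specify a number):

```python
def vest(balance, fraction, months):
    """Pay out a fixed fraction of the remaining escrowed account each
    month; return the payouts and the balance still escrowed."""
    payouts = []
    for _ in range(months):
        paid = fraction * balance
        payouts.append(paid)
        balance -= paid
    return payouts, balance

payouts, remaining = vest(7e6, 0.03, 24)
# Even two years after a departure, a large slice of the account is still
# escrowed and exposed to any later fall in the stock price.
print(round(remaining))
```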

While the Incentive Account may seem a marked departure from current practices, it can be approximately implemented using standard compensation instruments without setting up a special account. In each period, the board pays the CEO a mix of deferred (cash) compensation and restricted stock. If performance is poor, the next period the CEO’s salary is paid exclusively in restricted stock; upon strong performance, it is paid exclusively in deferred cash.

Edmans and Gabaix note that the idea of gradual vesting is not without its costs. Compared to short-term vesting, it imposes more risk on the executive, who may demand a higher salary as compensation for this risk. But the benefits of a high-powered incentive scheme are much greater than its costs. Edmans and Gabaix point out that even if an optimal contract induces the CEO to increase firm value by only an additional 1%, this is $100 million when applied to a $10 billion firm. Such an increase in value vastly exceeds any required compensation for the additional risk borne by the executive. For a given vesting period and target incentive level, Edmans and Gabaix demonstrate mathematically that Incentive Accounts are always less costly than other common schemes such as stock options, restricted stock, clawbacks, and bonus-malus banks.

Tuesday, 23 June 2009

David K. Levine writes,

Certainly behavioral economics is all the rage these days. The casual reader might have the impression that the rational homo economicus has died a sad death and the economics profession has moved on to recognize the true irrationality of humankind. Nothing could be further from the truth.

He continues,

The modern paradigmatic man (or more often these days woman) in modern economics is that of a decision-maker beset on all sides by uncertainty. Our central interest is in how successful we are in coming to grips with that uncertainty. My goal in this lecture is to detail not the theory as it exists in the minds of critics who are unfamiliar with it, but as it exists in the minds of working economists. The theory is far more successful than is widely imagined – but is not without weaknesses that behavioral economics has the potential to remedy.

Levine goes on to point out that while laboratory experiments have shown up a number of anomalies with the standard theory, it should not be overlooked that the theory works remarkably well in the laboratory.

One of the most widespread empirical tools in modern behavioral economics is the laboratory experiment in which people – many times college undergraduates, but often other groups from diverse ethnic backgrounds – are brought together to interact in artificially created social situations to study how they reach decisions individually or in groups. Many anomalies with theory have been discovered in the laboratory – and rightfully these are given emphasis among practitioners, as we are most interested in strengthening the weaknesses in our theories. However, the basic fact should not be lost that the theory works remarkably well in the laboratory.

Levine goes on to discuss areas where the theory works, such as voting, and areas where it doesn't, such as ultimatum bargaining. He then discusses learning and self-confirming equilibrium.

Learning and incomplete learning – whether or not we regard this as “behavioral” economics – are an important part of mainstream economics and have been for quite some time. An important aspect of learning is the distinction between active learning and passive learning. We learn passively by observing the consequences of what we do simply by being there. However we cannot learn the consequences of things we do not do, so unless we actively experiment by trying different things, we may remain in ignorance.

As I indicated, the notion of self-confirming equilibrium from Fudenberg and Levine [1993] captures this idea. A simple example adapted from Sargent, Williams and Zhao [2006a] by Fudenberg and Levine [2009] shows how this plays a role in mainstream economic thought. Consider a simple economic game between a government and a typical or representative consumer. First, the government chooses high or low inflation. Then in the next stage consumers choose high or low unemployment. Consumer always prefer low unemployment, while the government (say) gets 2 for low unemployment plus a bonus of 1 if inflation is low. If we apply “full” rationality (subgame perfection), we may reason that the consumer will always choose low unemployment. The government recognizing this will always choose low inflation. Suppose, however, that the government believes incorrectly that low inflation leads to high unemployment – a belief that was widespread at one time. Then they will keep inflation high – and by doing so never learn that their beliefs about low inflation are false. This is what is called a self-confirming equilibrium. Beliefs are correct about those things that are observed – high inflation – but not those that are not observed – low inflation.
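Levine's example can be encoded directly (the representation below is mine, not his):

```python
# The government-consumer game above: the government gets 2 if unemployment
# is low, plus a bonus of 1 if inflation is low.
def gov_payoff(inflation, unemployment):
    return (2 if unemployment == "low" else 0) + (1 if inflation == "low" else 0)

# The government's mistaken belief about how consumers respond:
# it thinks low inflation leads to high unemployment.
believed_response = {"high": "low", "low": "high"}

def best_choice(beliefs):
    """Pick the inflation level that maximises payoff under given beliefs."""
    return max(["high", "low"], key=lambda i: gov_payoff(i, beliefs[i]))

choice = best_choice(believed_response)
# The government chooses high inflation, observes low unemployment (which is
# consistent with its belief about that action), and never generates the data
# that would refute its belief about low inflation: a self-confirming
# equilibrium that is not subgame perfect.
print(choice)
```

With correct beliefs (consumers always choose low unemployment), the same routine would pick low inflation, the subgame-perfect outcome.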

Next, Levine explains that while behavioural economists point to many paradoxes and problems with mainstream economics, their own models and claims are often not subjected to a great deal of scrutiny. He then examines some popular behavioural theories and discusses the relationship between psychology and economics. He notes,

Much of behavioral economics arises from the fact that people have an emotional irrational side that is not well-captured by mainstream economic models. By way of contrast, psychologists have long been fascinated with this side of humankind, and have many models and ideas on the subject. Not surprisingly much of behavioral economics attempts to import the ideas and models developed by psychologists.

[...]

The key difference between psychologists and economists is that psychologists are interested in individual behavior while economists are interested in explaining the results of groups of people interacting. Psychologists also are focused on human dysfunction – much of the goal of psychology (the bulk of psychologists are in clinical practices) is to help people become more functional. In fact, most people are quite functional most of the time. Hence the focus of economists on people who are “rational.” Certain kinds of events – panics, for example – that are of interest to economist no doubt will benefit from understanding human dysfunctionality. But the balancing of portfolios by mutual fund managers, for example, is not such an obvious candidate. Indeed one of the themes of this essay is that in the experimental lab the simplest model of human behavior – selfish rationality with imperfect learning – does an outstanding job of explaining the bulk of behavior.

In summary Levine has this to say,

A useful summing up is by considering the main theme of this lecture: that behavioral economics can contribute to strengthening existing economic theory, but, at least in its current incarnation, offers no realistic prospect of replacing it. Certain types of “behavioral” models are already important in mainstream economics: these include models of learning; of habit formation; and of the related phenomenon of consumer lockin. Behavioral criticisms that ignore the great increase in the scope and accuracy of mainstream theory brought about by these innovations miss the mark entirely. In the other direction are what I would describe as not part of mainstream economics, but rather works in progress that may one day become part of mainstream economics. The ideas of ambiguity aversion, and the related instrumental notion that some of the people we interact with may be dishonest is relatively new and still controversial. The use of models of level-k thinking to explain one-time play in situations where players have little experience works well in the laboratory, but is still unproven as a method of analyzing important economic problems. The theory of menu choice and self-control likewise has still not been proven widely useful. The theory of interpersonal (or social) preferences is no doubt needed to explain many things – but so far no persuasive and generally useful model has emerged.

Michael Munger, of Duke University, talks with EconTalk host Russ Roberts about franchising, particularly car dealerships. Munger highlights how the dealers used state regulations to protect their profits and how bankruptcy appears to be unraveling that strategy. The main themes of the conversation are the incentives in the franchising relationship and the evolution of the auto industry in the United States over the last forty years.

Monday, 22 June 2009

When governments lose power it is often blamed, at least in part, on the state of the economy. The standard story would be that when the economy is doing badly a government is more likely to lose power. Of course, a bad economy may just be bad luck, say unfortunate external conditions, rather than mismanagement by the incumbent government. Can voters tell the difference, and do they vote differently when they can?

Andrew Leigh looks at this question in a paper, "Does the World Economy Swing National Elections?", in the Oxford Bulletin of Economics and Statistics, Vol. 72, No. 2. The abstract reads,

Do voters reward national leaders who are more competent economic managers, or merely those who happen to be in power when the world economy booms? Using data from 268 democratic elections held between 1978 and 1999, I compare the effect of world growth (luck) and national growth relative to world growth (competence). Both matter, but the effect of luck is larger than the effect of competence. Voters are more likely to reward competence in countries that are richer and better educated; and there is some suggestive evidence that media penetration rates affect the returns to luck and competence.

The paper provides evidence that voters commit systematic attribution errors when casting their ballots – tending to oust their national leaders when the world economy slumps and retain them when it booms. Across a wide range of countries, voters appear to behave only quasi-rationally. Is anyone really surprised by that? Note that any given individual voter has little incentive to try to distinguish a lucky government from a skilful one since we all know that elections are almost never decided by a single vote, and so each voter would be right to conclude that her vote is highly unlikely to make a difference.

What factors are associated with voters rewarding competence and luck? In countries with a richer and better educated population, voters are better able to parse out competence from luck in deciding whether to re-elect their national leaders. Leigh also finds suggestive evidence that the media affects the returns to luck and competence, though these effects seem to differ across media types. Countries with high newspaper circulation have voters better able to distinguish luck from skill. Radio does not help, and television makes things worse. Well, given the standard of economic reporting on New Zealand television, that last result doesn't exactly surprise me.
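The luck/competence split itself is simple enough to state as code (a toy illustration of the decomposition only; Leigh's paper estimates the effects of each component econometrically across 268 elections):

```python
def decompose(national_growth, world_growth):
    """Split a country's growth into a 'luck' component (world growth)
    and a 'competence' component (growth relative to the world)."""
    luck = world_growth
    competence = national_growth - world_growth
    return luck, competence

# A country growing at 4% while the world grows at 3%:
luck, competence = decompose(0.04, 0.03)
print(luck, competence)  # 3 points of luck, roughly 1 point of competence
```

A fully rational electorate would condition only on the competence term; Leigh's finding is that the luck term carries the larger electoral weight.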

Advocates of privatisation have often paid insufficient attention to one of the most important reasons why scholars like Friedman and Hayek argued in favour of privatisation: that people are the best judges of how to spend their own money and, moreover, that they have a right to spend their own money as they wish. Privatisation must not be separated from the broader libertarian project of making government smaller and giving people control of their own lives - which includes their own money.

[...]

Those who believe in freedom should therefore not uncritically praise privatisation. It should be supported solely as a means to the end of increasing individual freedom by giving people back more of their own money to spend. Where privatisation becomes a backdoor way of expanding the role of the state and thereby reducing people’s freedom this should be exposed and criticised.

This extends the argument for privatisation beyond that of the purely economic into a general political argument for freedom. It also reminds us not to be uncritical of some of the other arguments made in support of privatisation.

Sunday, 21 June 2009

In this audio from VoxEU.org, Paul Grout of the Centre for Market and Public Organisation (University of Bristol) talks to Romesh Vaitilingam about his report, Private Delivery of Public Services, which surveys the theory and evidence on three models of private sector involvement in the delivery of public services: privatisation; public-private partnerships; and not-for-profit organisations.

Liberty Scott on Local government cargo cult - Hawke's Bay Airport. The airport should be privatised, the government should flog off its ownership so that a private owner can put in some directors with some business acumen, and the councils should be required to sell off their shares.

Robin Hanson on Why Signals Are Shallow. We all want to affiliate with high status people, but since status is about common distant perceptions of quality, we often care more about what distant observers would think about our associates than about how we privately evaluate them.

Helmut Reisen on Shifting wealth: Is the US dollar Empire falling? If history is any guide, the Chinese renminbi will soon be due to overtake the US dollar, just as the dollar replaced the pound sterling last century. But will the renminbi be ready for reserve currency status? This article discusses the issues at hand and explains why some experts would prefer the IMF’s Special Drawing Rights as the next global reserve currency.

Homepaddock on Milk too expensive or people too poor? Price controls are simply a tax on production and if they were imposed on farmers they’d stop supplying the domestic market in favour of exporting or change from dairying to something more profitable.

Saturday, 20 June 2009

If we assume that consumers can be irrational, an interesting question is, What effect does this have on market outcomes? Do markets become "irrational" because of the irrational participants or does the market filter out the effects of irrational participants?

Given this question, I was interested to come across this paper "The Market: Catalyst for Rationality and Filter of Irrationality" by John A. List and Daniel L. Millimet, The B.E. Journal of Economic Analysis & Policy: Vol. 8: Iss. 1 (Frontiers), Article 47.

Available at: http://www.bepress.com/bejeap/vol8/iss1/art47.

The abstract reads

Assumptions of individual rationality and preference stability provide the foundation for a convenient and tractable modeling approach. While both of these assumptions have come under scrutiny in distinct literatures, the two lines of research remain disjointed. This study begins by explicitly linking the two literatures while providing insights into whether market experience mitigates one specific form of individual rationality—consistent preferences. Using field experimental data gathered from more than 800 experimental subjects, we find evidence that the market is a catalyst for this type of rationality. The study then focuses on aggregate market outcomes by examining empirically whether individual rationality of this sort is a prerequisite for market efficiency. Using a complementary field experiment, we gathered data from more than 380 subjects of age 6-18 in multi-lateral bargaining markets at a shopping mall. We find that our chosen market institution is a filter of irrationality: even when markets are populated solely by irrational buyers, aggregate market outcomes converge to the intersection of the supply and demand functions.

What this suggests is that we must avoid making the leap from individual irrationality to "market failure." List and Millimet suggest that even if there is individual irrationality then market failure does not immediately follow. The List and Millimet results point towards the view that markets actually filter out individual irrationality and thus help channel individual action to social benefit.
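This filtering result has the same flavour as the older zero-intelligence-trader literature (Gode and Sunder, 1993). A crude sketch of the idea, with entirely made-up numbers and random rather than optimising traders:

```python
import random

random.seed(0)
buyer_values = list(range(60, 110, 5))   # maximum willingness to pay
seller_costs = list(range(10, 60, 5))    # minimum acceptable price

prices = []
for _ in range(2000):
    v = random.choice(buyer_values)
    c = random.choice(seller_costs)
    bid = random.uniform(0, v)    # "irrational" random bid, but never above value
    ask = random.uniform(c, 120)  # random ask, but never below cost
    if bid >= ask:
        prices.append((bid + ask) / 2)

# Every trade that occurs is mutually beneficial by construction: even with
# zero-intelligence traders, the market institution only admits transactions
# inside the gains-from-trade region. This is the sense in which the
# institution, rather than the traders, does the filtering.
print(len(prices), round(sum(prices) / len(prices), 1))
```

This is not List and Millimet's multi-lateral bargaining design, just a minimal demonstration that a trading institution with budget constraints can discipline outcomes even when individual behaviour is random.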

This kind of result also backs up the Levitt and List critique of behavioural economics, in which they argue that while exceptions to the standard rational-actor model can certainly be found in the lab, there are pressures in the real world that mean these exceptions are not of great significance. As Steven D. Levitt and John List have put it,

Perhaps the greatest challenge facing behavioral economics is demonstrating its applicability in the real world. In nearly every instance, the strongest empirical evidence in favor of behavioral anomalies emerges from the lab. Yet, there are many reasons to suspect that these laboratory findings might fail to generalize to real markets. We have recently discussed several factors, ranging from the properties of the situation — such as the nature and extent of scrutiny — to individual expectations and the type of actor involved. For example, the competitive nature of markets encourages individualistic behavior and selects for participants with those tendencies. Compared to lab behavior, therefore, the combination of market forces and experience might lessen the importance of these qualities in everyday markets.

Friday, 19 June 2009

But BERL does more than just step outside of the rational addiction model: they drive gross benefits down to zero for all consumption, including all below-the-threshold consumption, the instant your consumption exceeds their epidemiological threshold.

and

Instead, BERL threw in a step function that I cannot believe is consistent with any plausible utility function: prior to the threshold, benefits at least equal costs; after the threshold, benefits don't just equal zero, they're sufficiently negative to precisely offset all of the gross benefits from any prior consumption. Now, I've conducted an unscientific poll of members of the Department of Economics here at Canterbury. Half of those providing a response say you can't build a utility function that has these characteristics. The other half say that believing any model consistent with those characteristics would itself be evidence of the irrationality of the model's author.

I'm part of the "you can't do it" half of the survey. Or, more correctly, I don't think you can write down a utility function with the standard properties which would meet the BERL requirements. In particular, I can't see how BERL's function can be continuous. Eric goes on to say,

The best utility function (in my view) of the ones we've come up with has a discontinuity at the harmful threshold that jumps down towards negative infinity for the epsilonth unit after the threshold but then jumps back up to zero for all subsequent units. Or, in discrete terms, benefits are positive and match costs up to the 40th gram of alcohol for men; the 41st gram has very large negative benefits that just offset all of the benefits from the prior 40 grams, and then consumption from the 42nd gram onwards provides zero benefits. Fortunately, I don't believe this model. (Emphasis added.)

The bit I have put in bold above is, as far as I can tell (and I may be missing something), what a utility function would have to look like to be consistent with BERL's results. You will note that such a function is not continuous; in fact it is discontinuous at two points, the 41st gram and the 42nd gram of alcohol consumed.

So I'll give a chocolate fish to the first person who can come up with a locally nonsatiated, continuous, concave, monotonic utility function consistent with BERL's results.
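To make the discrete schedule Eric describes concrete, here is a quick numerical sketch. This is not BERL's actual model: the 40-gram threshold and the shape of the schedule come from Eric's description above, while the per-gram cost (c = 1) is a pure placeholder.

```python
# A sketch of the per-gram "benefits" schedule implied by Eric's discrete
# description: benefits match costs (here, a placeholder c = 1 per gram)
# up to the 40-gram threshold, the 41st gram carries a large negative
# benefit that exactly offsets the first 40, and every gram after that
# yields zero benefit.

C = 1.0          # placeholder per-gram cost (and matching benefit)
THRESHOLD = 40   # harmful-consumption threshold in grams (men)

def marginal_benefit(gram: int) -> float:
    """Benefit of the gram-th unit of alcohol under the implied schedule."""
    if gram <= THRESHOLD:
        return C                  # benefits at least equal costs
    if gram == THRESHOLD + 1:
        return -C * THRESHOLD     # offsets all prior gross benefits
    return 0.0                    # zero benefits thereafter

def total_benefit(grams: int) -> float:
    """Cumulative gross benefits from the first `grams` grams."""
    return sum(marginal_benefit(g) for g in range(1, grams + 1))

print(total_benefit(40))  # 40.0: benefits equal costs below the threshold
print(total_benefit(41))  # 0.0: the 41st gram wipes out all prior benefits
print(total_benefit(50))  # 0.0: no further benefits accrue
```

The two jumps in `marginal_benefit`, at the 41st and 42nd grams, are exactly the discontinuities that make it so hard to see how a continuous, monotonic utility function could generate BERL's results.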

The dispute between Hayek and Keynes was over what we call today “macroeconomics.” At the time, this would have been considered monetary or trade cycle theory. Hayek was opposed to the macro-aggregation of Keynes’s approach to questions of employment, interest rates and cycles. He believed that the aggregates chosen by Keynes obscured the fundamental changes that constitute macroeconomic phenomena. As the economist Roger Garrison points out, for Hayek there were indeed macroeconomic phenomena but only microeconomic explanations.

Macroeconomics is where we look at the economy in the aggregate: issues like inflation, unemployment, government spending, government debt and so on. Microeconomics is where we look at the disaggregated economy: things like firms, consumers and individual markets. The view Keynes took of the economy was

Keynes focused on the labor market, insufficient aggregate demand and the associated idea of less-than-full-employment income. In effect, Keynes thought of aggregate output as if it were just one undifferentiated thing and investment as a volatile form of spending that brought this output into existence.

Hayek took a very different view,

Hayek focused on structure of capital. By this he meant the array of complementary (and substitutable) capital goods at different distances from consumable output. These capital goods work with labor and other factors to produce what Keynes would call “aggregate output.” Thus for Hayek “investment” wasn’t a homogeneous aggregate but represented specific changes to a structure of interrelated capital goods. When the central bank lowered interest rates excessively (below the rate that would equate planned savings with planned investment), the structure of production would be distorted. It is not just that “output” increased but its composition was altered.

Rizzo continues,

This typically meant a number of unsustainable changes. Low interest rates discourage savings and yet at the same time encourage certain types of investment. Housing, commodities, and other sectors with long time-horizons would expand. But at the same time consumers would try to consume more. So the Keynesian is misled to think that, “See, consumption and investment are not alternatives. We can have more of both. In fact, consumption stimulates investment!”

What really occurs in the boom, however, is too much consumption and too much investment in sectors far from consumption. Overconsumption and malinvestment. Isn’t this what we have just seen?

Economic think-tank BERL says critics of its report into the social cost of alcohol have a very narrow view of the world.

Economists Eric Crampton and Matt Burgess have labelled the government report as grossly exaggerated after it put the social cost of alcohol at $4.79 billion a year.

They say it was based on bizarre methodology which assumed problem drinkers are incapable of knowing what's good for them.

But BERL Chief Economist Ganesh Nana says the methodology is internationally accepted and more realistic than their critics'. He says their view that consumers are rational beings who make all their decisions with all the information at hand is a narrow way of looking at economics.

Mr Nana says BERL stands by its report.

Internationally accepted by whom? Berl’s report can be reasonably characterized as a New Zealand implementation of a methodology developed by Professors Collins and Lapsley, cited over 100 times in the Berl report. These same authors provided the external peer review of the report. This is being internationally accepted?

And as for realism, just what form of utility function do you need to give the welfare results that Berl seem to be assuming? Berl assumes all harmful alcohol and drug consumption is irrational. Irrational consumers are incapable of detecting private costs in excess of private benefits. To the extent those private costs exceed benefits, they are counted as social costs. In addition they assume that irrational consumers enjoy zero gross (not net) benefits, meaning all private costs are counted as social costs. The second and third assumptions are not justified – they are simply asserted by Berl. This is realism?

Thursday, 18 June 2009

The Ministry of Health and the ACC are standing by the findings of an alcohol-harm study, despite a review of the report that says the study has "few redeeming features".

The $135,000 study, commissioned by the ministry and the ACC, was compiled by Wellington-based economic consultants Business and Economic Research Ltd (BERL).

The study's findings have been savaged by economists Eric Crampton, of the University of Canterbury, and Matt Burgess, from Victoria University, in a review released yesterday.

The study put the annual social cost of alcohol abuse at $4.79 billion, but Crampton and Burgess said their research found the cost was $146 million.

They also found fault with BERL's analysis and methodology, and said the report had elementary errors and misunderstandings of economics.

It goes on to note that,

BERL economist Ganesh Nana, one of the authors of the report, defended his work, saying the university economists had "a different world view" from his colleagues.

"A different world view"??? That's why your numbers are so weird, you have a "different world view". Just what in hell does that even mean?

Nana is also quoted as saying

"We used the method used by all in the field looking at the cost of alcohol and other addictive substances. That method has been used widely across the world," he said.

"It really does depend on whether you believe, in a nutshell, that consumers are rational in their decisions about how much alcohol to drink. Most of us are (we're not saying we aren't) but there is a significant subset of the population who aren't, and that's where the costs lie."

Errr no, you go way past that. You make very strong assumptions about people being "irrational" when it comes to alcohol. You seem to be assuming that all consumption by men who drink more than two pints of beer per day (one pint for the ladies) is harmful and thus irrational, and consequently has only costs.

And what did the government agencies have to say,

Barbara Phillips, group manager of the Health Ministry's minimising harm group, said: "The ministry is aware that other studies have been conducted using different assumptions which put a different cost on alcohol and illegal drugs," she said.

"The ministry will consider the relevance of those studies in future policy work."

Does this mean, business as usual, we will ignore it as it doesn't confirm our prejudices?

ACC general manager of injury prevention Katie Sadleir said the corporation had just received the review and was not able to comment yet.

"However, from ACC's perspective, we do know that alcohol is a factor in many of the claims we receive," she said.

"Our own research a few years ago found that up to 22 per cent of all ACC claims had alcohol as a contributing factor. Given that we pay nearly $3 billion a year in claims, that means the cost of alcohol-related claims to ACC alone is around $650 million each year."

Actually no. This could be true if alcohol were the only contributing factor, but given that there are likely to be a number of other factors involved in these claims, you can't just attribute all of the cost of the claims to alcohol. The question is, what is the marginal effect of alcohol in these claims? Alcohol could be neither necessary nor sufficient for any given claim.
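A deliberately hypothetical calculation shows how much this matters. The $3 billion claims bill and the 22 per cent factor share come from the ACC quote above; the assumed 25 per cent increase in accident risk is invented purely for illustration, plugged into the standard epidemiological attributable-fraction formula.

```python
# Why "alcohol was a contributing factor in 22% of claims" does not imply
# that 22% of claim costs are attributable to alcohol. What matters is the
# marginal effect: how many of those claims would not have occurred (or
# would have cost less) without alcohol.

total_claims_cost = 3.0e9      # ACC's annual claims bill (from the article)
alcohol_factor_share = 0.22    # share of claims with alcohol as *a* factor

# Naive attribution: count the full cost of every alcohol-involved claim.
naive_cost = alcohol_factor_share * total_claims_cost

# Suppose (hypothetically) alcohol raised accident risk by 25% on average
# in those claims. The attributable fraction is then (RR - 1) / RR, the
# standard epidemiological formula with relative risk RR = 1.25.
attributable_fraction = 0.25 / 1.25

# Marginal attribution: only the share of costs alcohol actually caused.
marginal_cost = naive_cost * attributable_fraction

print(f"Naive attribution:    ${naive_cost / 1e6:.0f}m")     # $660m
print(f"Marginal attribution: ${marginal_cost / 1e6:.0f}m")  # $132m
```

Under these made-up numbers the marginal cost is a fifth of the naive figure; the point is not the particular numbers but that the $650 million headline silently assumes the attributable fraction is one.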

Milton Friedman. Friedman had a solid MV = PQ doctrine from which he deviated very little all his life. By the way, he's about as smart a guy as you'll meet. He's as persuasive as you hope not to meet. And to be candid, I should tell you that I stayed on good terms with Milton for more than 60 years. But I didn't do it by telling him exactly everything I thought about him. He was a libertarian to the point of nuttiness. People thought he was joking, but he was against licensing surgeons and so forth. And when I went quarterly to the Federal Reserve meetings, and he was there, we agreed only twice in the course of the business cycle.

That's asking for a question. What were the two agreements?

When the economy was going up, we both gave the same advice, and when the economy was going down, we gave the same advice. But in between he didn't change his advice at all. He wanted a machine. He wanted a machine that spit out M0 basic currency at a rate exactly equal to the real rate of growth of the system. And he thought that would stabilize things.
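The arithmetic behind Friedman's "machine" follows from the equation of exchange MV = PQ mentioned above: in growth-rate form, money growth plus velocity growth equals inflation plus real growth. The growth figures in this sketch are illustrative only, not historical.

```python
# The equation of exchange MV = PQ in growth rates:
#   %dM + %dV = %dP + %dQ
# Friedman's rule: set money growth %dM equal to real growth %dQ.
# If velocity is stable (%dV = 0), implied inflation %dP is zero.

def implied_inflation(money_growth: float, velocity_growth: float,
                      real_growth: float) -> float:
    """Inflation implied by the equation of exchange in growth rates."""
    return money_growth + velocity_growth - real_growth

# The machine working as intended: 3% money growth, 3% real growth.
print(implied_inflation(0.03, 0.0, 0.03))   # 0.0: price level stabilised

# But if velocity shifts, as the forecasting studies found it did,
# the fixed rule misses the target:
print(implied_inflation(0.03, -0.02, 0.03)) # roughly -0.02: deflation
```

The rule's failure in practice, which Samuelson alludes to next, was essentially the second case: velocity turned out not to be stable.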

Well, it was about the worst form of prediction that various people who ran scores on this -- and I remember a very lengthy Boston Federal Reserve study -- thought possible. Walter Wriston, at that time one of the most respected bankers in the country and in the world fired his whole monetarist, Friedmaniac staff overnight, because they were so off the target.

But Milton Friedman had a big influence on the profession -- much greater than, say, the influence of Friedrich Hayek or Von Mises. Friedman really changed the environment. I don't know whether you read the newspapers, but there's almost an apology from Ben Bernanke that we didn't listen more to Milton Friedman.

But anyway. The craze that really succeeded the Keynesian policy craze was not the monetarist, Friedman view, but the [Robert] Lucas and [Thomas] Sargent new-classical view. And this particular group just said, in effect, that the system will self regulate because the market is all a big rational system.

Those guys were useless at Federal Reserve meetings. Each time stuff broke out, I would take an informal poll of them. If they had wisdom, they were silent. My profession was not well prepared to act.

And this brings us to Alan Greenspan, whom I've known for over 50 years and who I regarded as one of the best young business economists. Townsend-Greenspan was his company. But the trouble is that he had been an Ayn Rander. You can take the boy out of the cult but you can't take the cult out of the boy. He actually had instruction, probably pinned on the wall: 'Nothing from this office should go forth which discredits the capitalist system. Greed is good.'

However, unlike someone like Milton, Greenspan was quite streetwise. But he was overconfident that he could handle anything that arose. I can remember when some of us -- and I remember there were a lot of us in the late 90s -- said you should do something about the stock bubble. And he kind of said, 'look, reasonable men are putting their money into these things -- who are we to second guess them?' Well, reasonable men are not reasonable when you're in the bubbles which have characterized capitalism since the beginning of time.

Wednesday, 17 June 2009

Reported costs of alcohol abuse "grossly exaggerated" according to economists

A widely publicised $135,000 government report on the cost of drug and alcohol abuse has been slammed by two economists, who say the report’s findings are grossly exaggerated.

Economists Eric Crampton and Matt Burgess have released a research paper which examines the report, by Wellington economics consultant Business and Economic Research Limited (BERL), after Law Commission President Sir Geoffrey Palmer cited its findings in support of proposed new regulations on alcohol.

“What we found shocked us. BERL exaggerated costs by 30 times using a bizarre methodology that you won’t find in any economics textbook,” Dr Crampton said.

The BERL report was commissioned in 2008 by the Ministry of Health and ACC, and put the annual social costs of alcohol at $4.79 billion. Crampton and Burgess said the net social costs instead amounted to $146 million – 30 times lower than that calculated in the report.

“BERL has virtually assumed its answer. The majority of the reported social costs rest on two very strange assumptions which BERL has asserted without any reason or evidence,” Dr Crampton said.

“The report assumes that one in six New Zealand adults drinks because they are irrational; that is, they are incapable of deciding what is good for themselves. BERL further assumes that these individuals receive absolutely no enjoyment, social or economic benefit from any of their drinking,” Dr Crampton said.

“These assumptions allowed BERL to count as a cost to society everything from the cost of alcohol production to the effect of alcohol on unpaid housework. That’s bad economics.”

Among other serious flaws, Dr Crampton said the report’s external peer review was done by the authors of the report’s own methodology, important findings in academic literature that alcohol had health and economic benefits were ignored, BERL did not properly warn readers about the limitations of its methodology, and used language in the report that was frequently misleading.

The BERL study caught the economists’ attention when it was cited by the Law Commission as the basis for supporting proposed new taxes and regulations on alcohol.

“We’re doing this because we don’t want to see legislative decisions being misguided by bad research. In our view, the Law Commission should give no weight at all to the findings in the BERL report,” Dr Crampton said.

Dr Crampton stressed their review wasn’t an attack on the “very real” issues of alcohol and drug abuse. “These are deep problems, but rather than being taken seriously they have instead been trivialised by numbers that beg the question,” Dr Crampton said.

The review of BERL’s report is available for download here; supporting calculations available here. The BERL report is available here.

Eric Crampton is Senior Lecturer in Economics at the University of Canterbury. Matt Burgess is a Research Associate at the Institute for the Study of Competition and Regulation. The views expressed solely reflect those of the authors, and do not necessarily represent those of the institutions with which they are affiliated or their constituent members.

Tuesday, 16 June 2009

BK Drinkwater has commented on the report "Economic Impact of Super City Amalgamation on Peripheral Councils (Manukau, Waitakere, Rodney, North Shore, and Franklin)" by Rhema Vaithianathan. I think he is right with regard to his generalised concerns about multiplier effects. Multiplier-type arguments always seem strange to me. Eric Crampton says about the same report,

Rheema Vaithianathan, Auckland University economist, worries that the Auckland supercity will lead to job losses and in particular will hurt the suburbs.

I'm worried that the opposite will happen.

And there are good reasons for his concern.

When I gave the report a very quick look, one thing I noticed was the following comment,

The analysis above has assumed no change in rates. However, the extent to which efficiency savings flow into reduced rates is uncertain. There is considerable evidence that following privatisation, efficiency savings did not in fact flow through to the customers as reduced prices but to executives as increased compensation8. For example following privatization in the UK Electricity Industry, top executive salaries increased three-fold.

This looks a bit odd to me since I don't see the relationship between privatising the UK electricity industry and the rates in an Auckland Supercity. Second, from a quick look at the paper mentioned in footnote 8, it appears that the salary increases were due to rent extraction by executives of the privatised companies, based on the way the industry was being regulated. Again, I'm not sure what UK electricity regulation has to do with the Auckland Supercity.

Also there is a comment in the Wolfram paper,

On the one hand, the small data set limits both the power of some of my statistical tests and the generality of my results.

So the Wolfram results don't generalise; are they, then, in any way relevant to the Auckland Supercity?

I'm guessing Vaithianathan is right that there will be no change in rates, but I'm not sure the above argument makes this case. In fact, as Eric argues, rates could go up as expenditures increase, not decrease.

Charles Platt, author and journalist, talks with EconTalk host Russ Roberts about what it was like to apply for a job at Wal-Mart, get one, and work there. He discusses the hiring process, the training process, and the degree of autonomy Wal-Mart employees have to change prices. The conversation concludes with a discussion of attitudes toward Wal-Mart.

Offshoring has become one of those things everybody knows nothing about, but they still somehow manage to hate it. Offshoring seems to be one of the scariest things on rich nations’ economic radar screens: all of our good jobs being sent overseas. In the US, economist Alan Blinder was one of the first to point out the threat in his 2006 Foreign Affairs article “Offshoring: The Next Industrial Revolution?”. In this article he said,

constant improvements in technology and global communications virtually guarantee that the future will bring much more offshoring of ‘impersonal services’’— that is, services that can be delivered electronically over long distances with little or no degradation in quality.

More recently Blinder has produced some estimates of the size of the revolution. And they make it look like "the big one". Blinder (2009): "I estimated that 30 million to 40 million US jobs are potentially offshorable."

Richard Baldwin takes issue with the implications of those numbers. In a column at VoxEU.org Baldwin writes

The trouble is that his numbers are being interpreted in the light of the “old paradigm” of globalisation – the world of trade theory that existed before Paul Krugman, Elhanan Helpman, and others led the “new trade theory” revolution in the 1980s.

Baldwin expands on this point by saying

Krugman’s contribution, which was rewarded with a Nobel Prize in 2008, was to crystallise the profession’s thinking on two-way trade in similar goods. This was a revolution since the pre-Krugman received wisdom assumed away such trade or misunderstood its importance. In 1968, for example, Harvard economist Richard Cooper noted the rapid rise in two-way trade among similar nations and blamed it for the difficulty of maintaining fixed exchange rates. Using the prevailing trade theory orthodoxy, he asserted that this sort of trade could not be welfare-enhancing. And since it wasn’t helping, he suggested that it should be taxed to make it easier to maintain the world’s fixed exchange rate system – a goal that he considered to be the really important thing from a welfare and policy perspective (Cooper, 1968).

Trade economists back then took it as an article of faith that trade flows are caused by macro-level differences between nations – for example, national differences between the cost of capital versus labour. Nations that had relatively low labour costs exported relatively labour intensive goods to nations where labour was relatively expensive.

This is the traditional view that Blinder seems to be embracing.

What Krugman (especially Krugman 1979, 1980) showed was that one does not need macro-level differences to generate trade. Firm-level differences will do.

In a world of differentiated products (and services are a good example of this), scale economies can create firm-specific competitiveness, even between nations with identical macro-level determinants of comparative advantage. Krugman, a pure theorist at the time, assumed that nations were identical in every aspect in order to focus on the novel element in his theory (and to shock the “trade is caused by national differences” traditionalists). His insight, however, extends effortlessly to nations that also have macro-level differences, like the US and India.

This now brings us to interpreting Blinder’s 30 to 40 million offshorable jobs figure. Baldwin argues that,

Blinder’s approach is easy to explain – a fact that accounts for much of its allure as well as its shortcomings.

Step 1 is to note that Indian wages are a fraction of US wages.

Step 1a is to implicitly assume that Indians’ productivity-adjusted wages are also below those of US service sector workers, at least in tradable services.

Step 2, and this is where Blinder focused his efforts, is to note that advancing information and communication technology makes many more services tradable. The key characteristic, Blinder claims, is the ease with which the service can be delivered to the end-user electronically over long distances.

Step 3 (the critical unstated assumption, if not by Blinder, at least by the media reporting his results) is that the new trade in services will obey the pre-Krugman trade paradigm – it will largely be one-way trade. Nations with relatively low labour costs (read: India) will export relatively labour-intensive goods (read: tradable services) to nations where labour is relatively expensive (read: the US).

Note in passing the comment in Step 3: "the critical unstated assumption, if not by Blinder, at least by the media reporting his results". The media reporting on such issues often overlooks important details.

The catch in all of this? This last step is factually incorrect. That this step is wrong is shown in recent work by Mary Amiti and Shang-Jin Wei (2005). Baldwin continues,

They note: “Like trade in goods, trade in services is a two-way street. Most countries receive outsourcing of services from other countries as well as outsource to other countries.”

Source: Author’s manipulation of data from Amiti and Wei (2005), originally from IMF sources on trade in services.

The US, as it turns out, is a net “insourcer”. That is, the world sends more service sector jobs to the US than the US sends to the world, where the jobs under discussion involve trade in services of computing (which includes computer software designs) and other business services (which include accounting and other back-office operations).

The chart shows the facts for the 1980 to 2003 period. We see that Blinder is right in that the US is importing an ever-growing range of commercial services – or, as he would say, the third industrial revolution has resulted in the offshoring of ever more service sector jobs. However, the US is also “insourcing” an ever-growing number of service sector jobs via its growing service exports. The startling fact is that not only is the trade not a one-way ticket to job destruction, the US is actually running a surplus.

Baldwin's conclusion from all of this,

None of this should be unexpected. The post-war liberalisation of global trade in manufactures created new opportunities and new challenges. To apply Blinder’s logic to, say, the European car industry in the early 1960s, one would have had to claim that since the German car industry (at the time) faced much lower productivity-adjusted wages, freer trade would make most French auto jobs “lose-able” to import competition. Of course, many jobs were lost when trade did open up, but many more were created. As it turned out, micro-level factors allowed some French firms to thrive while others floundered, and the same happened in Germany. Surely the same sort of thing will happen in services, as trade barriers in that sector fall with advancing information and communication technologies.

In short, what Blinder’s numbers tell us is that a great deal of trade will be created in services. Since services are highly differentiated products, and indivisibilities limit head-to-head competition, my guess is that we shall see a continuation of the trends in the chart. Lots more service jobs “offshored” and lots more “onshored”.

Overall, offshoring isn't one of the scariest things imaginable. In fact, it's something that should be embraced.

It's Germany on a global scale that is the concern. We worry about the drag on world demand from the global savings coming out of east Asia and the Middle East, but within Europe there's a European savings glut which is coming out of Germany. And it's much bigger relative to the size of the economy.

What? Saving is now bad? Aren't we always being told that the current account deficit means we don't save enough? Doesn't the US (and New Zealand) have a problem with a lack of savings? So is saving good or bad?

Earlier I commented on a question raised by Russ Roberts on the usefulness of empirical work in economics, see here and here. Roberts has now added another post on Fancy empirical work.

Roberts notes the responses from Cowen and Caplan (see here) and also notes an email he received which suggested Time on the Cross, Fogel and Engerman's study of slavery. Roberts then goes on to say,

Most or all of these observations miss the point, or at least the point I was trying to make.

Empirical work is very important.

Facts matter.

A careful study of the facts can have tremendous influence.

Sophisticated regression analysis can narrow our guesses as to magnitudes. But I don't think we need fancy regression to conclude that people aren't always rational. Or that police can reduce crime. Or to look at the nature of resale price maintenance. On the stuff where people have priors and bias—such as the dynamic impact of taxes on revenue—I don't think the empirical evidence is very convincing of the skeptic.

I understand that science moves slowly and that people at the margin are who eventually count.

But I really don't think the empirical record of sophisticated empirical work is very impressive. In fact, I think I could make a case that sophisticated empirical work is most productive for publishing papers and less productive at establishing truth or useful findings that are reliable.

As to establishing truth or results that are reliable, I argued before that the thing to keep in mind is that a single result or paper will not settle a question. But a series of results giving the same answer to the same question, involving different data sets, different time periods, different countries, different approaches to answering the question and so on, will do so. I think what convinces people as to the rightness of a result is the accumulation of evidence in its favour. This evidence may or may not involve the kind of sophisticated empirical work Roberts is referring to.

Sunday, 14 June 2009

Having made some comments on the problem of the politicisation of provision when the government is involved earlier, I was interested to come across this article from the New York Times, Lender’s Role for Fed Makes Some Uneasy by Edmund L. Andrews.

Andrews opens his article by noting,

For most of its history, the Federal Reserve has been a high temple of monetary matters, guiding the economy by setting interest rates but remaining aloof from the messy details of day-to-day business.

But the financial crisis has drastically changed the role of the Fed, forcing officials to get their fingernails a bit dirty.

Andrews continues

Since March, when the Fed stepped in to fill the lending vacuum left by banks and Wall Street firms, officials have been dragged into murky battles over the creditworthiness of narrow-bore industries like motor homes, rental cars, snowmobiles, recreational boats and farm equipment — far removed from the central bank’s expertise.

A growing number of economists worry that the Fed’s new role poses risks to taxpayers and to the Fed itself. If the Fed cannot extract itself quickly, they warn, the crucial task of allocating credit will become more political and less subject to rigorous economic analysis.

But this may not be the biggest problem for the Fed. This hands-on involvement could undermine the Fed’s political independence and credibility as an institution that operates above the fray.

Executives and lobbyists now flock to the Fed, providing elaborate presentations on why their niche industry should be eligible for Fed financing or easier lending terms.

Hertz, the rental car company, enlisted Stuart E. Eizenstat, a top economic policy official under Presidents Bill Clinton and Jimmy Carter, to plead with both Fed and Treasury officials to relax the terms on refinancing rental car fleets.

Lawmakers from Indiana, home to dozens of recreational-vehicle manufacturers like Gulfstream and Jayco, have been pushing for similar help for the makers of campers, trailers and mobile homes.

And when recreational boat dealers and vacation time-share promoters complained that they had been shut out of the credit markets, Senator Mel Martinez, a Republican from Florida, weighed in on their behalf with the Treasury secretary, Timothy F. Geithner, who promised he would take up the matter with the Fed.

Andrews notes,

The central bank is increasingly having to make politically sensitive choices. For example, it is weighing whether loans to people who buy speedboats and snowmobiles are as worthy of help as those to people who buy cars. And it is being besieged by arguments from R.V. manufacturers and strip-mall developers that they play a crucial role in the economy and also deserve help.

And the lobbying of the Fed, and the Treasury, by industry groups and politicians has increased.

But the Recreational Vehicle Industry Association and Indiana lawmakers — among them, Representative Joseph Donnelly, a Democrat, and Representative Marc Souder, a Republican — were already lobbying the Fed to include loans for recreational vehicles on its list of eligible collateral that the Fed would accept.

They were not alone. Rental car companies were pushing the Fed to finance their fleets. Hertz, which is owned by two private equity firms — the Carlyle Group and Clayton, Dubilier & Rice — hired Mr. Eizenstat to make its case.

In trying to persuade the Fed to relax its loan terms, Mr. Eizenstat led delegations of Hertz officials to both the Treasury and the Fed. They reached out to Ron Bloom, the co-chairman of the Treasury Department’s auto task force, as well as to top aides to Mr. Geithner. They also made detailed financial presentations to Fed officials in Washington and New York.

The Andrews article also says that

Fed officials say they, too, are uncomfortable with their new role and hope to end it as soon as credit markets return to normal.

But may it not be that the Fed's very involvement is one of the things preventing, or at least slowing, that return to normal?

The task of allocating credit can only become more political and less subject to rigorous economic analysis within this type of framework. When politics rather than economics allocates credit, the result is predictable: credit will be misallocated. Businesses that, on economic grounds, shouldn't get credit will obtain it simply because they are better at playing the political game. Those with weaker lobbying skills, but perhaps better business ideas, will miss out. And ultimately the standard of living of ordinary Americans will suffer for it.

In this audio from VoxEU.org, Christopher Ruhm of the University of North Carolina at Greensboro talks to Romesh Vaitilingam about his work with Charles Baum, which analyses data from the US National Longitudinal Survey of Youth to explore how body weight and obesity change with age and how that relates to socioeconomic status.

N. Gregory Mankiw and Matthew Weinzierl ask Do you really want to tax ability? Should the income tax system include a tax credit for short taxpayers and a tax surcharge for tall ones? This column explains how the standard utilitarian framework for tax policy analysis says that individual attributes correlated with wages, such as height, should determine tax liabilities. Taller individuals should pay higher taxes. If this is objectionable, then something is wrong with the standard framework.

David F. Hendry and J. James Reade ask How should we make economic forecasts? A vital challenge confronting economists is how to forecast, especially during a recession, because livelihoods depend on those forecasts. This column discusses choosing amongst forecasts and outlines the concerns involved in averaging across models or using general-to-specific model searches.

Tyler Cowen takes up the challenge to answer Russ Roberts's question from a posting I noted earlier. Roberts asked,

I'd like one example, please. One example, from either micro or macro where people had to give up their prior beliefs about how the world works because of some regression analysis, ideally usually instrumental variables as that is the technique most used to clarify causation.

I will cite a few possible examples, although I won't stick with instrumental variables:

1. The interest-elasticity of investment is lower than people once thought.

2. We have a decent sense of the J Curve and why a devaluation or depreciation doesn't improve the trade balance for some while.

3. Dynamic revenue scoring tells us over what time horizon a tax cut is partially (or fully) self-financing.

4. Most resale price maintenance is not for goods and services involving significant ancillary services.

5. More policing can significantly lower the crime rate (that one does use instrumental variables).

6. The term structure of interest rates is whacky.

I see other examples but in general I agree with Russ's point that empirical work fails to settle a great number of important disputes, most disputes in fact. Many of the examples I would cite turn out to involve an elasticity being lower than we had thought. And many more involve macroeconomics (rather than micro) than you might expect.
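As an aside, the instrumental-variables idea running through this exchange can be made concrete with a small simulation. The sketch below is purely illustrative — it is not drawn from any of the studies Cowen cites, and all variable names are invented. It builds a regressor contaminated by an unobserved confounder, shows that the naive OLS slope is biased, and recovers the true effect using an instrument that moves the regressor but affects the outcome only through it:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

beta = 0.5                         # true causal effect of x on y

# Unobserved confounder u drives both x and y, so naive OLS is biased.
u = rng.normal(size=n)
z = rng.normal(size=n)             # instrument: shifts x, touches y only via x
x = z + u + rng.normal(size=n)
y = beta * x + u + rng.normal(size=n)

# Naive OLS slope: cov(x, y) / var(x) -- picks up the confounding via u.
ols = np.cov(x, y)[0, 1] / np.var(x)

# IV estimate (one instrument, one regressor): cov(z, y) / cov(z, x).
iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

print(f"true effect: {beta}, OLS: {ols:.3f}, IV: {iv:.3f}")
```

With these particular assumptions, OLS lands near 0.83 while the IV estimate sits near the true 0.5 — the whole case for the technique in four lines of algebra. Of course, as the crime-and-policing example suggests, the hard part in practice is finding a credible instrument, not computing the ratio.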

Peter Boettke also posts on the Roberts question, giving an Austrian view. He says,

Does the fact that statistical tests cannot provide unambiguous refutations of economic theory imply that we learn nothing from statistical analysis? I would argue definitely NO, we can learn a lot from statistical analysis. But what we cannot do is "test" theories with statistical tests. Does it also mean that we cannot refute economic theories? Again, I would argue NO. Refutations of "theoretical" propositions result from demonstrations of logical error, and one can also demonstrate the irrelevance of a logical argument to a contemporary problem due to the inapplicability of the theory to the situation to be examined because one or more of the various subsidiary assumptions that make up the network of statements that constitute a theoretical construction might not hold.

Bryan Caplan also responds to the challenge:

I can't meet Russ's challenge. But his challenge is excessive for two reasons:

1. Ending a controversy and creating a consensus almost always takes dozens, if not hundreds, of empirical studies. As well it should - virtually every specific study is open to reasonable doubt.

2. What exactly counts as "ending a controversy and creating a consensus"? Does every active researcher in the world have veto power? Or is convincing two-thirds of them enough? What if the "former opponents" just quietly give up rather than admitting error?

Having made these points, Caplan modifies the Roberts question to ask:

Name a body of empirical work that is so well done, it won over two-thirds of active researchers and induced half of the unconvinced to quietly give up or recant?

His answer to this question is

By this standard, the obvious response for economists is behavioral economics. I don't know any economist under the age of 40 who denies that there are major exceptions to the standard rational actor model. Many older economists are unconvinced, but they don't publish much about it.

A response to the Caplan example could be that while no one denies that exceptions to the standard rational actor model can be found in the lab, there are pressures in the real world that mean these exceptions are not of great significance. As Steven D. Levitt and John List have put it,

Perhaps the greatest challenge facing behavioral economics is demonstrating its applicability in the real world. In nearly every instance, the strongest empirical evidence in favor of behavioral anomalies emerges from the lab. Yet, there are many reasons to suspect that these laboratory findings might fail to generalize to real markets. We have recently discussed several factors, ranging from the properties of the situation — such as the nature and extent of scrutiny — to individual expectations and the type of actor involved. For example, the competitive nature of markets encourages individualistic behavior and selects for participants with those tendencies. Compared to lab behavior, therefore, the combination of market forces and experience might lessen the importance of these qualities in everyday markets.

I think the thing to keep in mind is that a single result will not settle a question. But a series of results giving the same answer to the same question — involving different data sets, different time periods, different countries, different approaches to answering the question, and so on — will. I think what convinces people of the rightness of a result is the accumulation of evidence in its favour.
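That accumulation of evidence can itself be given a standard statistical form. The sketch below uses invented numbers — no actual study mentioned here — to show a fixed-effect pooling of several noisy estimates of the same quantity by inverse-variance weighting, which is why a body of consistent results is more persuasive than any single one: the pooled standard error shrinks below that of every individual study.

```python
import math

# Hypothetical estimates of the same elasticity from five independent
# studies, each with its reported standard error. Numbers are invented.
estimates = [-0.42, -0.35, -0.55, -0.40, -0.48]
std_errs = [0.15, 0.20, 0.25, 0.10, 0.18]

# Fixed-effect pooling: weight each estimate by 1 / variance.
weights = [1.0 / se**2 for se in std_errs]
pooled = sum(w * b for w, b in zip(weights, estimates)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

print(f"pooled estimate: {pooled:.3f} (s.e. {pooled_se:.3f})")
```

The pooled standard error here is smaller than the most precise single study's, which is the formal counterpart of the informal point above: agreement across data sets, periods, and methods is what settles a question.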

Friday, 12 June 2009

Russ Roberts writes,

I think too much of modern empirical economics is the economics-free application of sophisticated statistical techniques that does little to actually advance our understanding of the social world. It's not just that it isn't about trade-offs or incentives, the role of trade-offs and incentives are ignored. I also don't think we've made "massive progress" in understanding the social world. We've made massive progress in publishing papers on the social world. But understanding? Not so much. We treat the natural world as if the sophisticated tools of statistics can turn reality into a natural experiment. But the world is usually (always?) too complex for the results to be reliable.

You won't get an argument out of me on this one. I have often thought that a lot of empirical economics these days is just applied stats. Roberts continues,

I continue to ask the question: name an empirical study that uses sophisticated statistical techniques that was so well done, it ended a controversy and created a consensus—a consensus where former opponents of one viewpoint had to concede they were wrong because of the quality of the empirical work.

A good question that I'm willing to bet no one can answer.

Larry Summers argued in his paper, "The Scientific Illusion in Empirical Macroeconomics", Scandinavian Journal of Economics, 1991, v. 93, iss. 2, pp. 129-48, that formal econometric work, where elaborate technique is used to apply theory to data or isolate the direction of causal relationships when they are not obvious a priori, virtually always fails. He went on to argue that the only empirical research that has contributed to thinking about substantive issues and the development of economics is pragmatic empirical work, based on methodological principles directly opposed to those that have become fashionable in recent years.

Anna Schwartz was born in New York in 1915, yet she can still be found nearly every day at her office at the National Bureau of Economic Research, where she has been tirelessly gathering data since 1941. Guy Sorman writes of her,

This lesson of the recent past seems all but forgotten, Schwartz says. Instead of staying the monetarist course, Volcker’s successor as Fed chairman, Alan Greenspan, too often preferred to manage the economy—a fatal conceit, a monetarist would say. Greenspan wanted to avoid recessions at all costs. By keeping interest rates at historic lows, however, his easy money fueled manias: first the Internet bubble and then the now-burst mortgage bubble. “A too-easy monetary policy induces people to acquire whatever is the object of desire in a mania period,” Schwartz notes.

Greenspan’s successor, Ben Bernanke, has followed the same path in confronting the current economic crisis, Schwartz charges. Instead of the steady course that the monetarists recommend, the Fed and the Treasury “try to break news on a daily basis and they look for immediate gratification,” she says. “Bernanke is looking for sensations, with new developments every day.”

On Ben Bernanke, Schwartz is also quoted as saying,

Bernanke is right about the past, Schwartz says, “but he is fighting the wrong war today; the present crisis has nothing to do with a lack of liquidity.”

Sorman goes on to write,

President Obama’s stimulus is similarly irrelevant, she believes, since the crisis also has nothing to do with a lack of demand or investment. The credit crunch, which is the recession’s actual cause, comes only from a lack of trust, argues Schwartz. Lenders aren’t lending because they don’t know who is solvent, and they can’t know who is solvent because portfolios remain full of mortgage-backed securities and other toxic assets.

To rekindle the credit market, the banks must get rid of those toxic assets. That’s why Schwartz supported, in principle, the Bush administration’s first proposal for responding to the crisis—to buy bad assets from banks—though not, she emphasizes, while pricing those assets so generously as to prop up failed institutions. The administration abandoned its plan when it appeared too complicated to price the assets. Bernanke and then–Treasury secretary Henry Paulson subsequently shifted to recapitalizing the banks directly. “Doing so is shifting from trying to save the banking system to trying to save bankers, which is not the same thing,” Schwartz says. “Ultimately, though, firms that made wrong decisions should fail. The market works better when wrong decisions are punished and good decisions make you rich.”

What of the often-cited problem of deflation rather than inflation?

Should we worry about inflation when some believe deflation to be the real enemy? “The risk of deflation is very much exaggerated,” she answers. Inflation seems to her “unavoidable”: the Federal Reserve is creating money with little restraint, while Treasury expenditures remain far in excess of revenue. The inflation spigot is thus wide open. To beat the coming inflation, a “new Paul Volcker will be needed at the head of the Federal Reserve.”

One could add that a new Don Brash will be needed to head the Reserve Bank here as well.