Thursday, 31 March 2011

Allen R. Sanderson, who teaches economics at the University of Chicago, has had enough. For too long, he says, the U.S. has been far too dependent on the rest of the world for imports of vital commodities. And he has a plan:

My fellow Americans,

For too long, the United States of America has been at the mercy of foreign interests — and nations in faraway lands that are often at odds with our core values — when it comes to the production of perhaps the most vital resource that drives our economy. We remain far too dependent on this imported commodity that could, in a time of emergency or international political crisis, be denied to us and thus cripple our productivity and reduce us to quivering masses of migraines in a matter of hours. The time for change is now.

And the commodity?

I speak, of course, of our complete dependence on coffee that we are importing mainly from Brazil and Colombia. It's time to wean ourselves from this harmful addiction. My "Coffee Independence" proposal is the key first step.

Thus my administration will propose that we begin immediately to invest in this city [Detroit] and state [Michigan] and turn them into the coffee capital of North America. It will create jobs, jobs, jobs; stimulate economic development; and put Michigan back on the map. After all, it was a beer that made Milwaukee famous, and cows that turned Wisconsin into America's Dairyland. Why not think of Michigan when you think of mocha?

Going without our morning venti half-caf latte and afternoon frappuccino grande will take some time to get used to, of course. As will building the hothouse infrastructure, turning seedlings into hearty trees, and fully implementing our "Cash for Coffee" stimulus program. And until those beans can be picked by American workers who are paid a living wage, have great health care benefits, 401(k)s and union representation, this will call for shared sacrifice.

To complement this initiative, I will also propose to Congress that we invest in Florida orange juice production, Nicorette gum and California wines, all 100 percent American products. (And we can thus reduce Brazil to a nation known only for its Carnival, bikini waxes and getting suckered into hosting the 2016 Olympic Games.)

Once fully implemented, we will then turn our full attention to growing cocoa in New Hampshire, a state that figures prominently in the 2012 primaries, instead of importing our secondary caffeine and fat addictions — chocolate — from the Ivory Coast and Ghana. After that we will move on to the idiom — "For all the tea in China" — and have farmers in another early primary state, Iowa, convert some of their corn (aka ethanol) acreage to tea, thus stopping the flow of American dollars to China and India.

And then for the final phase, I am fully prepared to give new meaning to the term "Banana Republic."

Wednesday, 30 March 2011

Mark Pennington writes on a recent judgement from the European Court of Justice. Pennington says,

Classical liberals claim that theories of justice must be judged by their practical capacity to facilitate positive sum games in society and to eliminate scope for the exercise of inconsistent and arbitrary political power. Unfortunately, as one of the recent rulings by the European Court of Justice reveals, few people in today’s legal and political elites are willing to conceive justice in this regard. Three weeks ago the European Court ruled that it was inadmissible for car insurance providers to take into account sex-specific differences for risk assessment and actuarial purposes on the grounds that this breached the fundamental ‘right to equal treatment’ for men and women. As a consequence, European women will no longer be able to benefit from cheaper driving insurance resulting from their lesser likelihood of involvement in automobile accidents than men of equivalent age and experience. It is difficult to see how this decision is compatible with a positive sum view of society. Men will not be made any better off by the decision as their insurance premiums will at best remain unchanged and women will be made worse off as their previously cheaper premiums will now be equalised upwards in line with those of men.
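The premium arithmetic behind this claim can be sketched with a toy insurance pool. The accident probabilities, claim size and pool sizes below are invented for illustration, and the adverse-selection step (low-risk women dropping cover) is one possible path, not a prediction:

```python
# Toy pool: illustrative numbers only, not actuarial data.
p_women, p_men = 0.05, 0.10   # assumed annual accident probabilities
claim = 10_000                # assumed average claim size
n_women, n_men = 500, 500     # insured pool

# Actuarially fair premiums when insurers may price by gender
prem_women = p_women * claim  # each group pays its own expected cost
prem_men = p_men * claim

# Unisex pricing: everyone pays the pooled expected cost
pooled = (n_women * p_women + n_men * p_men) * claim / (n_women + n_men)
print(prem_women, prem_men, pooled)  # 500.0 1000.0 750.0

# If low-risk women respond by dropping cover (adverse selection), the
# remaining pool is riskier and the unisex premium drifts towards the
# old male premium.
n_women_left = 200
pooled_after_exit = (n_women_left * p_women + n_men * p_men) * claim / (n_women_left + n_men)
print(round(pooled_after_exit))  # 857
```

Under these assumed numbers women's premiums rise and men's fall at first, and any exit by low-risk drivers pushes the common premium further upwards, which is the mechanism the Woodfield paper below analyses formally.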

The European Court could have made a better decision had they bothered to read a paper by an old (yes, I really did mean to emphasise the "old" in this sentence) teacher of mine, Alan Woodfield. Back in 2000 Alan published "Preventing Insurance Markets from Separating into Gender-Dominated Price-Coverage Combinations", New Zealand Economic Papers, 34(2), 243-268. The abstract reads:

This article first examines the extant literature on regulatory attempts to prohibit gender-based risk categorization in insurance markets as adopted in a number of countries and proposed in New Zealand's Human Rights Bill 1992 (but not subsequently enacted). The literature suggests that regulators' aspirations and expected outcomes may not materialize. Stronger regulations that effectively impose unisex pricing requirements at either the level of the individual firm or the market level, and which attempt to prevent markets separating into price-coverage combinations dominated by one or other gender, are then evaluated. While these raise the likelihood that targeted gender groups or the majority of their members are made better off via access to pooling contracts, the desired results are still not guaranteed. A variety of outcomes are possible, including pooling contracts that make no insured person better off and separating contracts that make targeted groups better off. Outcomes are sensitive to various parameters and also to the concepts of equilibrium deemed appropriate to the problem.

In conclusion Alan writes,

This article has examined the nature of contracts and welfare implications of interventions in competitive insurance markets which attempt to compensate for gender-based differences in risk. Although these interventions fail to enhance efficiency, they might be expected to raise the welfare levels of those agents allegedly suffering discrimination in insurance markets. For the case where females, for example, are uniformly riskier than males, it is shown that females may not necessarily be better off as a result of regulation, although they will be so in a number of situations, whereas males are always worse off. Where females are riskier on average, but some females are low-risk types and some males are high-risk types, it is shown that if the information required to assign individuals to the correct risk class is prohibitively costly for insurers to obtain, then while high-risk females are typically better off as a result of regulation, they need not be so, and can even be worse off, compounding the inefficiencies associated with adverse selection. The welfare effects for low-risk females are extremely variable, and are critically dependent on the specific form of the regulation, the underlying parameters of the problem, and the choice among alternative myopic (Nash) and non-myopic (Riley, Wilson, Spence-Miyazaki) concepts of equilibrium. Further, equilibrium will not always be characterized by pooling rather than separating contracts even when unisex insurance prices prevail everywhere, and where pooling contracts do prevail, regulators cannot be assured that all members of a disadvantaged gender group will be made better off by regulation.

I'm guessing that the European Court of Justice did not think the issue through. The set of possible outcomes is more complex than the court seems to think. And the outcome they did pick looks like one of the worst.

In 2009, the German constitution was amended to stop the federal and state governments from running budget deficits. The rule is being phased in over an eleven-year period: from 2016 governments won't be able to run a deficit of more than 0.35% of GDP, and from 2020 deficits won't be allowed at all. In America, 49 states have some form of balanced budget provision (the exception is Vermont). In Oregon, the law forbids a state surplus of more than 2% of GDP; if there is one, anything above this threshold is refunded to taxpayers. The Federal government has no cap on its spending at all, but various balanced budget amendments have been proposed, being brought before Congress in the 1991-1992, 2001-2002 and 2005-2006 sessions. These failed because of the difficulty of passing amendments, which require a two-thirds majority in both houses of Congress and ratification by three-quarters of the states.
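As a rough sketch of how such rules operate, the two described above can be written as simple checks. The GDP and budget figures fed in are made up, and the rules are coded as the passage states them rather than from the statutes themselves:

```python
# The two fiscal rules, coded as described in the text (illustrative only).
def german_brake_ok(deficit: float, gdp: float, year: int) -> bool:
    """Debt brake phase-in: deficit of at most 0.35% of GDP from 2016, none from 2020."""
    if year >= 2020:
        return deficit <= 0.0
    if year >= 2016:
        return deficit <= 0.0035 * gdp
    return True  # no binding cap before the phase-in

def oregon_refund(surplus: float, gdp: float, cap_pct: float = 2.0) -> float:
    """Any surplus above cap_pct% of GDP is refunded to taxpayers."""
    cap = cap_pct * gdp / 100
    return max(0.0, surplus - cap)

print(german_brake_ok(deficit=3e9, gdp=1e12, year=2017))  # True: 0.3% < 0.35%
print(german_brake_ok(deficit=3e9, gdp=1e12, year=2021))  # False: no deficit allowed
print(oregon_refund(surplus=3e9, gdp=100e9))              # 1000000000.0 refunded
```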

I'm guessing Ganesh Nana would not be too keen on the idea but I think many other economists would give the idea support. The size of budget deficits being run in many countries is of concern to economists. Recently in the U.S., for example, 10 former chairs of the President's Council of Economic Advisers wrote,

There are many issues on which we don’t agree. Yet we find ourselves in remarkable unanimity about the long-run federal budget deficit: It is a severe threat that calls for serious and prompt attention.

Given such concerns, the idea of a balanced budget law being discussed is understandable and not without merit.

Now, I believe the [broken window] fallacy is indeed a fallacy, and I find the idea that Japan might somehow gain from this bout of terrifying havoc and mass death both ridiculous and disgustingly Panglossian. But, really, this isn't about us, and I've grown weary of playing a part in the rote broken-windows Punch and Judy show. Much more pertinent and interesting is the fascinating, lively, empirically-informed academic literature on the economic effects of disasters, which I was reading up on last night. Alas, the New York Times' Binyamin Appelbaum beat me to the punch, providing a short overview of some recent research that finds that disasters have no long-term effect on GDP. This excellent 2008 Boston Globe article by Drake Bennett offers a more comprehensive summary.

So an earthquake in either Japan or Christchurch will do nothing to improve economic performance in either place.

Dan [Stastny] concludes this wonderful book [The Economics of Economics] by informing his readers that for economics to be respected and to have its teachings heeded, "it may not only need its Samuelsons, Friedmans or Hayeks, but also its Cobdens, Brights, and Bastiats. When economists figure this out, there will be a better chance that they may at last become as important as garbagemen, at least in the eyes of those who consider handling of ideas as momentous as handling of garbage."

While Stastny is right about the importance of communicating economic ideas to the general public, I'm not sure that it's the job of economists, as such, to do it. Specialisation and the division of labour suggest to me that there are advantages to people doing whatever is their comparative advantage. In general it seems to me that there are gains to be exploited when economists write for their peers, while economic journalists write for the general public. Would we really have gained anything if Ronald Coase had spent his time explaining his "The Nature of the Firm" paper to the general public rather than writing "The Problem of Social Cost", or if Henry Hazlitt had written journal articles on mechanism design rather than "Economics in One Lesson"? We have to ask, What is the opportunity cost of having economists try to communicate directly with all of their "scientific peers, students, policy makers, and the general public", as Boettke suggests? Trying to create people with a comparative advantage in all four areas defies the economic logic behind specialisation. You can't have a comparative advantage in everything.
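The specialisation point rests on ordinary comparative-advantage arithmetic, which a toy calculation makes concrete. The monthly output rates below are invented purely for illustration:

```python
# Toy comparative-advantage check; output rates per month are invented.
economist = {"papers": 2.0, "columns": 4.0}    # absolutely better at both tasks
journalist = {"papers": 0.5, "columns": 3.0}

# Opportunity cost of one popular column, measured in journal papers forgone
oc_econ = economist["papers"] / economist["columns"]      # 0.5 papers per column
oc_journ = journalist["papers"] / journalist["columns"]   # ~0.17 papers per column

# The journalist forgoes fewer papers per column, so the journalist holds the
# comparative advantage in popular writing even though the economist is
# absolutely better at both tasks.
print(oc_journ < oc_econ)  # True
```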

Tuesday, 29 March 2011

Vincent Reinhart of the American Enterprise Institute talks with EconTalk host Russ Roberts about the government interventions and non-interventions into financial markets in 2008. Conventional wisdom holds that the failure to intervene in the collapse of Lehman Brothers precipitated the crisis. Reinhart argues that the key event occurred months earlier when the government engineered a shotgun marriage of Bear Stearns to JP Morgan Chase by guaranteeing billions of Bear's assets and sending a signal to creditors that risky lending might come without a cost. Reinhart argues that there is a wider menu of choices available to policy makers than simply rescue or no rescue, and that it is important to take action before the crisis comes to a head.

A more appropriate narrative of the financial crisis that exploded in September 2008 would begin with how the Corps of Financial Engineers—comprising chiefly the Secretary of the Treasury, the Chairman of the Federal Reserve, and the President of the Federal Reserve Bank of New York—inserted the government into the resolution of the investment bank Bear Stearns in March 2008. The financial authorities interpreted the death throes of the mid-sized investment bank as a problem of systemic importance and, with an ill-considered and unprecedented decision, intervened in a way that protected the uninsured creditors of Bear Stearns and raised the expectations of future bailouts. When the same Corps of Financial Engineers then failed to intervene in September 2008, Lehman Brothers entered bankruptcy. The resulting market seizure was in large part a counter-reaction based on the prior official decision just six months earlier to protect Bear Stearns.

Many observers, including the Secretary of the Treasury at that time Hank Paulson (2010) and Federal Reserve Chairman Ben Bernanke (2010), have looked back at the decision to let Lehman slip into bankruptcy on September 14, 2008, with regret and bemoaned the lack of tools available to them at the time to prevent the outcome. I will argue that Lehman’s failure had widespread consequences because of the false hopes engendered by Fed support to Bear Stearns. Instead of asking “Why not save Lehman?” a more useful and consequential question is “Why save Bear Stearns?”

The Ministry of Health should look into paying parents to encourage them to vaccinate their children, a report into New Zealand's lagging immunisation rates says.

and

The report ruled out making immunisation compulsory but directed the Ministry of Health to consider immunisation incentive payments to parents, or linking existing parental benefits to immunisation.

The current incentive scheme offers payments to primary health organisations for reaching certain immunisation targets but no direct payments to parents.

In Australia, parents on any income are eligible for two payments of $A122.75 ($NZ166.91) if they ensure their children have met immunisation schedule requirements by certain ages.

It would be interesting to see just what effect payment would have. Are monetary payments the best form of incentive in situations like this? What exactly stops parents from having their kids immunised? The incentives of the parents and the kids would seem to be aligned well enough to get the parents to have the kids immunised just for the kids' sake.

Lanny's death naturally puts us at Reason in a reflective mood. The strangest part of it is that, though we're all his heirs and beneficiaries, nobody currently working at the magazine ever met him. It's been that way for a long time. As Virginia Postrel, editor in chief of Reason from 1989 until 2000, told me via email, not only had she never met him, but during her time at the mag, folks didn't even know what had happened to him.

Monday, 28 March 2011

A question that is often asked with regard to productivity across countries is: Why is the U.S. more productive than the E.U.? Studies have suggested that much of the difference can be explained by the wider use of information and communication technologies in the U.S. But this just raises the obvious question: Why does the U.S. use these technologies more? A new column at VoxEU.org provides new evidence suggesting the answer may lie in differences in employment protection legislation.

The column, "Employment protection and technology choice" by Eric Bartelsman, Joris de Wind and Pieter Gautier, notes that until the mid-1990s, E.U. productivity had been converging towards U.S. productivity. But since then, U.S. productivity growth has accelerated and the U.S.-E.U. gap has widened. Robert Gordon is one economist who has noted this fact:

[...] since 1995 Europe has experienced a productivity growth slowdown while the United States has experienced a marked acceleration. As a result, just in the past eight years, Europe has already lost about one-fifth of its previous 1950-95 gain in output per hour relative to the United States. Starting from 71 percent of the U. S. level of productivity in 1870, Europe fell back to 44 percent in 1950, caught up to 94 percent in 1995, and has now fallen back to 85 percent. (Gordon 2007: 176).

One factor that has been put forward to explain this productivity difference is the production and use of information and communication technologies (ICT). Such activity is much lower in the E.U. than in the U.S.

Bartelsman, de Wind and Gautier continue,

Why has the adoption of the new ICT been much slower in the EU? Recent research in Brynjolfsson et al. (2008) and our latest paper (Bartelsman et al. 2010) provides evidence that the adoption of these new technologies is associated with an increase in the variance of firm productivity. For example, implementation of advanced business software like SAP and Oracle requires a new organisational structure and the outcome is inherently uncertain. The variance of firm productivity is therefore relatively large in sectors that intensively use ICT.

For a given firm, adopting a technology with risky outcomes is attractive because the benefits can be scaled up if the outcome is good, while firms can fire workers or exit if things go poorly. Essentially, the ability to close a production unit is a real option that bounds the downward risk.
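A quick simulation illustrates the real-option logic in the quote: truncating the downside (by exiting) can make a risky technology with a lower mean payoff more attractive than a safe one. The payoff distributions below are assumptions for illustration, not taken from the paper:

```python
import random

# Monte Carlo sketch: the exit option truncates losses, so a risky technology
# with a lower mean payoff can beat a safe one. Distributions are assumed.
random.seed(0)

def expected_payoff(draws, can_exit):
    # With the exit option the firm closes down rather than absorb a negative draw.
    outcomes = [max(x, 0.0) if can_exit else x for x in draws]
    return sum(outcomes) / len(outcomes)

safe_payoff = 1.0                                          # stable productivity
risky = [random.gauss(0.8, 2.0) for _ in range(100_000)]   # lower mean, high variance

no_exit = expected_payoff(risky, can_exit=False)
with_exit = expected_payoff(risky, can_exit=True)
print(round(no_exit, 2), round(with_exit, 2))  # ~0.8 vs ~1.26: only with the
                                               # exit option does risky beat safe
```

Employment protection effectively taxes the exit that does the truncating, which is why it bears hardest on the risky, ICT-intensive sector.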

But what is the role of labour market policy? Bartelsman, de Wind and Gautier explain that one major policy difference between Europe and the U.S. is that employment protection legislation is much stricter in Europe. They

[...] show that the employment share of risky (ICT-intensive) sectors is indeed smaller in the EU than in the US, and that, within Europe, high-protection countries have relatively smaller ICT-intensive sectors than low-protection countries. We then find that countries with strict legislation are relatively less productive.

Bartelsman, de Wind and Gautier go on to say,

In order to explore the mechanism and to establish how much of the US-EU productivity divergence can be explained by stricter employment legislation, we develop a two-sector matching model with endogenous technology choice, i.e. firms can choose between a safe sector with stable productivity and a risky sector with productivity subject to sizable shocks. In the absence of employment protection legislation, the risky sector is relatively attractive because firms have the option to fire workers which bounds the downward risk. Introducing legislation makes it less attractive to use risky technologies, so this establishes the negative relationship between employment protection and the size of the risky sector. Legislation also results in more labour hoarding, i.e. the productivity threshold below which a worker is fired is lower if legislation is stricter. Further, the size of the effect increases as the variance of the shocks in the risky sector increases. This explains why productivity growth is lower in high-protection countries in particular when new technologies with a high variance in profitability become available.
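The labour-hoarding margin described in the quote can be sketched in a few lines: with a firing cost, a firm tolerates productivity below the wage, so the firing threshold falls. The wage, firing cost and interest rate below are illustrative assumptions, not the paper's calibration:

```python
# Labour-hoarding sketch: the firm fires only when the flow loss (wage minus
# current productivity y) exceeds the flow equivalent r*F of the one-off
# firing cost F. All parameter values are assumptions for illustration.
def firing_threshold(w: float, firing_cost: float, r: float = 0.05) -> float:
    return w - r * firing_cost

w = 1.0
print(firing_threshold(w, firing_cost=0.0))  # 1.0: fire as soon as y < w
print(firing_threshold(w, firing_cost=4.0))  # 0.8: workers with y in [0.8, 1.0)
                                             # are hoarded despite y < w
```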

So no matter what your views of the origins of employment protection legislation, the research findings Bartelsman, de Wind and Gautier put forward clearly show that the economic costs of employment protection increase over time with changes in the type of technological opportunities available, while the benefits are unaffected.

If these research results are right then I can't help but think that they have serious implications for New Zealand: if we really want to have a "knowledge economy", and catch up with Australia, then we should not ignore such findings.

There are many issues on which we don’t agree. Yet we find ourselves in remarkable unanimity about the long-run federal budget deficit: It is a severe threat that calls for serious and prompt attention.

While the actual deficit is likely to shrink over the next few years as the economy continues to recover, the aging of the baby-boom generation and rapidly rising health care costs are likely to create a large and growing gap between spending and revenues. These deficits will take a toll on private investment and economic growth. At some point, bond markets are likely to turn on the United States — leading to a crisis that could dwarf 2008.

Of the 10, 4 served under Democratic and 6 under Republican presidents: one worked under Carter, two worked under Reagan, two worked under Clinton, four worked under Bush, and one worked under Obama.

REPEC provides an objective measure of who is "Royalty" in the economics profession. The current list of the top 5% is here. I am ranked #681 out of 27,365 economists so that's not bad (and my 3 books aren't counted here). But, here is the interesting part. There are 39 women who rank in the top 1000 and 0 of them blog. Contrast that with the men. Consider the top 100 men. In this elite subset, at least 8 of them blog. Consider the men ranked between 101 and 200. At least six of them blog. So, this isn't very scientific but we see a 7% participation rate for excellent male economists and a 0% participation rate for excellent women. This differential looks statistically significant to me.

The answer could be very simple. It's possible that female economists simply have a higher opportunity cost of time and the returns to blogging are low.
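Whether the gap really is statistically significant can be checked with a one-sided Fisher exact test. The male counts below are an assumption pieced together from the post (8 bloggers in the top 100 plus 6 in the next 100), and under these numbers the p-value comes out around 0.08, so the significance claim is less clear-cut than it looks:

```python
from math import comb

# One-sided Fisher exact test on blogging by gender among top-ranked economists.
# Assumed counts: 39 women with 0 bloggers; 200 men with 14 bloggers.
women, men = 39, 200
bloggers = 14          # all male under the assumed counts
N = women + men        # 239 economists considered

# P(0 of the 14 bloggers are women) if blogging were independent of gender:
# the hypergeometric probability that the 39 "woman" slots all come from the
# 225 non-bloggers.
p = comb(N - bloggers, women) / comb(N, women)
print(round(p, 3))  # ~0.076: suggestive, but short of the usual 5% cut-off
```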

Japan has certainly been unfortunate during the past couple of decades. It has had almost twenty years of confidence-destroying slow economic growth, a series of humiliating confrontations with a rising and more aggressive China, and finally the biggest earthquake ever recorded in Japan. And this earthquake not only directly caused great damage, but it also set off a tsunami with huge destructive power. As if this were not enough, the combined earthquake-tsunami badly damaged two nuclear energy plants located on the fault lines and by the sea, damage that led to the release of as yet undetermined amounts of radiation. Still, I do not expect this disaster, despite its severity, to have major effects on the Japanese economy, but it will have a big impact on the nuclear power industry, and may help different countries better prepare for very rare but destructive natural events.

Compared to recent headline-grabbing events, dealing with “behind-the-border barriers” and keeping protectionist tendencies at bay might seem to be small potatoes. This column argues that the “murky protectionism” that affects €100 billion of trade will have profound implications for Europe and the rest of the world, and as such is worthy of attention.

It is broadly agreed that trade liberalisation can increase productivity. The question is how. Earlier literature emphasises the role of firms “learning” to be more productive, whereas recent studies suggest that more productive firms are “stealing” market share from less productive ones, thus raising overall productivity. Presenting evidence from India’s trade liberalisation since 1991, this column finds evidence for both but argues that learning outweighs stealing.

The economist versus the politician on trade. A debate that never happened: "John Stossel and his team at Fox Business tried to get Sen. Brown to publicly debate trade with me. Brown refused, alleging that I'm an unworthy opponent."

Or at least why Gregory K. Dow argues that they can not be owned. In Dow (2003: 107) he writes,

[...] no one can own a firm because a firm is a set of human agents.

This seems a bit odd. It's based on an incorrect definition of a firm. First, obviously the idea of the legal ownership of a firm is well established in most countries. Second, as people own firms and they can not own other people, firms must be something other than a group of human agents. This explains why theories, such as the property rights approach to the firm, define ownership in terms of control rights over non-human assets.

PS: If you have an interest in labour managed firms, Dow's book is a must read.

Saturday, 26 March 2011

In the Hart (2003) paper referred to in a previous post the parallels between theories of the firm and of privatisation are discussed in the opening section. The parallels are indeed great. Hart writes,

In spite of these differences, the issues of vertical integration and privatisation have much more in common than not. Both are concerned with whether it is better to regulate a relationship via an arms-length contract or via a transfer of ownership. Given this, one might have expected the literatures to have developed along similar lines. However, this is not so. Whereas much of the recent literature on the theory of the firm takes an 'incomplete' contracting perspective, in which inefficiencies arise because it is hard to foresee and contract about the uncertain future, much of the privatisation literature has taken a ‘complete’ contracting perspective, in which imperfections arise solely because of moral hazard or asymmetric information.

My own view is that this is unfortunate. One of the insights of the recent literature on the firm is that, if the only imperfections are those arising from moral hazard or asymmetric information, organisational form – including ownership and firm boundaries – does not matter: an owner has no special power or rights since everything is specified in an initial contract (at least among the things that can ever be specified). In contrast, ownership does matter when contracts are incomplete: the owner of an asset or firm can then make all decisions concerning the asset or firm that are not included in an initial contract (the owner has ‘residual control rights’).

Applying this insight to the privatisation context yields the conclusion that in a complete contracting world the government does not need to own a firm to control its behaviour: any goals – economic or otherwise – can be achieved via a detailed initial contract. However, if contracts are incomplete, as they are in practice, there is a case for the government to own an electricity company or prison since ownership gives the government special powers in the form of residual control rights.

Hart is right in what he says about the importance of the incomplete contracts approach to both the theory of the firm and the theory of privatisation. What I find odd about the quote above is the comment,

Whereas much of the recent literature on the theory of the firm takes an 'incomplete' contracting perspective, in which inefficiencies arise because it is hard to foresee and contract about the uncertain future, much of the privatisation literature has taken a ‘complete’ contracting perspective, in which imperfections arise solely because of moral hazard or asymmetric information. (emphasis added)

While it is true that during the 1980s the theory of privatisation was based around asymmetric information models, which really couldn’t explain the difference between state and private ownership endogenously, since around 1990 this shortcoming has been noted and countered via the application of incomplete contract theories.

The need for incomplete contracting models was highlighted by a number of ownership irrelevance results that apply to complete contracting models. These ownership irrelevance results can be illustrated by considering Sappington and Stiglitz's 'Fundamental Theorem of Privatization' (Sappington and Stiglitz 1987) and Williamson's idea of selective intervention (Williamson 1986: Chapter 6). Traditionally the theoretical case for public ownership has rested on considerations of allocative efficiency - that is, the properties of resource allocation in the economy taken as a whole - while the case for private ownership has rested on the incentives and constraints that the market provides to ensure efficiency within firms - that is, productive efficiency. The Sappington and Stiglitz, and Williamson results show that in a complete or comprehensive contracts world, allocative and productive efficiency will be the same under both public and private ownership. Hence it is not clear what advantages privatisation (or nationalisation) could bring under this framework.

The notion of selective intervention argues that the government can reach the same level of productive efficiency as the private sector by mimicking the actions of a private firm. If the government organises the firm in exactly the same way as a private owner would, if it uses the same incentive schemes for managers and workers, and if it deviates from such a policy only if there is the possibility of doing something strictly better than a private firm, then a nationalised firm should produce at least as efficiently as a privatised one.

Sappington and Stiglitz assume that the government's objective in choosing between public or private production is threefold:

economic efficiency: the government wishes that whoever has the comparative advantage in production undertakes it;

equity: the government has certain distributional objectives;

rent extraction: the government wishes to extract as much of the producer's rent as possible.

The 'Fundamental Theorem of Privatization' provides conditions under which all of these objectives can be attained perfectly via an auction whereby the potential producers bid to become the single supplier. It is assumed that the good is produced under conditions of increasing returns to scale so industry costs are minimised with a single producer of the good. The fundamental theorem requires at least two risk-neutral bidders who have symmetric beliefs about the least-cost method of production. Actual costs of production are only learned by the producer just before production takes place. The government has a valuation, v, of the level of output, Q, of the good. This valuation is given by v=V(Q). These conditions are referred to as the 'ideal setting'.

Sappington and Stiglitz show that a simple auction will ensure that all the government's objectives can be reached perfectly. The government auctions off the right to be the good's sole producer and receive a (total) revenue of P(Q) for producing output level Q. The government sets P(.) equal to its own valuation of the level of output produced, i.e. P(Q)=V(Q). In other words, the production decision is delegated entirely to the producer and the producer is paid an amount exactly equal to the value to the government of the level of output produced.

This means that once actual production costs are revealed a profit maximising firm will face the problem Max V(Q)-C(Q), where C(Q) is the cost function. The result of implementing this scheme is that the firm submitting the highest bid (and thus becoming the producer) will subsequently select the level of production most desired by the government (the welfare maximising output), conditional on the realisation of actual production costs.

If we interpret the government's valuation of output, V(Q), as being gross consumer surplus and assume that production costs have been revealed, then the firm's problem, Max V(Q)-C(Q), is to maximise the sum of producer and consumer surplus which implies productive and allocative efficiency.
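A small numerical check of the delegation result: when the winning bidder is paid P(Q) = V(Q), its profit-maximising output is the welfare-maximising one, whereas a fixed unit price set without knowledge of realised costs generally is not. The quadratic forms for V and C below are assumed purely for illustration:

```python
# Delegation vs fixed-price sketch; quadratic V and C are illustrative assumptions.
V = lambda q: 10 * q - 0.5 * q ** 2      # government's valuation of output
C = lambda q: 2 * q + 0.25 * q ** 2      # realised production cost
welfare = lambda q: V(q) - C(q)

grid = [q / 100 for q in range(0, 1001)]  # candidate outputs 0.00 .. 10.00

# Delegation: the producer is paid P(Q) = V(Q), so it maximises V(Q) - C(Q),
# which is the welfare problem itself.
q_delegated = max(grid, key=lambda q: V(q) - C(q))

# Contrast: a fixed unit price, set without knowing the realised cost,
# generally induces a different, welfare-inferior output.
p = 4.0
q_fixed = max(grid, key=lambda q: p * q - C(q))

print(q_delegated, q_fixed)  # 5.33 4.0: delegation hits the welfare optimum
```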

With risk neutral firms initially sharing symmetric beliefs about the costs of production, the auction will result in the government capturing all the ex ante producer rents. Thus the government can ensure its ideal outcome via delegation of production even without any knowledge of the costs of production.

The argument made above was that Williamson's notion of selective intervention shows that state-owned enterprises will be as productively efficient as private firms, in addition to their assumed allocative efficiency, and that the Sappington and Stiglitz theorem shows us that private firms can also be both allocatively and productively efficient. In other words, we have argued that the nature of ownership is irrelevant to the performance of a firm. Other neutrality results are to be found in Shapiro and Willig (1990), and in the context of full corruption, i.e. where unrestricted bribes between the manager of the firm and the politicians are allowed, Shleifer and Vishny (1994) show that neither privatisation nor corporatisation matters for the final allocation of resources.

The shortcoming of both arguments is that they rest on the implicit assumption that it is possible to write a complete or comprehensive contract for the entire life of the firm. To illustrate this point, look again at the Sappington and Stiglitz auction scheme and consider the commitment problems involved. For such an auction to work, the government must, at the time of privatisation, be able to commit itself - and all future governments - to actually paying the social valuation of output, V(Q), to the private owner at all times in the (possibly distant) future. That is, it must be possible to specify unambiguously, in a contract, the government's valuation of production for all possible states of the world, such that the agreement can be enforced by the courts. Otherwise the private owner will rationally expect that once any necessary relationship-specific investments have been made, the government will exploit the fact that such investments are sunk costs and will expropriate the owner's quasi-rents; a private owner will therefore not invest efficiently, and the government's most preferred outcome will not be achieved.

Selective intervention also fails unless contracts can cover all states of the world. As Williamson comments, ``[t]he impossibility of selective intervention arises in conjunction with efforts to replicate incentives found to be effective in one contractual/ownership mode upon transferring transactions to another. Such problems would not arise but for contractual incompleteness [ ... ]." (Williamson 1996: 178). As with the Sappington and Stiglitz auction, there are commitment problems with selective intervention. Here the government must be able to commit itself - and all future governments - to intervene only when it is economically advantageous, e.g. to deal with externalities. In particular, it must be able to commit not to intervene for political reasons, such as requiring productively inefficient overmanning to reduce unemployment in the run-up to an election. Such commitment is credible only if it can be made part of an enforceable contract, which is possible only in a complete or comprehensive contracting environment.
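The hold-up logic behind the expropriation of quasi-rents can be made concrete with a toy example (my own illustration, not from the papers cited): suppose a relationship-specific investment k yields value 2*sqrt(k) to the government, and the investment is sunk before the payment is settled.

```python
# Toy hold-up problem: why a private owner underinvests when the
# government cannot commit. Functional forms are illustrative only.

from math import sqrt

def value(k):
    return 2 * sqrt(k)             # social value of investment k

candidates = [i / 100 for i in range(0, 301)]  # grid search over k

# With commitment: the owner is paid the full value 2*sqrt(k), so he
# maximises 2*sqrt(k) - k. (FOC: 1/sqrt(k) = 1, i.e. k = 1.)
k_commit = max(candidates, key=lambda k: value(k) - k)

# Without commitment: once k is sunk the government renegotiates the
# payment down to the owner's outside option of zero, so the owner's
# payoff is simply -k and he rationally invests nothing.
k_no_commit = max(candidates, key=lambda k: 0 - k)

print(f"investment with commitment:    {k_commit}")
print(f"investment without commitment: {k_no_commit}")
```

The efficient investment is made only when the payment promise is enforceable; anticipating expropriation, the owner invests zero, which is exactly why the auction result collapses outside the complete contracting environment.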

The great advantage of a complete/comprehensive contracting environment is that it allows for the complete depoliticisation of firms. With a contract covering all possible circumstances, political opportunism can be eliminated, since there are no contingencies in which politicians are able to exercise any control rights; commitments to non-interference are thus credible and soft budget constraints can be avoided. In a world where contracts covering all possible states of the world cannot be written, i.e. if only incomplete contracts are possible, the Williamson and the Sappington and Stiglitz results will not hold, and allocative and productive efficiency may differ depending on ownership. Within such an incomplete contracts framework firms can be politicised, since the politicians, as owners, have residual control rights. A theory of privatisation (or nationalisation) is only possible within such a framework: a necessary condition for such a theory is that a firm's performance depend on the firm's ownership, and incomplete contracts allow this to happen.

The point here is that all of this was known by around 1990, and by the mid-1990s incomplete contract models were turning up in the literature - see, for example, Schmidt (1996a) and Schmidt (1996b), with working paper versions even earlier. Given this, it does seem a bit odd to be saying in 2003 that "much of the privatisation literature has taken a 'complete' contracting perspective". Incomplete contracting models of privatisation had been available for around 10 years before Hart's article.

Shapiro, Carl and Willig, Robert D. (1990). `Economic Rationales for the Scope of Privatization'. In Ezra Suleiman and John Waterbury (eds.), The Political Economy of Public Sector Reform and Privatization, Boulder: Westview Press, 55-87.

A new NBER study of 14 European nations finds that football players tend to locate in countries that have comparatively low income tax rates. This response to tax rates is especially pronounced for the most able and well-paid athletes, and is actually negative for the least able and lowest paid among the professionals. Often, national tax breaks designed to lure top-notch foreign players displace the domestic players in a league.

In Taxation and International Migration of Superstars: Evidence from the European Football Market (NBER Working Paper No. 16545), authors Henrik Kleven, Camille Landais, and Emmanuel Saez construct two models of the labor market for football players in order to determine the top tax rates that nations can levy without driving players out of the country. On the whole, they find that all 14 European nations have rates below these revenue-maximizing tax rates. But the competition for top foreign talent is fierce, and four nations (the United Kingdom, Germany, Greece, and Switzerland) charge foreign players a higher tax rate than the revenue-maximizing rate generated by the model.

By studying football players, the authors hope to begin to address the broader question of how tax rates affect taxpayer behavior. "[F]ootball players are likely to be a particularly mobile segment of the labor market, and our study therefore provides an upper bound on the migration response for the labor market as a whole," the authors conclude. "Obtaining an upper bound is important to gauge the potential importance of this policy question."
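A rough sense of what a "revenue-maximizing rate" means when taxpayers are mobile can be had from a toy location-choice model. This is entirely my own illustration, not the authors' specification: players stay at home unless after-tax pay abroad beats after-tax pay at home plus an idiosyncratic attachment to home, and government revenue then traces out a Laffer curve in the tax rate.

```python
# Toy Laffer curve with mobile players. All numbers are hypothetical.

def revenue(tau, foreign_tau=0.40, wage=1.0, n_players=100):
    """Home-country revenue at top rate tau.

    Player i stays iff after-tax home pay plus an idiosyncratic home
    attachment beats after-tax foreign pay; attachments are spread
    evenly over [0, 0.5].
    """
    stayers = 0
    for i in range(n_players):
        attachment = 0.5 * i / (n_players - 1)
        if (1 - tau) * wage + attachment >= (1 - foreign_tau) * wage:
            stayers += 1
    return tau * wage * stayers

rates = [i / 100 for i in range(0, 100)]
best = max(rates, key=revenue)
print(f"revenue-maximizing rate: {best:.2f}")
# Rates above `best` drive out enough players that revenue falls, so a
# country charging mobile foreigners more than this rate loses money.
```

The shape, not the numbers, is the point: with highly mobile taxpayers the revenue-maximizing rate sits well below 100 percent, which is why charging foreign players above it, as the paper finds four countries do, is self-defeating.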

In December 1995, the European Court of Justice handed down the so-called "Bosman ruling," which liberalized the market for European football players. Specifically, it eliminated rules that effectively limited the number of foreign players on any one team and practices that discouraged players from moving to another European team once their contract was up. The authors find that the share of foreign players went up dramatically after the Bosman ruling, and the share of domestic players went down in the top leagues of the 14 European nations they examine. Furthermore, in studying teams' performances from 1980 through 2009, they find that low-tax nations had better teams after Bosman. "This suggests that low-tax countries experienced an improvement of club performances by being better able to attract good foreign players and keep good domestic players at home," they write.

Their study also looks at the impact of tax reforms in specific countries. For example, in 2004 Spain introduced the so-called "Beckham Law" (named after British superstar David Beckham, one of the first footballers to take advantage of it). It allowed nonresidents to be taxed at a flat rate of 24 percent instead of the progressive rate for residents, whose top marginal rate by 2008 stood at 43 percent. After the law, Spain saw its share of foreign players increase while nearby Italy, which had a similar top league, saw its share of foreign talent shrink. Similarly, Denmark (in 1992) and Belgium (in 2002) introduced reforms that gave tax breaks to foreign players; like Spain, their leagues experienced an increase in foreign players. In Greece, after the removal of a tax cap effectively raised taxes on high earners starting in 1993, Greek players in their prime migrated abroad more often than those who reached their prime before the change, or after the cap was reinstated (which lowered taxes again). "These observations provide ... compelling evidence of a tax-induced migration response," the authors write.

Yes, taxes really do act as an incentive. In this case, an incentive to migrate.

Friday, 25 March 2011

The basic problem with PPPs is the writing of the contract for whatever project is being undertaken. Depending on what can and cannot be contracted on, PPPs may or may not be the way to go. To make this a little clearer, a look at a paper by Oliver Hart is useful.

In Hart (2003) a simple incomplete contracts model of PPPs is put forward. Consider a situation where, for example, a government wants a new prison built and run. Let's assume there are two periods, the “build” period followed by the “operate” period. There are (unverifiable) social benefits from the prison, which we will call B, and (unverifiable) costs, denoted C. Both B and C are affected by investments that the builder can make. Two forms of investment are available to the builder, i and e. An increase in i increases B and decreases C, and so is beneficial all round, while an increase in e decreases both B and C. This means the builder gains from e while “society” doesn't. We will therefore call i productive investment and e unproductive investment. The builder's total investment cost is i+e. Since i and e are unverifiable, they cannot be contracted on.

Now consider the types of contract for building and operating the prison. First, separate contracts for building and for operating the prison; call this unbundling. Second, a PPP contract, under which the builder both builds and runs the prison; this is bundling. Under unbundling, the builder sets i=e=0: he builds the cheapest prison possible while staying within the contract. Under bundling, the builder invests in both i and e up to the point where the marginal reduction in his operating cost, C, from each investment equals its marginal cost of 1.

The trade-off between unbundling and bundling is simple. Under unbundling, the builder internalises neither the social benefit B nor the operating cost C. By setting i=e=0, he does too little of the productive investment, i, but the right amount of the unproductive investment, e. In contrast, under bundling or PPP, the builder again does not internalise B, but does internalise C. As a result, he does more of the productive investment, although still too little, but also more of the unproductive investment.

The model yields a simple conclusion. Conventional provision (‘unbundling’) is good if the quality of the building can be well specified, whereas the quality of the service cannot be. Under these conditions, underinvestment in i under conventional provision is not a serious issue, whereas overinvestment in e under PPP may be. In contrast, PPP is good if the quality of the service can be well specified in the initial contract (or, more generally, there are good performance measures which can be used to reward or penalise the service provider), whereas the quality of the building cannot be. Under these conditions, underinvestment in i under conventional provision may be a serious issue, while overinvestment in e under PPP is not.
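To see the trade-off in numbers, here is a minimal parameterisation of the model. The concave functional forms and parameter values are my own illustration; Hart's paper is more general: B = b0 + g*sqrt(i) - h*e, C = c0 - a*sqrt(i) - d*sqrt(e), with investment cost i + e.

```python
# Numerical sketch of Hart's (2003) bundling vs unbundling trade-off.
# Functional forms and parameters are illustrative, not Hart's own.

from math import sqrt

b0, c0 = 10.0, 5.0
g, h = 3.0, 1.0    # effect of i (up) and e (down) on social benefit B
a, d = 1.0, 2.0    # cost reductions from i and e

def B(i, e):
    return b0 + g * sqrt(i) - h * e

def C(i, e):
    return c0 - a * sqrt(i) - d * sqrt(e)

def welfare(i, e):
    return B(i, e) - C(i, e) - i - e

# Unbundling: the builder internalises neither B nor C, so i = e = 0.
i_u, e_u = 0.0, 0.0

# Bundling (PPP): the builder internalises C only, investing until the
# marginal cost saving equals 1: a/(2*sqrt(i)) = 1, d/(2*sqrt(e)) = 1.
i_b, e_b = (a / 2) ** 2, (d / 2) ** 2

# First best: maximise welfare directly.
# FOCs: (g + a)/(2*sqrt(i)) = 1 and d/(2*sqrt(e)) = 1 + h.
i_s, e_s = ((g + a) / 2) ** 2, (d / (2 * (1 + h))) ** 2

for name, (i, e) in [("unbundled", (i_u, e_u)),
                     ("PPP", (i_b, e_b)),
                     ("first best", (i_s, e_s))]:
    print(f"{name:10s} i={i:.2f} e={e:.2f} welfare={welfare(i, e):.2f}")
```

With these numbers the PPP underinvests in i and overinvests in e relative to the first best, yet still beats unbundling; change the contractibility parameters and the ranking can flip, which is exactly Hart's point.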

The upshot of all of this is that the choice between a conventional unbundled contract and a PPP bundled contract turns on whether it is easier to write contracts on the operating phase of the prison's life than on the building phase. So the usefulness of PPPs depends on what can or can't be contracted on.

At his blog Roger Kerr asks "PPPs: Do They Work?" His answer discusses an article on public-private partnerships by economic consultant Phil Barry of Taylor Duignan Barry Ltd.

Kerr writes

But do PPPs work? Here is Phil Barry’s assessment:

The formal studies that have been undertaken generally provide a qualified “yes” to that question. I say qualified because the PPPs don’t always work. And even when they do work, the PPPs are by no means perfect.

A study by the UK National Audit Office [...] provided one of the most comprehensive independent evaluations of PPPs. That study found PPPs had their flaws: of the 37 PPP projects evaluated, 9 of the projects (24%) were late and the projects incurred cost-overruns, on average, of 22%. But the experience in the public sector was a lot worse: 70% of the projects were delivered late and the cost overruns averaged 73%.

Public-private partnerships (PPPs) seem to offer a solution to a common problem for economies which have been hampered by the poor quality of their infrastructure. PPPs mean, it is argued, that private capital can be used to fund much-needed projects, whether in transport, education, health or whatever. Better still, it is further argued, private companies can build and operate the new infrastructure, bringing large cost savings.

The first modern PPPs began in the 1980s under what became known as the Private Finance Initiative (PFI). Their numbers grew during the early-to-mid 1990s, with several design, build, finance and operate (DBFO) road schemes, as well as the construction of a number of privately-operated prisons. These projects were generally viewed as successful within government - a higher proportion were delivered on time and on budget than would have been expected using traditional procurement methods.

Building on these foundations, the election of a New Labour government saw a rapid expansion in the number of PPPs. The model fitted well with Labour’s ‘Third-Way’ approach to the economy. Instead of outright nationalisation, with its well-documented inefficiencies, the dynamism of the private sector would be harnessed for social objectives.

By 2003/04 PPP schemes accounted for 39% of capital spending by UK government departments. And by January 2008 there were over 500 operational PPP projects with a total capital value of around £44 billion and a further number in the pipeline. Their scope was also widened, with a higher proportion used to build new schools and hospitals. Public transport became a major investment priority rather than roads.

But Wellings argues this expansion of PPPs may have been misguided; indeed, it was arguably when partnerships started to go wrong. In particular, unlike the earlier schemes, the new projects were more likely to be in fields marked by a high level of political sensitivity. As an example of the problems, Wellings gives the London Underground PPP. He writes,

This huge project, designed to upgrade the Tube, required an annual subsidy of £1 billion. Fiercely resisted by the Greater London Authority under Ken Livingstone, who favoured an alternative bond finance scheme, it was imposed on the capital by central government with heavy Treasury backing. So even before it started the process was marked by a high level of controversy.

Extremely complex 30-year contracts were drawn up, at a cost of £455 million in consultancy fees, and the Rail Regulator was appointed as ‘PPP Arbiter’ to adjudicate any disputes. Two consortiums were selected to upgrade and maintain different sections of the network.

In 2003 the Metronet consortium began a £17 billion project covering nine out of twelve tube lines. It soon got into difficulties. In April 2004 it was fined £11 million for poor performance, but this was just the start.

Further fines followed and in June 2007 Metronet, concerned about cost escalation, requested an extraordinary review by the PPP Arbiter. A short-term cost overrun of £551 million was predicted, rising to £2 billion by 2010, and this was blamed on additional demands made by Transport for London.

But the Arbiter had a different view – most of the cost escalation could be explained by Metronet’s inefficiency and only a small fraction of the requested extra payments would be forthcoming. Faced with huge losses, the company went into administration.

The government tried to find private bidders for the Metronet contracts but failed – unsurprisingly given the uncertainty concerning costs. The public sector then became responsible for the upgrades and maintenance. Taxpayers would now pick up the bill for any cost overruns.

The events just described illustrate a key weakness of PPPs. When they involve essential infrastructure that government will not allow to fail (too big to fail?), it is clear that a high proportion of a project’s risk remains with the public sector. But such an acknowledgment undermines one of the major rationales for having PPPs in the first place, that they are good value for money despite apparently higher financing costs, because of their ability to transfer risk to private investors. A transfer that doesn't appear to have taken place.

Wellings goes on to explain that the UK experience thus far suggests that PPP schemes have failed to live up to their early promise. He offers several explanations for this:

Firstly, comparisons with public finance may understate the true cost of government funding. While it may be possible to borrow at low interest rates this is only because potential risks and losses have been offloaded on to taxpayers.

Secondly, a high proportion of recent PPPs have been plagued by high ‘transaction costs’. They have involved tortuous bidding processes and the creation of complex contractual agreements and regulatory frameworks, which have created additional costs and risks for the private-sector partners involved. Value for money has been reduced as a result.

Finally, the operation and outputs of PPP schemes have often been subject to substantial political and bureaucratic intervention. As seen with some of the public transport PPPs, a hostile relationship may develop between the counterparties. There can even be politically-motivated attempts to subvert the viability of projects. This makes it more difficult both to raise private finance and to transfer risk. Investors are more likely to demand a premium and contractual guarantees if they perceive political risks as high.

Wellings concludes by saying,

Accordingly, PPPs may not be a suitable funding model for some projects. The risks are particularly high in situations when government is unwilling to take a ‘hands-off’ approach. At the same time, if government will stand aside, perhaps after setting a loose regulatory framework, then depoliticisation through full-blooded privatisation may be the best option.

So overall, there are warnings from the UK experience of PPPs for countries like New Zealand that may be thinking of going down this route. PPPs do not always work, and much thought must go into when and why they are used. Hopefully these warnings will be heeded. If they are, there is no reason why PPPs could not be a good model for some projects.

Thursday, 24 March 2011

1. We’ve entered a brave new world, a very different world in terms of macroeconomic policymaking.

2. In the age-old discussion of the relative roles of markets and the state, the pendulum has swung – at least a bit – toward the state.

3. There are many distortions relevant for macroeconomics, many more than we thought was the case earlier. We had largely ignored them, thinking they were the province of the microeconomist. As we integrate finance into macroeconomics, we’re discovering that distortions within finance are macro-relevant. Agency theory – about incentives and behaviour of entities or “agents” – is needed to explain how financial institutions work or do not work and how decisions are taken. Regulation and agency theory applied to regulators themselves is important. Behavioural economics and its cousin, behavioural finance, are central as well.

4. Macroeconomic policy has many targets and many instruments (that is, the tools we use or variables to implement policy). Many examples were discussed at the conference. Here are two:

* Monetary policy has to go beyond inflation stability, adding output and financial stability to the list of targets and adding macro-prudential measures to the list of instruments.
* Fiscal policy is more than just “G minus T” and an associated “multiplier” (the proportion or factor by which changes in government spending or taxes affect other parts of the economy). There are potentially dozens of instruments, each with their own dynamic effects that depend on the state of the economy and other policies. Bob Solow made the point that reducing discussions about fiscal policy to what is the right multiplier does not do service to the issue.

5. We may have many policy instruments, but we are not sure how to use them. In many cases, we are uncertain about what they are, how they should be used, and whether or not they will work. Again, many examples came up during the conference:

* We don’t quite know what liquidity is, so a liquidity ratio is one more step into the unknown.
* It was clear that some people believe capital controls work and some don’t.
* Paul Romer made the point that, if you adopt a set of financial regulations and keep them unchanged, the markets will find a way around, and ten years later, you’ll have a financial crisis.
* Michael Spence talked about the relative roles of self-regulation and regulation. Both are needed, but how we combine them is unclear.

6. While these instruments are potentially useful, their use raises a number of political economy issues.

* Some instruments are politically hard to use. Take cross-border flows. Putting in place a multilateral regulatory structure will be very difficult. Even at the domestic level, some macro-prudential tools work by targeting specific sectors, sets of individuals, or firms, and may lead to strong political backlash by those groups.
* Instruments can be misused. It was clear from the discussion that a number of people think that, while there may be an economic case for capital controls, governments could use them instead of choosing the right macroeconomic policies. Dani Rodrik argued for using industrial policy to increase the production of tradable goods without getting a current-account surplus. But in practice we know the limits of industrial policy, and they haven’t gone away.

7. Where do we go from here? In terms of research, the future is exciting. There are many topics on which we should work – namely macro issues with, as Joe Stiglitz suggests, the right micro foundations.

8. Things are harder on the policy front. Given we don’t quite know how to use the new tools and they can be misused, how should policymakers proceed? While we have a good sense of where we want to get to, a step-by-step approach is probably the way to go.

* Take inflation targeting. We can’t, from one day to the next, just give it up and have, say, a system with five targets and seven instruments. We don’t know how to do it and it would be unwise. We can, however, introduce gradually some macro-prudential tools, testing the water to see how they work.
* Increasing the role of Special Drawing Rights (SDRs) in the international monetary system is another example. If we go in that direction, we can move slowly from, say, creating a market in private SDR bonds to exploring the possibility for the IMF to issue SDR bonds to the private sector and then, if feasible, issuing them to mobilise funds in times of systemic crisis.

Pragmatism is of the essence. This was a general theme that came up, for example, in Andrew Sheng’s discussion of the adaptive Chinese growth model. We have to try things carefully and see how they work.

9. We have to keep our hopes in check. There are going to be new crises that we have not anticipated. And, despite our best efforts, we could have old-type crises again. That was a theme in Adair Turner’s discussion of credit cycles. Can we, using agency theory and the right regulations, get rid of credit cycles? Or is it basic human nature that, no matter what we do, they will come back in some form?

I don't know how many people will buy into these conclusions; I can think of any number of economists who will not. For example, I don't see number 2 as particularly useful, given that many of the problems we have faced over the last few years are due to state actions, so even more state involvement in the economy doesn't seem like a good response. And as for 7, I do worry about what Stiglitz would see as the "right micro foundations" for macro.

Blanchard sees these ideas as the beginning of a conversation. Well, it could be a conversation which, as is normal for macro, produces much heat but very little light.

Economists and policy analysts opposed to price gouging laws have relied on the simple logic of price controls: if you cap price increases during an emergency, you discourage conservation of needed goods at exactly the time they are in high demand. Simultaneously, price caps discourage extraordinary supply efforts that would help bring goods in high demand into the affected area. In a classic case of unintended consequences, the law harms the very people whom lawmakers intend to help. The logic of supply and demand, so clear to economists, has had little effect on price gouging policies.
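The logic is just the algebra of a binding price ceiling. A minimal sketch with made-up linear demand and supply curves:

```python
# A binding price ceiling with linear demand and supply.
# Curves are hypothetical: Qd = 100 - 2P, Qs = -20 + 4P.

def demand(p):
    return 100 - 2 * p

def supply(p):
    return max(0, -20 + 4 * p)

# Market clears where demand equals supply: 100 - 2P = -20 + 4P => P = 20.
p_star = 120 / 6
q_star = demand(p_star)

# An anti-gouging cap set below the market-clearing price.
cap = 15
shortage = demand(cap) - supply(cap)

print(f"market price {p_star}, quantity {q_star}")     # 20.0, 60.0
print(f"at cap {cap}: demand {demand(cap)}, supply {supply(cap)}, "
      f"shortage {shortage}")                          # 70, 40, shortage 30
```

Buyers want 70 units at the capped price but only 40 are supplied: conservation is discouraged on the demand side and extraordinary supply efforts on the supply side, which is the unintended consequence the paragraph describes.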

Diane Coyle, author of The Economics of Enough, talks with EconTalk host Russ Roberts about the future and the ideas in her book. Coyle argues that the financial crisis, the entitlement crisis, and climate change all reflect a failure to deal with the future appropriately. The conversation ranges across a wide range of issues including debt, the financial sector, and the demographic challenges of an aging population that is promised generous retirement and health benefits. Coyle argues for better measurement of the government budget and suggests ways that the political process might be made more effective.

Friday, 18 March 2011

A new working paper on "Simple models of a human-capital based firm: a reference point approach" is available below. The abstract reads,

We apply the reference point approach to contracts to the modelling of a human-capital based firm. First a model of firm scope is offered which argues that the organisation of a human-capital based firm depends on the “types” of human capital involved. A homogeneous group of human capital leads to a different organisational form than a heterogeneous group. Second a simple model of a human-capital based firm is discussed. Three organisational forms are considered: an investor owned firm, a labour owned firm and a market transaction involving the use of an independent contractor. Results are given that show when each of these forms is optimal. The effects of a firm’s size and scope on organisation are considered, as is the question of why there are conversions to investor ownership.

it makes consumers better off by lowering the prices they pay. Obvious really. Well, not if you are a member of the Metro-DC Democratic Socialists of America's steering committee. Mark Perry at the Carpe Diem blog gives us this example of socialist economic thinking. The quote comes from the editorial "Walmart's Arrival a Bad Deal for District," which appears in the current edition of the Dupont Current (p. 11), a neighborhood paper in Washington, D.C.

So we force consumers (workers) to pay higher prices than they otherwise would by stopping Walmart from introducing competition into the local area. And this helps workers how?

Thursday, 17 March 2011

Sue Kedgley’s maths are on a par with her economics. From an article at Stuff

Kedgley says the report failed to address the central issue of lack of competition in the domestic market.

"It doesn't tell us how the price of milk is set. Farmers say they receive less than 30 per cent of the price of milk, but it fails to shed any light on what makes up the other 60 per cent," she says.

Apart from the 100 - 30 = 60 bit (the remainder is, of course, 70 per cent), Kedgley should realise that the price of milk in New Zealand will be the world price. For a tradable good like milk the market is the world market, and the price is set by supply and demand conditions in that market. So the correct question for Sue Kedgley to ask is: are we paying the world price?

Spectators at February's Daytona 500 in Florida were handed green flags to wave in celebration of the news that the race's stock cars now use gasoline with 15 percent corn-based ethanol. It was the start of a seasonlong television marketing campaign to sell the merits of biofuel to Americans.

On the surface, the self-proclaimed "greening of NASCAR" is merely a transparent (and, one suspects, ill-fated) exercise in "greenwashing" for the sport. But the partnership between a beloved American pastime and the biofuel lobby also marks the latest attempt to sway public opinion in favor of a truly irresponsible policy.

and continues,

The United States spends about $6 billion a year on federal support for ethanol production through tax credits, tariffs, and other programs. Thanks to this financial assistance, one-sixth of the world's corn supply is burned in American cars. That is enough corn to feed 350 million people for an entire year.

Government support of rapid growth in biofuel production has contributed to disarray in food production. Indeed, as a result of official policy in the United States and Europe, including aggressive production targets, biofuel consumed more than 6.5 percent of global grain output and 8 percent of the world's vegetable oil in 2010, up from 2 percent of grain supplies and virtually no vegetable oil in 2004.

The results of all of this?

This year, after a particularly bad growing season, we see the results. Global food prices are the highest they have been since the United Nations started tracking them in 1990, pushed up largely by increases in the cost of corn. Despite the strides made recently against malnutrition, millions more people will be undernourished than would have been the case in the absence of official support for biofuels.

Why would anyone back such a policy?

Biofuels were initially championed by environmental campaigners as a silver bullet against global warming. They started to change their minds as a stream of research showed that biofuels from most food crops did not significantly reduce greenhouse gas emissions – and in many cases, caused forests to be destroyed to grow more food, creating more net carbon-dioxide emissions than fossil fuels.

Some green activists supported mandates for biofuel, hoping they would pave the way for next-generation ethanol, which would use non-food plants. That has not happened.

Today, it is difficult to find a single environmentalist who still backs the policy. Even former U.S. Vice President and Nobel laureate Al Gore—who once boasted of casting the deciding vote for ethanol support—calls the policy "a mistake." He now admits that he supported it because he "had a certain fondness for the [corn] farmers in the state of Iowa"—who, not coincidentally, were crucial to his 2000 presidential bid.

It is refreshing that Gore has now changed his view in line with the evidence. But there is a wider lesson. A chorus of voices from the left and right argue against continued government support for biofuel. The problem, as Gore has put it, is that "it's hard once such a program is put in place to deal with the lobbies that keep it going."

So rent seeking is the reason for the continuation of a very bad policy. Again politics trumps economics, and millions of people, all of them suffering needlessly, pay the price.

Wednesday, 16 March 2011

[...] one point about nuclear power is beyond dispute: it always receives substantial subsidy from government. This consists of both direct payments toward the costs of building plants, along with insurance against full liability for accidents.

So a simple way to evaluate competing claims over safety is to eliminate both kinds of subsidy and find out whether the private sector really thinks nuclear power is profitable when investors bear all construction and insurance costs.

Markets provide useful information when they are allowed to work free of government interference. I do wonder just how many nuclear plants would be built in a truly free market.

MOGADISHU, March 13 (Reuters) - "Somali pirates said they would lower some of their ransom demands to get a faster turnover of ships they hijack in the Indian Ocean. Armed pirate gangs, who have made millions of dollars capturing ships as far south as the Seychelles and eastwards towards India, said they were holding too many vessels and needed a quicker handover to generate more income."

If you charge too much, people just won't buy. I do wonder what the own-price elasticity of demand is for hijacked ships.
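For what it's worth, the calculation the pirates are implicitly doing is the standard one. A minimal sketch of the midpoint (arc) elasticity formula, using entirely hypothetical ransom and turnover figures:

```python
def arc_elasticity(q1, q2, p1, p2):
    """Midpoint (arc) own-price elasticity of demand: the percentage
    change in quantity over the percentage change in price, each
    computed relative to the midpoint of the two observations."""
    pct_q = (q2 - q1) / ((q1 + q2) / 2)
    pct_p = (p2 - p1) / ((p1 + p2) / 2)
    return pct_q / pct_p

# Hypothetical figures: ransoms fall from $5m to $4m and the number
# of ships handed back per year rises from 20 to 30.
e = arc_elasticity(20, 30, 5.0, 4.0)
print(round(e, 2))  # → -1.8, i.e. elastic: |e| > 1
```

If demand really is elastic (|e| > 1), cutting the ransom raises total revenue, which is exactly the logic the pirates describe.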

We exploit a unique combination of administrative sources and survey data to study the match between firms and managers. The data includes manager characteristics, such as risk aversion and talent; firm characteristics, such as ownership; detailed measures of managerial practices relative to incentives, dismissals and promotions; and measurable outcomes, for the firm and for the manager. A parsimonious model of matching and incentive provision generates an array of implications that can be tested with our data. Our contribution is twofold. We disentangle the role of risk-aversion and talent in determining how firms select and motivate managers. In particular, risk-averse managers are matched with firms that offer low-powered contracts. We also show that empirical findings linking governance, incentives, and performance that are typically observed in isolation, can instead be interpreted within a simple unified matching framework.

Such results make sense. If markets for managers are in any way efficient, then this matching of managers to firms is exactly what we would expect.

Robert Townsend of MIT and the Consortium on Financial Systems and Poverty talks with EconTalk host Russ Roberts about development and the role of financial institutions in growth. Drawing on his research, particularly his surveys of households in Thailand, Townsend argues that both informal networks and arrangements and formal financial institutions play important roles in dealing with risk. Along the way, he discusses the role of microfinance in poor countries and the potential for better financial arrangements to lead to higher growth and the accumulation of wealth.

Monday, 14 March 2011

In this audio from VoxEU.org Assaf Razin of Cornell University and Tel Aviv University talks to Romesh Vaitilingam about his book, ‘Migration and the Welfare State: Political-Economic Policy Formation’, which explores implications of the observation that open immigration cannot co-exist with a strong safety net, and policies to resolve intra- and intergenerational conflicts over immigration policies and the generosity of the welfare state.

Mark Perry at the Carpe Diem blog gives this nice example of a simple truth, that competition aids the consumer:

This was probably inevitable. Faced with all of the competition from Bolt Bus, Mega Bus, Chinatown Bus, DC2NY, Vamoose, etc. for cheap bus fares between cities like Washington, D.C. and NYC, the long-time industry leader Greyhound had to match the "predatory" fares and "cutthroat competition" of its new, upstart rivals.

As the graphic above shows, Greyhound is now offering $15 fares between DC and NYC on the Uncommon Transport website, which is less than 50% of the "standard fare" of $35 listed on Greyhound's regular website for DC to NYC. And for a route of approximately the same distance - DC to Charleston, WV (250 miles) - but without the intense competition of the 225-mile DC-NYC route, the one-way Greyhound fare is $109.
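The gap is even starker on a per-mile basis. A quick calculation using the fares and distances quoted above:

```python
# Per-mile fares from the figures quoted above.
dc_nyc = 15 / 225          # competitive DC-NYC route: ~$0.067 per mile
dc_charleston = 109 / 250  # uncontested DC-Charleston route: ~$0.436 per mile

ratio = dc_charleston / dc_nyc
print(round(dc_nyc, 3), round(dc_charleston, 3), round(ratio, 1))
# → 0.067 0.436 6.5
```

Roughly six and a half times the per-mile price on the route without competitors.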

The lesson to be taken from this is that competition - even "cutthroat" competition - is the consumer's best friend, and often the best regulator of a market. This latter point often seems to be overlooked in policy circles. Introducing competition regulates a market better than any regulator can.

Sunday, 13 March 2011

The previous posting on More on the productivity paradox highlighted the problem of measuring the modern, new or "knowledge economy". Much time and effort is expended by many national and international organisations in attempts to measure the economy or economies of the world. While the measuring of the ‘standard’ economy is funny enough, when we move to the ‘knowledge economy’ things go from the mildly humorous to the outright hilarious. Most attempts to measure, or even define, the information or knowledge economy border on the farcical: the movie version should be called "Mr Bean(counter) Measures the Economy".

There are substantial challenges, at both the theoretical and the methodological level, to be overcome in any attempt to measure the knowledge economy. A more consistent set of definitions is required, as are more robust measures derived from theory rather than from whatever data happens to be currently or conveniently available. In order to identify the size and composition of the knowledge-based economy, one inevitably faces the issue of quantifying its extent and composition. Economists and national statistical organisations are naturally drawn to the workhorse ‘System of National Accounts’ as a source of such data. Introduced during World War II as a measure of wartime production capacity, the change in (real) Gross Domestic Product (GDP) has become widely used as a measure of economic growth. However, GDP has significant difficulties in interpretation and usage (especially as a measure of wellbeing), which have led to the development of both ‘satellite accounts’ - additions to the original system to handle issues such as the ‘tourism sector’, ‘transitional economies’ and the ‘not-for-profit sector’ - and alternative measures, for example the Human Development Index and Gross National Happiness. GDP is simply a gross tally of products and services bought and sold, with no distinction between transactions that add to wellbeing and those that diminish it; it assumes, by definition, that every monetary transaction adds to wellbeing.

Organisations like the Australian Bureau of Statistics and the OECD have adopted certain implicit or explicit definitions, typically of the Information Economy type, and mapped these ideas into a strong emphasis on the impacts and consequences of ICTs. The website (http://www.oecd.org/sti/information-economy) for the OECD’s Information Economy Unit states that it:

“[...] examines the economic and social implications of the development, diffusion and use of ICTs, the Internet and e-business. It analyses ICT policy frameworks shaping economic growth productivity, employment and business performance. In particular, the Working Party on the Information Economy (WPIE) focuses on digital content, ICT diffusion to business, global value chains, ICT-enabled off shoring, ICT skills and employment and the publication of the OECD Information Technology Outlook.”

Furthermore, the OECD’s Working Party on Indicators for the Information Society has

“[...] agreed on a number of standards for measuring ICT. They cover the definition of industries producing ICT goods and services (the “ICT sector”), a classification for ICT goods, the definitions of electronic commerce and Internet transactions, and model questionnaires and methodologies for measuring ICT use and e-commerce by businesses, households and individuals. All the standards have been brought together in the 2005 publication, Guide to Measuring the Information Society [ . . . ]” (http://www.oecd.org/document/22/0,3343,en_2649_201185_34508886_1_1_1_1,00.html).

The whole emphasis is on ICTs. For example, the OECD’s “Guide to Measuring the Information Society” has chapter headings that show that their major concern is with ICTs. Chapter 2 covers ICT products; Chapter 3 deals with ICT infrastructure; Chapter 4 concerns ICT supply; Chapter 5 looks at ICT demand by businesses; while Chapter 6 covers ICT demand by households and individuals.

As will be shown below, several authors have discussed the requirements for, and problems with, the measurement of the knowledge/information economy. As noted above, most of the data on which measures of the knowledge economy are based comes from the national accounts of the various countries involved. This raises the question of whether those accounts are suitably designed for the purpose, and a number of authors suggest that in fact they are not the appropriate vehicle for this task. Peter Howitt argues that:

“[...] the theoretical foundation on which national income accounting is based is one in which knowledge is fixed and common, where only prices and quantities of commodities need to be measured. Likewise, we have no generally accepted empirical measures of such key theoretical concepts as the stock of technological knowledge, human capital, the resource cost of knowledge acquisition, the rate of innovation or the rate of obsolescence of old knowledge.” (Howitt 1996: 10).

Howitt goes on to make the case that because we cannot correctly measure the inputs to, and the outputs of, the creation and use of knowledge, our traditional measures of GDP and productivity give a misleading picture of the state of the economy. He further claims that the failure to develop a separate investment account for knowledge, in much the same manner as we do for physical capital, results in much of the economy's output being missed by the national income accounts.
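Howitt's point about a separate investment account can be made concrete. For physical capital, statistical agencies build capital stocks with the perpetual inventory method; an analogous knowledge account would capitalise R&D-type spending with its own, presumably much higher, obsolescence rate. A minimal sketch, with purely illustrative flows and depreciation rates:

```python
def perpetual_inventory(investment, depreciation_rate, k0=0.0):
    """Capital stock via the perpetual inventory method:
    K_t = (1 - delta) * K_{t-1} + I_t."""
    k = k0
    stocks = []
    for i in investment:
        k = (1 - depreciation_rate) * k + i
        stocks.append(k)
    return stocks

# Illustrative: the same flow of spending treated as physical capital
# (delta = 5%) versus as knowledge that obsolesces quickly (delta = 20%).
spending = [100] * 10
physical = perpetual_inventory(spending, 0.05)
knowledge = perpetual_inventory(spending, 0.20)
print(round(physical[-1], 1), round(knowledge[-1], 1))  # → 802.5 446.3
```

The mechanics are trivial; the hard part, as Howitt stresses, is that for knowledge neither the investment flow nor the obsolescence rate is observed, so any such account rests on assumptions rather than measurement.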

Carter (1996) identifies six problems in measuring the knowledge economy:

The properties of knowledge itself make measuring it difficult,

Qualitative changes in conventional goods: the knowledge component of a good or service can change, making it difficult to evaluate ‘levels of output’ over time,

Changing boundaries of producing units: for firms within a knowledge economy, the boundaries between firms and markets are becoming harder to distinguish,

Changing externalities and the externalities of change: spillovers are increasingly important in a knowledge economy,

Distinguishing ‘meta-investments’ from the current account: some investments are general-purpose investments in the sense that they allow all employees to be more efficient,

Creative destruction and the ‘useful life’ of capital: knowledge can become obsolete very quickly and as it does so the value of the old stock drops to zero.

Carter argues that these issues make it problematic to measure knowledge at the level of the individual firm, and consequently at the national level as well: the individual firms' accounts are the basis for the aggregate statistics, so any inaccuracies in the firms' accounts will compromise the national accounts.

Haltiwanger and Jarmin (2000) examine the data requirements for the better measurement of the information economy. They point out that changes are needed in the statistical accounts which countries use if we are to deal with the information/knowledge economy. They begin by noting that improved measurement of many “traditional” items in the national accounts is crucial if we are to understand fully Information Technology’s (IT’s) impact on the economy. It is only by relating changes in traditional measures such as productivity and wages to the quality and use of IT that a comprehensive assessment of IT’s economic impact can be made. For them, three main areas related to the information economy require attention:

The investigation of the impact of IT on key indicators of aggregate activity, such as productivity and living standards,

The impact of IT on labour markets and income distribution and

The impact of IT on firm and industry structures.

Haltiwanger and Jarmin outline five areas where good data are needed:

Measures of the IT infrastructure,

Measures of e-commerce,

Measures of firm and industry organisation,

Demographic and labour market characteristics of individuals using IT, and

Price behaviour.

Moulton (2000) asks what improvements we can make to the measurement of the information economy. In Moulton's view, additional effort is needed on price indices, and better concepts and measures of output are needed for financial services, insurance services and other "hard-to-measure" services. Just as serious are the problems of measuring changes in the real output and prices of the industries that intensively use computer services. In some cases output, even if defined, is not directly priced and sold but takes the form of implicit services which at best have to be indirectly measured and valued, and how to do so is not obvious.

In the information economy, additional problems arise. The provision of information is a service which in some situations is provided at little or no cost via media such as the web, so on the web there may be less of a connection between information provision and business sales. The dividing line between goods and services becomes fuzzier in the case of e-commerce: when Internet prices differ from those of brick-and-mortar stores, do we need different price indices for the different outlets? The information economy may also affect the growth of business-to-consumer sales, new business formation and cross-border trade, phenomena that standard government surveys may not fully capture. Meanwhile, the availability of IT hardware and software means that the variety and nature of the products being provided change rapidly.

Moulton also argues that the measures of the capital stock used need to be strengthened, especially for high-tech equipment. He notes that one issue with measuring the effects of IT on the economy is that IT often enters the production process in the form of capital equipment. Much of the data entering inventory and cost calculations is rather meagre and needs to be expanded to improve capital stock estimates. Another issue with the capital stock measure is that a number of the components of capital are not completely captured by current methods, intellectual property being an obvious example: research and development and other intellectual property should be treated as capital investment, though they currently are not. Finally, Moulton argues that the increased importance of electronic commerce means that the economic surveys used to capture its effects need to be expanded and updated.

In Peter Howitt’s view there are three main measurement problems for the knowledge economy:

The “knowledge-input problem”. That is, the resources devoted to the creation of knowledge are underestimated by standard measures.

The “knowledge-investment problem”. The output of knowledge resulting from formal and informal R&D activities is typically not measured.

The “obsolescence problem”. No account is taken of the depreciation of the stock of knowledge (and physical capital) due to the creation of new knowledge.

To deal with these problems Howitt calls for better data. But it is not clear that better data alone is the answer, either to Howitt's problems or to the other issues outlined here. Without a better theory of what the "knowledge economy" is, and the use of that theory to guide changes to the whole national accounting framework, it is far from obvious that much improvement on the current situation can be expected.

One simple theoretical question is: to which industry or industries, and/or sector or sectors, of the economy can we tie knowledge/information production? Several problems arise when considering this question. One is that the "technology" of information creation, transmission and communication pervades all human activities, and so cannot fit easily into the national accounts categories. It is language, art, shared thought, and so on; it is not just production of a given quantifiable commodity. Another issue is that because ICT exists along several different quantitative and qualitative dimensions, production cannot simply be added up. In addition, if much of the knowledge in society is tacit, known only to individuals, then it may not be possible to measure it in any meaningful way. And if knowledge is embedded in an organisation via organisational routines, then again it may not be measurable. Organisational routines may allow the knowledge of individual agents to be efficiently aggregated, much as markets aggregate information, even though no one person has a detailed understanding of the entire operation. In this sense the organisation "possesses" knowledge which may not exist at the level of any individual member of the organisation. Indeed if, as Hayek can be interpreted as saying, much of the individual knowledge used by the organisation is tacit, it may not even be possible for one person to obtain the knowledge embodied in a large corporation.

As noted above, Carter (1996) emphasises that it is problematic to measure knowledge at the national level in part because it is difficult to measure knowledge at the level of the individual firm. Part of the reason for this is that none of the orthodox theories of the firm offers a theory of the "knowledge firm" of the kind needed to guide our measurement.

Thus many of the measurement problems of the "knowledge economy" are rooted in the fact that we do not have a good theory of the "knowledge economy" or of the "knowledge firm". Without such theories, calls for better data miss the point: "better" data collection alone is not going to improve the measurement of the "knowledge economy".