Friday, 28 September 2012

When thinking about alumni and university governance I went back and read the relevant sections of Henry Hansmann's "The Ownership of Enterprise". Hansmann makes a couple of interesting observations about university alumni. The first is that (partial) control by alumni makes sense if there is a ranking of universities based on the perceived quality of students. Top-tier universities would have market power and could charge a very high fee for entry. A profit-maximising university would do this, but a university controlled, at least in part, by its alumni is less likely to do so, as the alumni do not want to charge themselves a monopoly price. The alumni may have children or grandchildren going to the university and would want to keep the cost of attending the university 'low' - or at least lower than it otherwise would be. Such an advantage of alumni control is less of an issue here in New Zealand given the amount of government control over universities and their funding.

The second point made by Hansmann relates to the use of "implicit loans" from the university to its students. The markets for loans for the acquisition of human capital are inadequate for obvious reasons. One way around this is for universities to provide a crude substitute for these loans. Universities supply education to many students at below cost in return for an implicit commitment on the part of the students to "repay" their loan through donations during their life after university. One result of the government supplying a system of loans, grants or loan guarantees is that there are fewer, or no, implicit loans to be repaid. This may be one reason for the low level of alumni involvement in universities in New Zealand.

The cost to the university of alumni involvement, for either of the reasons above, is that the alumni will want a share in the control of the university. Thus if the universities wish to gain even more financially from their alumni, as they seem to want to do, then they will have to give a larger amount of control over to the alumni.

As noted in the previous post on this subject, the alumni can have the incentives and knowledge to make them a sensible class of patrons to give control over to. Not that the universities or the government are likely to see it that way!

Wednesday, 26 September 2012

In this audio from VoxEU.org Diane Coyle talks to Viv Davies about her recent edited volume on 'What's the use of economics?' They discuss what economists need to bring to their jobs and the ways in which university education could be improved to better fit economics graduates for the real world.

Do newspapers provide a public good that wouldn't exist in their absence? It's that question that should determine our reaction to David Leigh's proposal for a £2 per month levy on broadband to subsidize newspapers.

There is some evidence to support him. When the Cincinnati Post closed in 2007, voter turnout fell and incumbent councillors were more likely to be re-elected, suggesting that the decline of even small newspapers worsens democracy.

Unless there is further information about causation here, isn't this just a case of correlation rather than causation? Could the relationship not be entirely spurious?

As to a justified subsidy Chris points out that it is his view that

[...] the government should subsidize left-handed cat-loving guitar-playing economists.

Clearly this is rubbish; there is absolutely no justification for such a subsidy. A correct and proper subsidy would go to right-handed cat-loving non-guitar-playing economists!

What explains the dominance of the US in elite higher education? Shailendra Mehta offers a novel explanation: the role of alumni. Graduates of US colleges and universities tend to identify strongly with their institutions and care deeply about their school’s reputation and ranking. Only in the US do alumni play such a strong role, not only in financial support (often connected with athletics), but governance.

[N]o group cares more about a university’s prestige than its alumni, who gain or lose esteem as their alma mater’s ranking rises or falls.

Indeed, alumni have the most incentive to donate generously, and to manage the university effectively. Given their intimate knowledge of the university, alumni are also the most effective leaders. Through alumni networks, board members can acquire information quickly and act upon it without delay.

All great universities are nonprofit organizations, created to administer higher education, which benefits society as a whole. But US universities found a way to integrate competition’s benefits into the European concept of nonprofit, or so-called eleemosynary, corporations. The lack of profit does not diminish an alumni-dominated board’s incentive to compete for prestige by, for example, hiring distinguished faculty, accepting meritorious students, and striving for athletic or artistic achievement.

If Mehta's idea is right, should we strengthen the role that alumni play in the governance of New Zealand's universities? Alumni have advantages as major players in university governance. 1) They have an interest in the success of the university and in maintaining its reputation: if the uni's reputation falls, so does the value of being a graduate of that uni. 2) They have knowledge of the university and its workings. In areas where expertise is a major component of producing the final product, inherent knowledge of the area is a positive in terms of getting better organisational outcomes. Alumni can have such knowledge of their institutions.

Of course, another group with much the same two attributes is the current academic staff of the university. And what we see in terms of governance is involvement of both groups, but should the alumni have a greater say?

Tuesday, 25 September 2012

On the one hand, as Brian points out, the decision is easy if you subscribe to the efficiency-of-use argument. The three big stadiums in Auckland (Eden Park, North Harbour Stadium and Mt Smart Stadium) all depend on local government to a greater or lesser degree. Indeed, the issues paper released by Regional Facilities Auckland (RFA) in June (linked here) indicates that Eden Park breaks even each year and has a large debt to service of $55m post-Rugby World Cup, Mt Smart is facing an upgrade bill of some $60m and requires local government funding each year, and North Harbour Stadium is very much dependent on local government funding to stay viable. There appears to be an argument, on the surface, that there are potential efficiencies to be gained by rationalising their use (the 'collaborative strategies' option presented by RFA).

I guess my first question about this is, What does break even mean for Eden Park? What gets counted in such a calculation and what doesn't? How much local government support does the park get? Or, if it were a private business, would it still be in business?

My other question comes from the EconTalk interview with Roger Noll of Stanford University. Noll noted that

[ ... ] an arena that has multiple uses, say, it's going to have a basketball team and/or a hockey team, has other potential uses, like concerts and tractor-pulls, all kinds of stuff. And so a well-managed arena can be occupied 250-300 nights a year. And they can break even.

So my question would be, If around 300 nights of use a year is the break-even point, can any stadium, even a multi-use one, be used 300 times a year in Auckland?

Robert Frank of Cornell University and EconTalk host Russ Roberts debate the merits of a large increase in infrastructure spending. In the summer of 2012, Frank and Roberts were interviewed by Alex Blumberg of NPR's Planet Money. That interview was trimmed to ten minutes for a Planet Money podcast. This is the entire conversation. Frank argues that a trillion-dollar increase in infrastructure spending, where the projects are decided by a bipartisan commission, would put people back to work and repair a near-failing system at a time when it is cheap to repair it and cheap to fund those repairs. Roberts disagrees with virtually every piece of Frank's argument. This lively conversation covers fundamental disagreements over fiscal policy, the proper role for government, and the political process.

Sunday, 23 September 2012

[...] regardless it backs my view that the impact of smaller class sizes is minimal, unless it is a massive difference. In other words a size of 15 will make a big difference compared to 30, but a class of 25 compared to a class of 27 will not.

The following few pages come from the book Mostly Harmless Econometrics: An Empiricist's Companion by Joshua D. Angrist and Jorn-Steffen Pischke (Princeton: Princeton University Press, 2009) and show that research does in fact indicate that class size matters for educational outcomes. They also make clear just how problematic getting good results can be. (Enlarge to make more readable.)
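One of the identification strategies Angrist and Pischke discuss for the class-size question is Angrist and Lavy's use of Maimonides' rule: Israeli schools split classes once enrollment passes a cap of 40, so predicted class size jumps discontinuously with enrollment, giving quasi-random variation in class size. A minimal sketch of the predicted class-size function (the cap is the rule's; the code itself is my own illustration, not theirs):

```python
import math

def predicted_class_size(enrollment, cap=40):
    """Average class size when a cohort splits into ceil(enrollment/cap) classes."""
    n_classes = math.ceil(enrollment / cap)
    return enrollment / n_classes

# Class size rises with enrollment until the cap forces a split,
# then drops sharply - the discontinuity used for identification.
for e in (38, 40, 41, 80, 81):
    print(e, round(predicted_class_size(e), 1))
```

The sharp drop at 41 students (one class of 40 becomes two of 20.5) is what lets class size be compared across otherwise similar schools.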

Stephen Ziliak and D. N. McCloskey have sharply criticized the prevailing use of significance tests. Their work has, in turn, come under vigorous attack. The vehemence of the debate may induce readers to wrongly dismiss it as a “he said-she said” debate, or else to take sides in an unbending way that does not do justice to valid points raised by the other side. This paper aims at a more balanced reading. While Ziliak and McCloskey claim that a substantial majority of economists who use significance tests confuse statistical with substantive significance, or commit the logical error of the transposed conditional, I argue that such errors are much less frequent than they claim, though still much too pervasive. They also argue that since significance tests focus on the existence of an effect rather than on its size, the tests do not answer scientific questions. I respond with counter-examples. Ziliak and McCloskey also complain that significance tests ignore loss functions. I argue that loss functions should be introduced only at a later stage. Ziliak and McCloskey are correct, however, that confidence intervals deserve much more emphasis. The most valuable message of their work is that significance tests should be treated less mechanically.

Econometricians have been claiming proudly since World War II that significance testing is the empirical side of economics. In fact today most young economists think that the word “empirical” simply means “collect enough data to do a significance test”. Tjalling Koopmans’s influential book of 1957, Three Essays on the State of Economic Science, solidified the claim. A century of evidence after Student’s t-test points strongly to the opposite conclusion. Against conventional econometrics we argue that statistical significance is neither necessary nor sufficient for proving commercial, human, or scientific importance. A recent comment by Thomas Mayer, though in parts insightful, does nothing to alter conclusions about the logic and evidence which we and others have assembled against significance testing. Let’s bury it, and get on to empirical work that actually changes minds.
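Ziliak and McCloskey's central point, that statistical significance is not substantive (economic) significance, is easy to see with a toy calculation. The numbers below are my own illustrative assumptions: a $1 effect against a $10,000 standard deviation is economically negligible, yet it becomes "statistically significant" once the sample is large enough.

```python
import math

def t_statistic(diff, sd, n):
    """Two-sample t statistic for equal-sized groups with a common sd."""
    return diff / (sd * math.sqrt(2.0 / n))

diff, sd = 1.0, 10_000.0  # a $1 effect: substantively trivial
for n in (1_000, 1_000_000, 1_000_000_000):
    t = t_statistic(diff, sd, n)
    # |t| > 1.96 passes the conventional 5% two-sided test for large n
    print(f"n = {n:>13,}  t = {t:6.3f}  significant at 5%: {abs(t) > 1.96}")
```

The effect never changes; only the sample size does. A mechanical reading of the t-test would "find" an effect at n = 1 billion that no one should care about, which is exactly the confusion of statistical with substantive significance at issue.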

It can be argued, with some justification I would say, that the high point of neoclassical economics is Gerard Debreu's "Theory of Value". I have argued before that the firm really doesn't exist in the standard neoclassical model. As Nicolai Foss has noted

"With perfect and costless contracting, it is hard to see room for anything resembling firms (even one-person firms), since consumers could contract directly with owners of factor services and wouldn’t need the services of the intermediaries known as firms".

and as Foss, Lando and Thomsen explain,

"[t]he pure analysis of the market institution leaves almost no room for the firm (Debreu 1959)".

Debreu obviously has a supply side to his model; there are "producers", just not firms.

Today I came across a pdf version of Debreu's book and searched it for the word "firm". I found only two occurrences in the book.

"In so far as Yj represents technological knowledge, it is clear that two production plans separately possible are jointly possible. Alternatively the jth producer can be interpreted as an industry rather than as a firm; then the additivity assumption means that there is free entry for firms into that industry, i.e., no institutional or other barrier to entry."

Interestingly, Debreu says that a "producer" can be an industry. The question this raises for me is, Is the non-discussion of firms in the book accidental? Or did Debreu fully realise that firms, by any serious definition of the word, didn't exist in the neoclassical model, and thus didn't refer to them, content to deal with the abstraction that is a "producer"?

Pradumna B. Rana has an article on Misdiagnosing the Eurozone crisis: Perspectives from Asia available at VoxEU.org. He argues that the Asian financial crisis of the late 1990s shows what can happen when economists misdiagnose a crisis. His article explains that the Eurozone crisis may have been made worse by over-simplifying it as a debt crisis. Rana suggests that the large institutional changes now afoot – the shift from Eurozone I to Eurozone II – are finally addressing the root causes, but they may be too little too late.

Rana also looks at the root causes of the crisis,

The EZ [Eurozone] is experiencing multiple and often overlapping crises. Greece, the southern EZ, and Ireland are experiencing a fiscal crisis mainly because of overspending by the public sector in the form of unsustainable wages and pensions. Ireland and now other countries are facing a banking crisis because of public sector guarantees to banks which financed the property bubble. Most countries in the region are also experiencing a competitiveness crisis vis-à-vis Germany, which has successfully enhanced its economic efficiency through structural reforms. But what were the root causes of the crises and what is being done to address them?

The root causes of the EZ crisis were the flaws in the design of the monetary union (Kirkegaard 2011, Bergsten 2012). The EMU, launched in 1999, comprised the euro (the single currency) and the European Central Bank (ECB) for a common monetary policy. It did not contain a fiscal union, and other institutional mechanisms for coordinating structural policies. Both the Werner Report of the 1970s (European Commission 1970) and the Delors Report of the 1980s (European Council 1989), which served as the blueprint, had developed a three-stage roadmap comprising closer economic coordination among members, binding constraints on member states’ national budgets, and a single currency.

But in their haste and eagerness to accomplish a full and irrevocable European unity, the ‘founding members’ had felt that the two convergence criteria enshrined in the Maastricht Treaty – a 3% limit on annual fiscal deficit and a 60% limit on gross public debt to GDP ratio – would be adequate for the purpose. In practice, these thresholds were neither binding nor fixed. The Eurozone was, therefore, launched as an experiment between a set of countries that were quite diverse and far less integrated than required by the optimum currency theory of Professor Robert Mundell (1961). It was hoped that a monetary union would lead to an economic union. But this did not happen.

The institutional flaws have now been identified and are being fixed. A key design flaw in the Eurozone was the absence of the lender of last resort in government bond markets (De Grauwe 2011, Wyplosz 2011). When a country issues sovereign bonds in its own currency there is an implicit guarantee from the central bank that cash will always be available to pay out the bondholders. The absence of such a guarantee in a monetary union – where bonds are issued in a currency over which individual countries have little control – makes the sovereign bond markets prone to liquidity crisis and contagion, very much like banking systems in the absence of lenders of last resorts.

Given the problems, e.g. moral hazard issues, that a lender of last resort can cause in normal banking markets, one has to ask whether similar problems would not occur in government bond markets.

We think not. The problems underlying the European crisis were institutional. What we are seeing now are mostly short-term fixes, not true solutions to these institutional problems.

The roots of the crisis lie in the difficulty of operating a currency union without centralized fiscal authority. But that’s not all. The problem was made worse by implicit guarantees to markets concerning the sovereign debt of all euro-zone countries, which enabled Greece, Italy, Portugal and Spain to borrow at sharply lower rates than before. This then enabled the dysfunctional political economy in Greece, Italy and Portugal (and to some degree in Spain) to persist with borrowed money and transfers.

Friday, 21 September 2012

Scoop reports that the Employers and Manufacturers Association (EMA) chief executive Kim Campbell has said

"While many economists advise us that our dollar is over-valued, some of this is due in no small part to influences outside our control, eg, debt problems in the Euro-zone and economic slowdown in the US."

There is nothing New Zealand can do about the crisis in Europe or the state of the current account deficit in the U.S. But these things will make the New Zealand dollar look relatively good.

But there are still some things we can do:

"The things that are in our control include re-examining how central and local government can avoid adding to inflationary pressures," says Mr Campbell.
Examples are:
* Freeing up the supply of land at local government level to make building a house more affordable.
* Ensuring tax policy takes account of its impact on monetary policy. For example, any new government spending should be assessed for its impact, both short-term and longer term, on inflation.
* Introducing a Regulatory Responsibility Act to improve the quality of regulation.
* Reducing government and private sector debt where appropriate (high debt drives up interest rates as lenders demand a risk premium) - we need to stay the course."

The state of the exchange rate is at best a signal that something is wrong within the economy, but it isn't the cause of those problems, it is the result. Tinkering with the exchange rate won't fix whatever is wrong with the economy; it will just mask the real problems.

Thursday, 20 September 2012

Asks Diane Coyle at the VoxEU.org website. I think the most obvious answer would be it's useful 'cause it's fun! Generating utility sounds like a useful thing.

To me there are a few odd things in what Coyle has written. For example,

[ ... ] in a 2012 survey of economists in the Government Economic Service – the single biggest employer of economists in the UK – they described the two main areas of their work as the production of briefing material and the preparation of policy advice.

From what I can tell, here in New Zealand government departments want applied statisticians who can tell them that their prejudices are right and have a number to prove it!

Stephen King, Group Chief Economist at HSBC, put it: “Young economists arrive in the financial world with little or no knowledge of how the financial system operates.

This sounds a lot like saying "I've employed a physicist and he doesn't know anything about political philosophy". Would you really be surprised by this? I don't know, but if you want people trained in finance, shouldn't you employ people with degrees in finance and not in economics?

There was a strong consensus on the need to demote the role of theory and promote empiricism.

How much more theory-hating can things get? The amount of theory most econ courses have these days is so small I'm not sure how it could be reduced any further without turning them into applied stats courses. I would argue the other way: we need more theory, and less empirical stuff.

Wednesday, 19 September 2012

The subject of foreign and forced labour exploitation by the Third Reich is not one of meagre proportions. More than 14 million forced labourers passed through the Reich from 1939 to 1945, of whom 4.6 million had been prisoners of war (POWs). Interestingly there is little work on the economic effect of POWs.

POWs were exploited through maximising employment, minimising payments, and the use of discrimination and abuse. My recent study complements the work of Herbert (1997) and Spörer (2001) on the employment of POWs in Nazi Germany (Custodis 2012). It provides more consistent employment data from a wide array of primary and secondary sources. Data from the German Ministry of Labour used by other authors is re-analysed for a first quantification of the POW’s economic contribution to Nazi Germany’s war economy. My results revise POW employment figures upwards and they also show that the exploitation of the POW workforce was more significant than previously thought.

Foreign and POW labour made up a significant proportion of the German workforce. At peak in September 1944, 7.5 million, or approximately one fifth of the aggregate German workforce, were foreign workers or POWs; in the munitions industry and in agriculture in 1944, at peak one third and one fifth were POWs or foreign workers respectively. The foreign workforce was largely transferred to Nazi Germany from occupied territories in Europe and contained a vast array of different groups, namely voluntary and coerced workers from Western Europe, forced labour from Eastern Europe, and concentration camp inmates.

The POW workforce itself was similarly heterogeneous. The initial POW workforce in 1939 consisted mostly of Polish POWs, but by 1942, the great majority was Soviet or French with peak employment of approximately one million respectively. The Italian armistice in September 1943 produced an additional labour source of half a million Italian POWs. They were reclassified as Italian Military Internees (IMIs) to circumvent the 1929 Geneva Convention which forbade POW employment which was excessive, dangerous or directly linked to the war effort.

Four main POW groups can be distinguished based on Germany’s adherence to the Geneva Convention. British and American POWs received treatment mostly complying with the Geneva Convention. French and Belgians could suffer from arduous conditions but on average were treated fairly well; Yugoslavs and IMIs only encountered partial compliance; Poles and Soviets possessed no legal protection whatsoever. Initially, most POWs worked in agriculture, but by 1944, they were increasingly employed in coal mining and the munitions industries. Also, the ideological discrimination largely determined work allocation. While for instance half of all French POWs worked in agriculture in 1944, three quarters of the Soviet POWs worked in industry.

There are rather obvious moral hazard problems with forced labour: productivity isn't likely to be high. Custodis continues,

No consistent work has so far been done on the field of POW productivity. Pfahlmann (1968: 233-234) claims that POWs on average were 80% as productive as German civilians, but he does not account for productivity varying by nationality and over time. Skilled French POWs in agriculture for instance were allegedly 80% to 90% as productive as German civilians (Spörer, 2001: 186), but by the end of 1943 their share in the POW workforce was declining rapidly as that of the Soviet POWs began to rise. The Soviets had died in masses until 1942 when Hitler began to tap them as a labour resource such that by January 1945 they represented almost half of the POW workforce. The relative Soviet POW productivity is contested but appears to have ranged from 45% to 60% of a German civilian (Spörer 2001:186; Streit 1978:215). I produce new productivity calculations accounting for these different productivity estimates and factoring in the changing workforce shares over time and the sharp productivity differences by nationality. My results illustrate that the POWs were not as productive as Pfahlmann had claimed. They were on average 50% to 70% as productive as German civilian workers and productivity decreased over time from 70% in 1942 to 55% in 1945. The decrease is mainly driven by the rising share of Soviet POWs with lower productivity figures, but also frequent reassignments, starvation, mistreatment, and bombings reduced productivity, in particular towards the end of the war.
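The weighted-average calculation behind these aggregate productivity figures can be sketched roughly. The nationality productivities and workforce shares below are my own illustrative numbers, loosely echoing the ranges quoted in the text; they are not Custodis's actual data.

```python
def aggregate_productivity(shares, productivities):
    """Workforce-share-weighted average productivity relative to a German civilian."""
    assert abs(sum(shares.values()) - 1.0) < 1e-9, "shares must sum to 1"
    return sum(shares[g] * productivities[g] for g in shares)

# Illustrative relative productivities by group (fractions of a German civilian)
productivity = {"French": 0.85, "Soviet": 0.50, "Other": 0.70}

# Illustrative workforce shares: the rising Soviet share pulls the aggregate down
shares_1942 = {"French": 0.50, "Soviet": 0.20, "Other": 0.30}
shares_1945 = {"French": 0.25, "Soviet": 0.45, "Other": 0.30}

for year, shares in (("1942", shares_1942), ("1945", shares_1945)):
    print(year, round(aggregate_productivity(shares, productivity), 3))
```

Even with constant productivity within each group, the composition shift alone drives the aggregate figure down over time, which is the mechanism Custodis identifies.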

As to the number of POWs employed, Custodis revises previous employment figures substantially upwards.

Previous studies show two major deficiencies in this regard: First, they omitted existing government statistics and second they neglected POWs who had been counted as civilians. Herbert (1997: 298) claims that nominal peak POW employment in autumn 1944 stood at 1.91 million, but his individual statistics only add up to 1.73 million as he omits Yugoslav and British POW labourers. Also, statistics by Kroener, Mueller and Umbreit (1999: 212) show that the nominal peak occurred in fact in January 1945 with 2.2 million workers. However, the correction for the omission of ‘hidden’ POW workers shows that actual POW employment was significantly higher. Several hundred thousand French and Polish POWs and IMIs were ‘released’ into civilian status between 1942 and 1945 to increase output and productivity as civilian working conditions were not bound by the Geneva Convention. The civilian releases raise POW employment from 1.9 to 2.3 million POWs workers in autumn 1944 and from 2.2 million to a staggering 3 million in January 1945.

Putting the labour force and productivity measures together gives an estimate of the economic contribution of POWs to the Nazi economy.

The POWs at peak made up 5% of the aggregate German workforce. The employment figures can be used to attain an output proxy by assuming that each POW worked a constant amount of days per year with more or less constant working hours per day. Obviously this assumption bears some weaknesses as the majority of POWs were overworked and as working hours and working days per week were far from constant, especially towards the end of the war with Allied bombardment and forced evacuation marches. Still, using this assumption I am able to obtain a minimum base of days worked and avoid upward bias. The resulting sensitivity analysis under different assumptions such as the inclusion or exclusion of the civilian releases and using different sub-datasets yields a range of aggregate man-days worked of 1.7 to 2.3 billion, with the lower bound of 1.8 billion being the most credible result.

And the monetary contribution of POW labour?

Equipped with this output benchmark, the previously attained relative productivity figures and skilled and unskilled civilian wages I then arrive at a range for the monetary contribution of POW labour. The POWs contributed between 1% and 1.5% to GNP every year from 1940 to 1944 and almost 2% per year for three consecutive years from 1942 up until 1944. The contribution of the 7 million foreign and POW workers overall was even greater. Not only was every tenth worker in the Reich foreign or a POW between 1939 and 1944, but the foreign and POW workforce also at peak produced 6% and 7.5% of GNP in 1943 and 1944 respectively and accounted for an average contribution of 4% from 1939 to 1944.
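The accounting chain Custodis describes (employment to man-days, scaled by relative productivity, valued at civilian wages, and expressed as a share of GNP) can be sketched as follows. Every number below, the days worked, the wage, and the GNP figure, is an illustrative placeholder of mine, not Custodis's data; only the shape of the calculation follows the text.

```python
def pow_gnp_share(workers, days_per_year, rel_productivity,
                  civilian_daily_wage, gnp):
    """Value POW labour at civilian wages, adjusted for relative productivity."""
    man_days = workers * days_per_year
    effective_man_days = man_days * rel_productivity
    value = effective_man_days * civilian_daily_wage
    return value / gnp

share = pow_gnp_share(
    workers=2_000_000,         # POW workers (illustrative)
    days_per_year=280,         # assumed constant working days per year
    rel_productivity=0.6,      # 60% of a German civilian (within the quoted range)
    civilian_daily_wage=7.0,   # Reichsmarks per day (placeholder)
    gnp=150_000_000_000,       # Reichsmarks (placeholder)
)
print(f"POW contribution: {share:.1%} of GNP")
```

With these placeholder inputs the chain lands in the 1% to 2% range the column reports, which shows how sensitive the headline share is to the assumed days worked and relative productivity.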

Tuesday, 18 September 2012

Given the debate that I have been having this morning, it seems my view of economics is out of step with all other views of the subject. I see it as a social science with a close relationship to moral and political philosophy, political science, psychology etc. But this, it seems, is a minority view. A minority of one in fact! Everybody else sees the subject as simply being part of business; its sole role is to train better businessmen. (Whatever "better" may mean in this context.)

Of course I would ask, What use is general equilibrium theory or the theory of the firm or time series econometrics or experimental economics or ... to a businessman? Does your local dairy owner really need to know about Granger causality or the second fundamental theorem of welfare economics to run a successful business? I would say not, but it seems I'm wrong. Others believe these to be vitally important for the success of the enterprise.

Now I would argue that a basic knowledge of economics is valuable and useful for all people be they businessmen, school teachers, truck drivers or whatever. What I don't see is why businessmen need more economic knowledge than other people.

Oh well, like on so many things, it looks like I'm wrong yet again :-(

What would Adam "Moral Philosopher" Smith say?

Update: I should have remembered before that I was not the first person to think economics is of little use to businessmen. A. C. Pigou got there well before me, in 1922 in fact:

"[ . . . ] it is not the business of economists to teach woollen manufacturers to make and sell wool, or brewers how to make and sell beer, or any other business men how to do their job. If that was what we were out for, we should, I imagine, immediately quit our desks and get somebody - doubtless at a heavy premium, for we should be thoroughly inefficient - to take us into his woollen mill or his brewery".

Lionel Robbins followed up in 1935 by arguing that running a business is beyond the competence of economists,

"[t]he technical arts of production are simply to be grouped among the given factors influencing the relative scarcity of different economic goods. The technique of cotton manufacture [ . . . ] is no part of the subject-matter of Economics [ . . . ]".

Ronald Coase notes that when he went to the LSE to study for the Bachelor of Commerce, specialising in the Industry Group,

"It will have been noticed that during my two years at LSE I studied a great variety of subjects, devoting therefore very little time to each and inevitably doing no systematic reading. I took no course in economics [...]"

Obviously in Coase's day economics was not seen as vital to business.

Update 2: Ulrich Witt notes that, early in the history of economics in Germany, economic theory was separated from business economics,

"There, economic theory (Volkswirtschaftslehre) and business economics (Betriebswirtschaftslehre) were institutionally segregated as early as at the turn of the century to a degree still unknown today in the Anglo Saxon world. As Lachmann once conjectured, Austrian writers therefore considered the organizational form of entrepreneurial activities to be a topic best left to their business economics fellows."

Paul Tough, author of How Children Succeed, talks with EconTalk host Russ Roberts about why children succeed and fail in school and beyond school. He argues that conscientiousness--a mixture of self-control and determination--can be a more important measure of academic and professional success than cognitive ability. He also discusses innovative techniques that schools, individuals, and non-profits are using to inspire young people in distressed neighborhoods. The conversation closes with the implications for public policy in fighting poverty.

This is the title of a new column at VoxEU.org by Marco Annunziata. Annunziata argues that the U.S. Fed's announcement of a third round of quantitative easing is unlikely to work. Investment and hiring are held back by huge uncertainty over the long-term outlook, and the stimulus provides a monetary bridge over the election gap but little more.

The Fed has recently adopted an extremely aggressive stance:

As signalled in the last released FOMC minutes and in Bernanke’s Jackson Hole speech, the Fed recently launched QE3, its third attempt to boost growth and employment via asset purchases. It will buy US$40 billion worth of mortgage-backed securities (MBS) a month.

It confirmed that Operation Twist (extension of maturities held in its portfolio) will be extended though the end of this year.

In addition, the Fed has extended its forward guidance, indicating it now expects interest rates to remain exceptionally low at least through mid-2015.

Even more importantly, the Fed now “expects that a highly accommodative stance of monetary policy will remain appropriate for a considerable time after the economic recovery strengthens”.

Finally, the statement opens the door to continuing MBS purchases, launching new purchases of other assets (such as US Treasuries), and deploying “other policy tools” until they can achieve a substantial improvement in the labour market outlook in a context of price stability.

Annunziata writes,

The Fed sets its sights straight on the labour market, and stays true to its mantra that there has been no change in the US’ natural rate of unemployment. The statement that the Fed “expects that a highly accommodative stance of monetary policy will remain appropriate for a considerable time after the economic recovery strengthens” sounds at first like an oxymoron. Once the recovery strengthens, there should be no need to maintain an extraordinary degree of monetary accommodation. But the Fed projects that in 2014, with GDP growth running a full percentage point above potential, unemployment will be barely lower than it is now, and only in 2015 it will get closer to 7%.

and continues,

In a recent Vox column Calvo et al. (2012) make a strong case that a persistently higher rate of unemployment might reflect the nature of the financial crisis rather than a higher natural rate of unemployment, and that monetary policy can therefore help bring it back to pre-crisis levels. Their prescription, however, is that the Fed should act in coordination with the Treasury to remove toxic assets and bolster the stock of safe assets; a strategy with a strong structural component, which seems to me absent from the Fed’s current approach.

Investment and hiring are held back by uncertainty over the fiscal picture, which is compounded by the political uncertainty of the November elections. And the uncertainty is substantial for two reasons. First, Republicans and Democrats are on very divergent positions, to the point that the presidential election is being cast as a referendum on small versus big government. Second, the underlying fiscal challenge is substantial; just look at the Congressional Budget Office’s scenarios. Liquidity is not the problem, and more liquidity is unlikely to be the solution. For now, it’s the best we get, and in the short term, it is better than the disappointment we could have seen after the Fed had raised expectations. But we know from the movies that sequels can often have diminishing returns. I think that is even truer for QE.

Monday, 17 September 2012

“There's no doubt about that, and there's never been an exporter that doesn’t want to lower exchange rate, no matter what level” he said.

Mr Joyce said that the dollar “may” hit parity with the US dollar.

“But that'll be because the US is completely in the toilet,” he said.

“Ultimately it's market fundamentals.

“And so ultimately nobody's going to bid the New Zealand dollar beyond what they consider it should be at.

“Now even if it bounced through say for example the quantitative easing that's coming through at the moment, it will come back again.

“Because fundamentally the value of the New Zealand dollar is determined by what the world believes is the future of the New Zealand economy, and if they bid it up too high, then they will look at it and say well actually we've bid it up too high, and we'll bid it down again.”

He's basically right. And when will we get over the mercantilist obsession with exporting that underlies the 'dollar must be lower to help exporters' arguments?

The NBR goes on,

Mr Joyce said the private members bill being introduced by Winston Peters to alter the Reserve Bank’s objectives was a “snake oil solution that would achieve nothing.”

Actually it may achieve something, but it would be all bad.

And,

Mr Peters wants the primary function of the Reserve Bank to be broadened to include other critical macro-economic factors such as the rate of growth and export growth.

So given that you need as many instruments as objectives, what new instruments is Winston going to give the Reserve Bank?
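The instruments-and-objectives point is just the Tinbergen rule: with fewer independent instruments than targets you generically cannot hit them all. A toy sketch in a linear policy model (all coefficients are invented purely for illustration, as is the hypothetical second instrument):

```python
import numpy as np

# Tinbergen rule sketch: targets = A @ instruments, coefficients invented.
# Two targets (inflation, export growth), one instrument (the cash rate):
A_one = np.array([[-0.5],    # inflation response to the cash rate
                  [-0.2]])   # export-growth response to the cash rate
targets = np.array([2.0, 3.0])

_, residual, *_ = np.linalg.lstsq(A_one, targets, rcond=None)
print(residual[0] > 0)  # True: one instrument cannot hit both targets

# Add a second (hypothetical) independent instrument and the system solves:
A_two = np.array([[-0.5, 0.1],
                  [-0.2, 0.6]])
x = np.linalg.solve(A_two, targets)
print(np.allclose(A_two @ x, targets))  # True: both targets hit exactly
```

The point is purely arithmetic: adding objectives to the Reserve Bank Act without adding instruments leaves the system over-determined.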

Sunday, 16 September 2012

Fifty-eight years ago, Harberger (1954) estimated that the costs of monopoly, which resulted from misallocation of resources across industries, were trivial. Others showed that the same was true for tariffs. This research soon led to the consensus that monopoly costs are of little significance—a consensus that persists to this day.

This paper reports on a new literature that takes a different approach to the costs of monopoly. It examines the costs of monopoly and tariffs within industries. In particular, it examines the histories of industries in which a monopoly is destroyed (or tariffs greatly reduced) and the industry transitions quickly from monopoly to competition. If there are costs of monopoly and high tariffs within industries, it should be possible to see those costs whittled away as the monopoly is destroyed.

In contrast to the prevailing consensus, this new research has identified significant costs of monopoly. Monopoly (and high tariffs) is shown to significantly lower productivity within establishments. It also leads to misallocation within industries: Resources are transferred from high- to low-productivity establishments.

From these histories, a common theme (or theory) emerges as to why monopoly is costly. When a monopoly is created, “rents” are created. Conflict emerges among shareholders, managers and employees of the monopoly as they negotiate how to divide these rents. Mechanisms are set up to split the rents. These mechanisms are often means to reduce competition among members of the monopoly. Although the mechanisms divide rents, they also destroy them (by leading to low productivity and misallocation).

Recently, a new literature has been developing that looks within industries to try to estimate the cost of monopolies and tariffs. This literature examines the histories of industries in which a monopoly is destroyed and the industry transitions quickly from monopoly to competition, as well as the histories of industries that rapidly moved the opposite way, from competition to monopoly. The notion is that if there are costs of monopoly, those costs should be destroyed when the monopoly is destroyed. Similarly, when an industry is monopolised, costs should be created. In both cases, costs should be apparent when comparing the industry before and after monopolisation.

Industries in the U.S., such as transportation and the manufacturing of sugar, iron ore and cement, have been studied utilising the new approach. The historical records of these industries show that there are costs of monopoly and tariffs within industries. The studies have shown that monopoly led to, among other costs, the following:

1. Low productivity at each factory. That is, for any given amount of inputs, monopoly meant that less output was produced than under competition.

2. Misallocation of resources between high- and low-productivity factories. That is, monopoly led to resources (capital, labor, etc.) being transferred from productive factories to unproductive factories. Again, this misallocation occurs within an industry and is different from the misallocation that Harberger considered.

These findings are interesting since in standard monopoly theory productive efficiency is maintained while the deadweight loss of monopoly comes from allocative inefficiency.
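Harberger's original number was small because the loss he measured is a welfare triangle that scales with the square of the price distortion. A back-of-the-envelope sketch (this is the standard triangle approximation; all numbers are invented):

```python
# Harberger-style deadweight loss: DWL ~ 0.5 * elasticity * markup^2 * revenue.
def harberger_dwl(revenue, markup, elasticity=1.0):
    """Approximate welfare triangle from a proportional price distortion."""
    return 0.5 * elasticity * markup ** 2 * revenue

revenue = 100.0   # industry revenue ($m), made up
markup = 0.10     # 10% monopoly price distortion, made up
dwl = harberger_dwl(revenue, markup)
print(round(dwl, 3))            # 0.5 ($m)
print(round(dwl / revenue, 4))  # 0.005, i.e. half a percent of revenue
```

Squaring a modest markup is what makes these allocative losses look trivial; the within-industry productivity losses discussed here are first-order rather than squared, which is why they come out so much larger.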

When measured it is found that

[i]n sharp contrast to Harberger’s finding, [ ... ] the welfare costs associated with monopoly and tariffs are not small. The consequence of cases (1) and (2) above is that industry output could have been produced with fewer inputs. One way to measure the loss, then, is to calculate the value of the “wasted” inputs. The histories of these industries show that as monopoly was destroyed in each, productivity at each factory soared. Doubling of productivities in a few years was common. The value of the wasted inputs was as much as 20 percent to 30 percent of industry value added.
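The "wasted inputs" measure in the quote is simple arithmetic: if ending the monopoly doubles productivity, the same output could have been produced with half the inputs. A minimal sketch with invented numbers:

```python
# Value of "wasted" inputs when competition raises TFP (numbers invented).
def wasted_inputs(output, tfp_monopoly, tfp_competition):
    inputs_used = output / tfp_monopoly        # inputs actually employed
    inputs_needed = output / tfp_competition   # inputs needed at the higher TFP
    return inputs_used - inputs_needed

waste = wasted_inputs(output=100.0, tfp_monopoly=1.0, tfp_competition=2.0)
print(waste)  # 50.0: half the input bill was unnecessary under monopoly
```

The 20 to 30 percent of industry value added reported in the literature comes from doing this kind of calculation with the actual industry histories.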

“The rise in the number of exorcists from four to more than 120 over the course of 15 years in Poland is telling,” Father Aleksander Posacki, a professor of philosophy, theology and leading demonologist and exorcist told reporters in Warsaw at the Monday launch of the Egzorcysta monthly.

Ironically, he attributed the rise in demonic possessions in what remains one of Europe’s most devoutly Catholic nations partly to the switch from atheist communism to free market capitalism in 1989.

“It’s indirectly due to changes in the system: capitalism creates more opportunities to do business in the area of occultism. Fortune telling has even been categorised as employment for taxation,” Posacki told AFP.

“If people can make money out of it, naturally it grows and its spiritual harm grows too,” he said, hastening to add authentic exorcism is absolutely free of charge.

Saturday, 15 September 2012

From VoxEU.org comes this short audio in which John Van Reenen talks to Viv Davies about fiscal consolidation during a depression. They discuss Van Reenen's recent work on quantifying the costs and benefits of delaying austerity measures until recovery is clearly established. They also discuss whether austerity has gone too far. Van Reenen presents the case for a more federal Europe.

Fiscal adjustments based upon spending cuts are much less costly in terms of output losses than tax-based ones. In particular, spending-based adjustments have been associated with mild and short-lived recessions, in many cases with no recession at all. Instead, tax-based adjustments have been followed by prolonged and deep recessions. The difference is remarkable in its size and cannot be explained by different monetary policies during the two types of adjustments. In fact, we find that the mild asymmetric (and lagged) response of short-term rates cannot explain the difference between the two types of adjustments: heterogeneity in the response of monetary policy appears with a lag of one to two years, while the heterogeneous response of output growth to EB and TB adjustments is immediate. We find that the heterogeneity in the effects of the two types of fiscal adjustment (tax-based and spending-based) is mainly due to the response of private investment, rather than that of consumption growth. Interestingly, the responses of business and consumers’ confidence to different types of fiscal adjustment show the same asymmetry as investment and consumption: business confidence (unlike consumer confidence) picks up immediately after expenditure-based adjustments.

Thus fiscal consolidations tend to have much more favourable effects on the economy if they are done via spending cuts alone, not via increased taxation.

Here it is: search for the word "Austrian" in your research papers, delete it, and rewrite where necessary. Next ask yourself whether what's left can stand on its own merits. Would your fellow Austrians find it interesting and persuasive without the help of all the winking, nodding, and fraternal handshaking aimed at declaring yourself one of the team, and at thereby evading friendly fire? Would they find the conclusions firmly attached by a series of solid links to some indisputable premises, as they should if you are really a competent praxeologist? Are they likely to find the evidence you supply persuasive, should you be so bold as to offer such? Would they, in short, find merit in what you've written even if they had no reason to suspect that you are one of the gang, or even a fellow traveler? If not, then your paper is good for nothing but joining a club that is, face it, all too willing to have you as a member.

But being able to win over Austrians without declaring yourself one of them is the least of it. The more important question you need to ask is, "Can my stealth-Austrian paper not only sneak past mainstream radars, but do some persuading once across enemy lines?" It surely will not be less persuasive than it would be with all that Austrian flag-waving, since the flags might as well be bright red so far as the rest of the profession is concerned. But if it still can't persuade at least some persons who aren't pals of yours at the Mises Institute or at GMU or at some other Austrian hang-out, what good is it?

Persuading non-Austrian economists with what are, in substance, "Austrian"-style arguments is, admittedly, rough going: all too many so-called economists today are mere technicians who care only for the latest mathematical and statistical gimmicks, and give not a jot for genuine economics. But there are thank goodness also plenty of real economists who aren't Austrians and who don't want to hear about Austrian economics, but are willing to hear any good argument and to be persuaded by it and by evidence that seems to support it. Persuading them is hard too. It's also every economist's job.

I'm not sure just how true it is today that "many so-called economists today are mere technicians who care only for the latest mathematical and statistical gimmicks". At one time I think it was true, but since the 1970s the approach to doing economics taken by most economists has been changing: less maths for maths' sake and more of what Selgin calls "genuine economics". As an example, I have written the following in a working paper on the theory of the firm:

A final point about the models of the firm discussed in this essay is that they highlight a general issue to do with post-1970 microeconomics, that is, the retreat from the use of general equilibrium (GE) models. All the models considered above are partial equilibrium models, but in this regard the theory of the firm is no different from most of the microeconomic theory developed since the 1970s. Microeconomics such as incentive theory, incomplete contract theory, game theory, industrial organisation etc, has largely turned its back, presumably temporarily, on GE theory and has worked almost exclusively within a partial equilibrium framework.

Sifting through the report, BERL and NZ First's recommendation to scrap inflation targeting is based on a fear of "hot money". As they say, between 2002 and 2007, New Zealand saw a significant lift in private borrowing, much of which was sourced from overseas. They state that the lift in foreign lending was due to the higher interest rates in New Zealand.

However, this is only part of the story - a loan only appears when there is both a lender and a borrower. To understand the sharp increase in private debt levels, we need to ask what was driving up private sector demand for credit during this period. When we approach the issue in this way, we can recognise that the demand for credit was not the fault of interest rates being "too high".

As the Reserve Bank and, more recently, the NZIER have stated, the key issue in New Zealand over the past 40 years has been the high real exchange rate (the exchange rate adjusting for changes in prices between countries). Our persistent current account deficits and high level of net liabilities indicate that there is a significant issue in the New Zealand economy that needs to be addressed - but this is not a consequence of the PTA, inflation targeting, or interest rate setting.

The purpose of inflation targeting is to help wage and price setters set expectations of what will happen to the price of goods and services over time. The Reserve Bank controls inflation by announcing its target and adjusting the official cash rate in a way that is consistent with changes in saving and investment behaviour within the economy. The reason interest rates have had to be higher in New Zealand is due to the economic fundamentals that have driven up debt - blaming the Reserve Bank involves getting the explanation the wrong way around!

I note that in the BERL/NZ First report they say

The present Act’s primary function of controlling rising price inflation was critical when it was enacted in 1989. The world has since successfully beaten inflation. Therefore the Act is redundant.

Let us assume, for the sake of the argument, that the world has in fact beaten inflation. But so what? It may have beaten inflation now but what about the future? Isn't the point of the RB's focus on inflation that it remains beaten and we don't get any future periods of inflation?

The BERL/NZ First report goes on to say,

To grow the economy, the Reserve Bank actions could be used to encourage efficient production of more goods and services. This will ensure our businesses and workers are world-competitive.

How can the RB make firms efficient? What can the bank do to encourage efficient production? What does "ensure our businesses and workers are world-competitive" mean and why do we want it? This looks like more of the mercantilist "exports good, imports bad" line of (non)thinking.

Tellingly there is also no indication in the report as to how the RB's function should be changed. But it must be changed and these unknown changes would, apparently, have great benefits for New Zealand.

Thursday, 13 September 2012

The incentives to carmakers can also be weird. The original standards for fuel economy in the 1970s exempted light trucks, which were a small share of the market. That decision was critical to the explosive growth of the S.U.V. In 1973, light trucks amounted to 3 percent of new vehicle sales. Today they account for half.

A new Milken Institute report purports to show that “[t]he benefit from every dollar invested by National Institutes of Health (NIH) outweighs the cost by many times. When we consider the economic benefits realized as a result of decrease in mortality and morbidity of all other diseases, the direct and indirect effects (such as increases in work-related productivity) are phenomenal.” There are so many problems with the study I hardly know where to begin.

Global growth is slowing – especially in advanced-technology economies. This column argues that regardless of cyclical trends, long term economic growth may grind to a halt. Two and a half centuries of rising per-capita incomes could well turn out to be a unique episode in human history.

As multilateral attempts for climate-change mitigation stall, the two-way relationship between trade and climate change is likely to come under further scrutiny. This column explains how liberalised trade has several climate-related consequences. It argues that trade policy could enforce mitigation policies but that multilateral conventions are crucial in preventing undesired protectionist consequences.

UK labour productivity is falling. Today's figures show that total hours worked have risen 1.6% in the last year, whilst the NIESR estimates that GDP fell 0.2% over the same period. GDP per hour is now 4.5% below 2007Q4's level. Had productivity continued to grow at its 1977-2007 rate, it would be 10.8% higher.
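A quick check of the year-on-year arithmetic implied by those two figures:

```python
# Implied annual change in GDP per hour from the figures quoted above.
hours_growth = 0.016   # total hours worked, up 1.6%
gdp_growth = -0.002    # GDP, down 0.2%

productivity_growth = (1 + gdp_growth) / (1 + hours_growth) - 1
print(round(productivity_growth * 100, 2))  # -1.77 (% on the year)
```

More hours producing less output means output per hour fell by roughly 1.8% in a single year, which is how the gap to the pre-crisis trend gets so large.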

Big data has the potential to revolutionize management. Simply put, because of big data, managers can measure, and hence know, radically more about their businesses, and directly translate that knowledge into improved decision making and performance. Of course, companies such as Google and Amazon are already doing this. After all, we expect companies that were born digital to accomplish things that business executives could only dream of a generation ago. But in fact the use of big data has the potential to transform traditional businesses as well.

As many have observed the employment report for August released today was disappointing news, but it really is a continuation of a steady stream of bad employment news that has been the story of this recovery since its beginning. The economy is growing too slowly to increase jobs at a pace that matches the growing population—unlike previous recoveries from deep recessions.

Wednesday, 12 September 2012

John H. Cochrane and Harald Uhlig are interviewed by Gideon Magnus (Chicago PhD) at Morningstar. They talk about the foundations of money, fiscal theory, monetary policy, European debt problems, etc. About the video Cochrane writes

The video starts a little abruptly, as it left out Gideon's thoughtful introduction (it's in the Magazine) and framing question:

Gideon Magnus: I want to discuss the value of money and the idea that money is valued similarly to any other asset. Are there really assets backing money? If so, what are they? John, please explain.

Tuesday, 11 September 2012

Brian Nosek of the University of Virginia talks with EconTalk host Russ Roberts about how incentives in academic life create a tension between truth-seeking and professional advancement. Nosek argues that these incentives create a subconscious bias toward making research decisions in favor of novel results that may not be true, particularly in empirical and experimental work in the social sciences. In the second half of the conversation, Nosek details some practical innovations occurring in the field of psychology, to replicate established results and to publicize unpublished results that are not sufficiently exciting to merit publication but that nevertheless advance understanding and knowledge. These include the Open Science Framework and PsychFileDrawer.

Monday, 10 September 2012

Steven Keen is going around the country right now explaining the many, many evils, as he sees them, of standard economics. That the majority of economists don't agree with Keen will not come as any surprise to most people. One person who has done a great service in debunking Keen's ideas in "Debunking Economics" is Professor Christopher Auld. Back in 2002 he wrote an article, "Debunking Debunking Economics". Unfortunately this article has not been available online for some time, but now Professor Auld has kindly allowed me to make it available once more. A copy is available here for those interested.

For some local comments on Keen's ideas see Anti-Dismal here and here and TVHE here and here.

Immigration can be good for you. Yes Winston, those foreigners can in fact help the economy.

Almost a century and a half after the first large migration wave of the late 19th century, those places where migrants settled in big numbers are significantly better off than those which were virtually untouched by the migration wave. Migration is the only factor related to the period of the migration waves which is still strongly connected to current levels of development. Factors such as the level of income of the county at the end of the 19th century or early 20th century, the level of education of the population, the percentage of black population, the participation of women in the labor force, or whether the county was rural or urban – which would have determined the attractiveness of a county to migrants in the first place – have no bearing whatsoever on the current level of development of US counties. While their influence on a county's wealth and, consequently, on its economic dynamism has disappeared long ago, migration has left an imprint which still affects economic performance.

From VoxEU.org comes this audio in which Xavier Freixas talks to Viv Davies about the recent changes in the European banking resolution regime. They discuss the tension between ex ante incentives and ex post efficiency in banking. Freixas argues that the best way to analyse a bank resolution situation is to think of it as a bargaining game between the bank's shareholders and the treasury.

The Science Media Centre provides an expert round-up of commentary on a new paper finding, unsurprisingly, that people like branded tobacco packs more than they like plain packs. What's more relevant for policy, and what we really have no clue about, is whether changing the branding on packages has effects on aggregate sales or whether it works instead to break brand loyalty and move consumers to lower-cost no-name packs.

Here’s a pilot for a new project I’ve started–a chartcast–a visual discussion of charts and data. This first episode is a conversation with John Taylor on the economic recovery and how it compares to past recoveries.

This column looks at the use of incentive schemes, such as performance-related pay, in the British Labour government between 1997 and 2010. It finds that cash incentives do matter, but that their design is critical.

Public spending on large-scale projects is often a way of sneaking in protectionism through the back door and there are many cases of outright corruption. With the EU and US pushing hard for more open public procurement elsewhere in the world, this column asks just how open these markets are, particularly in the EU, which claims to have the most open market in the world.

Botticini and Eckstein document that Jews were not more educated before 1st century A.D. and most probably before 7th century A.D. Rather, as Salo Baron’s classic A Social and Religious History of the Jews also argues, the change in Jewish educational practices and institutions came out of an internal conflict about the control of Jewish society between two groups, the Pharisees and the Sadducees.

Do economists reach a conclusion on a given policy issue? One way to answer the question is to survey economists at large. Another is to look at the published judgments of economists who have gone on the record. Relative to an anonymous survey, going on the record makes for much greater accountability, and presumably more personal responsibility. I discuss eleven studies of economists’ published judgments. Several of them show greater support for liberalization than found among economists at large. This is offered as evidence of what I call the forsaken-liberty syndrome. I discuss the nature of this test of such syndrome and point to some of the larger questions to which it relates.

At one point Klein writes,

But, first, we look at three cases on which the on-record support for liberalization is high but about the same as for at-large economists. On the governmental subsidization of sports franchises, stadiums, and mega-events, Coates and Humphreys (2008) find a strong consensus among on-record economists. Meanwhile, a sample of at-large economists were asked by Whaples (2006) about whether “Local and state governments in the U.S. should eliminate subsidies to professional sports franchises.” Eighty-five percent either agreed (strongly or simply), and only five percent disagreed. It is fair to say that on-record and at-large economists are about the same on this issue. In my humble judgment, sports subsidies are an exemplary case of corporate welfare and public foolishness. On this matter, the Journal of Economic Perspectives has taken care to educate economists at large, publishing a fine analysis by Siegfried and Zimbalist (2000).

But what influence will this consensus have on local or national politicians? I'm guessing very little. :-(.

In chapter 3 the authors focus on the "incomplete contracts" or "property rights" approach to theory of the firm.

In section 3.2 Evans, Guthrie and Quigley write,

"Transaction-cost explanations for contractual incompleteness are unsatisfactory, because there is more incompleteness than can be accounted for by transaction costs (specifically, because there are many elements where contracting is not possible rather than just more costly than the alternative). Examples include situations where information is symmetric, but key contractible elements are not verifiable by either party. Even when transaction costs are zero, incomplete contracts may arise because parties cannot observe relevant economic variables, cannot verify those variables to a legal standard of proof, or prefer not to disclose information about themselves that would be required for a complete contract."

Now this I don't get. Aren't all cases of incompleteness driven by some form of transaction cost? If we think of transaction costs as the costs of market transactions, then in a zero transaction cost world contracts would be complete, since such contracts would be costless to write. Coase has written,

"The solution to the puzzles that I took with me to America [Why are there firms?] was, as it turned out, very simple. All that was needed was to recognize that there were costs of carrying out market transactions and to incorporate them into the analysis, something which economists had failed to do. A firm had therefore a role to play in the economic system if it were possible for transactions to be organized within the firm at less cost than would be incurred if the same transactions were carried out through the market. The limit to the size of the firm would be set when the scope of its operations had expanded to the point at which the costs of organizing additional transactions within the firm exceeded the costs of carrying out the same transactions through the market or in another firm."

The idea that when

"information is symmetric, but key contractible elements are not verifiable by either party"

we get incomplete contracts seems odd. If information is known to the contracting parties, what does it not being verifiable to the parties mean? If the information is not verifiable to a third party, e.g. a court, then a contract can be incomplete, but this is a different thing from information being non-verifiable to the contracting parties. If information is symmetric in that it is unknown to all contracting parties, will a contract be incomplete? The answer to this depends on whether or not the information is verifiable to a third party. If it is then the contracting parties can contract on it by just getting the third party to verify the information. If the information is not verifiable to the third party then a contract will be incomplete. But the reason for the incompleteness is not because the information is unknown to the contracting parties but rather because it is non-verifiable to the third party.

The idea that,

"Even when transaction costs are zero, incomplete contracts may arise because parties cannot observe relevant economic variables, cannot verify those variables to a legal standard of proof, or prefer not to disclose information about themselves that would be required for a complete contract"

also seems odd. I'm not sure what they mean when they say that information is non-observable to the contracting parties. If this means a moral hazard/adverse selection type framework then contracts are comprehensive, not incomplete. As Hart explains it,

"Although the optimal contract in a standard principal-agent model will not be first-best (since it cannot be conditioned directly on variables like effort that are observed by only one party), it will be 'comprehensive' in the sense that it will specify all parties' obligations in all future states of the world, to the fullest extent possible. As a result, there will never be a need for the parties to revise or renegotiate the contract as the future unfolds. The reason is that, if the parties ever changed or added a contract clause, this change or addition could have been anticipated and built into the original contract."

and

"One would also not expect to see any legal disputes in a comprehensive contracting world. The reason is that, since a comprehensive contract specifies everybody's obligations in every eventuality, the courts should simply enforce the contract as it stands in the event of a dispute."

Clearly such a contract is not incomplete. If Evans, Guthrie and Quigley mean that neither of the contracting parties can observe the variable then we are in the case discussed above, in which the important point is the verifiability of the variable to a third party. If the contracting parties,

"cannot verify those variables to a legal standard of proof"

then contracts could be incomplete. Non-verifiability of information to a third party such as a court is the standard argument as to why contracts are incomplete. But this argument can be countered by the Maskin and Tirole critique. Maskin and Tirole argue that information which is observable to the contracting parties can be made verifiable (to a third party) by the use of ingenious revelation mechanisms. The contracting parties write into their contract a game which, when played, gives them the appropriate incentives to truthfully reveal their private information in equilibrium. This undermines the non-verifiability approach to incomplete contracts.
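The flavour of such a revelation mechanism can be sketched in a toy message game (the payoff numbers are invented, and this simple version only checks unilateral deviations; the actual Maskin-Tirole constructions are considerably more sophisticated). Both parties observe the true state; the mechanism implements the state-contingent contract only when their reports agree, and fines both otherwise:

```python
# Toy revelation game: matching reports implement the state-contingent
# contract; mismatching reports trigger a fine for both parties.
STATES = ["low", "high"]
PAYOFFS = {"low": (3, 2), "high": (5, 4)}  # (party A, party B) if reports match
FINE = (-10, -10)                          # both punished on a mismatch

def outcome(report_a, report_b):
    return PAYOFFS[report_a] if report_a == report_b else FINE

def truthful_is_nash(true_state):
    """No party gains by unilaterally misreporting the observed state."""
    truthful = outcome(true_state, true_state)
    for lie in (s for s in STATES if s != true_state):
        if outcome(lie, true_state)[0] > truthful[0]:   # A misreports
            return False
        if outcome(true_state, lie)[1] > truthful[1]:   # B misreports
            return False
    return True

print(all(truthful_is_nash(s) for s in STATES))  # True
```

Because each party expects the other to report honestly, lying only triggers the fine, so the privately observed state becomes effectively verifiable to the court.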

If some of the contracting parties,

"prefer not to disclose information about themselves that would be required for a complete contract"

then it's hard to see that we are in a zero transaction costs world. Isn't not providing information the same as saying that the costs of contracting on that information are infinite? This looks like a very large transaction cost!

Under section 3.3 Evans, Guthrie and Quigley write,

"The starting point for this approach to the theory of the firm is the incompleteness of contracts. Since humans are boundedly rational, not all issues of relevance to a contract can be anticipated at the time of writing the contract."

But Oliver Hart argues,

"In the last few years, a literature has developed on the theory of incomplete contracts, and on applications of this theory to the understanding of organizations, such as firms. In this paper, I will argue that, while transaction costs of various sorts are a crucial ingredient of this literature, bounded rationality in the sense that agents have limited cognitive, computational or comprehension skills is not."

Given that Evans, Guthrie and Quigley argue that incompleteness is important for contracts, there is one question they need to answer: if it is possible for the contracting parties to fill any gaps in their contract as they go along, why is contractual incompleteness important? The reason is that renegotiation itself imposes costs. These costs can be both ex post, incurred at the time of renegotiation, and ex ante, incurred in anticipation of renegotiation.

In section 3.4 the point is made that,

"The incomplete contracting perspective embodied in this example represents a sharp break with the earlier transaction cost-based literature on the firm. Whereas incomplete contracts imply that inefficiencies arise because it was hard to foresee and contract about the uncertain future, earlier literature tended to take a “complete contracts” perspective in which imperfections arise as a result of moral hazard and asymmetric information."

I would read the "earlier literature" comment to refer to the transaction cost literature. But incomplete contracts are a central feature of the transaction cost approach. As Hart and Moore (2007) explain

Section 3.5 of the paper looks at the link between transaction costs, incentive-based theories and incomplete contracts. When discussing incentive-based theories of the firm Evans, Guthrie and Quigley write,

Incentive-based theories of the firm have their foundation in the analysis of the incentive problem between a principal and an agent. This approach assumes that there are many tasks and many instruments associated with the agency problems in a firm, and asset ownership is merely one of the instruments. Papers in this paradigm consider two ways to structure the agency problem: (i) where the agent does not own the asset (is an employee) and therefore has incentives provided by being paid on measured performance, and (ii) where the agent does own the asset (is an independent contractor) and receives both a payment based on measured performance and the value of the asset after production occurs.

This approach to the theory of the firm has in effect focused on the claimed distinction between the low-powered incentives associated with employment, and the high-powered incentives associated with contracting. Employees require low-powered incentives because they are not distracted by the contractor’s incentives to increase the value of the assets used for production. More generally, joint optimisation over asset ownership and contract parameters determines whether to conduct activity within the firm or outside.

The incentive-system theory of the firm is therefore related to the incomplete contracts literature, both in its use of ownership as an instrument and in its ability to provide a unified account of the costs and benefits of integration.

Incentive theory is normally thought of as a comprehensive contracts based theory and much of the literature is of this form. Think of moral hazard models. Incentive theory of this type is probably best understood as an extension of the neoclassical theory of the firm that inquires into the incentive conflicts that may hinder the firm from reaching its production possibility frontier. But not all incentive theory is of this kind. While it's not exactly clear what set of papers is being referred to above, I assume that papers like Bengt Holmstrom and Paul Milgrom's 1994 paper, "The Firm as an Incentive System", fall into this group.

Holmstrom and Milgrom here stress the importance of viewing the firm as "a system", specifically as a coherent set of complementary contractual arrangements which mitigate incentive conflicts. In their opinion, it is misleading to focus on any one single aspect of the coherent whole: the firm is characterized by the employee not owning the assets, by the employee being subject to a low-powered incentive scheme, and by the employee being subject to the authority of the employer. These “incentive instruments” are complementary: For example, in the presence of measurement costs, it is important that a person who does not own the assets which he uses is not subject to high-powered incentives, since he then is likely to care too little for the assets. Likewise, low-powered incentives make it important for the employer to be able to exercise authority over the use of the employee’s time, since the employee will lack the proper incentive to be productive. Due to this complementarity it is logical that independent contracting has the exact opposite constellation of instruments from the employment relationship.

The choice between the two different incentive systems depends importantly on the extent to which every dimension of a person’s contribution can be measured. When an important dimension is unmeasurable, it might be counterproductive to remunerate the person through a high-powered incentive scheme since the person is likely to allocate too little attention on the unmeasurable activity. Thus, according to Milgrom and Holmstrom lack of measurability is an important variable determining the size of the firm [...]. (Foss 2000)

An important point to note about this paper is that it is not only a principal-agent theory but also an incomplete-contracting theory. So the relationship between the two theories can be a very close one, the theories are not just related but can be usefully merged.
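The multitask logic can be put in numbers with a stylized sketch (my own illustration; the payoff function and figures are assumptions, not taken from Holmstrom and Milgrom): one unit of effort is split between a measurable task and an unmeasurable one, the unmeasurable task is the more valuable, and effort costs are ignored for simplicity.

```python
# Effort split: a goes to the measured task, (1 - a) to the unmeasured one.
def firm_value(a):
    return a + 2.0 * (1.0 - a)   # the unmeasured task is worth more (assumed weights)

def contractor_choice(piece_rate):
    """An independent contractor paid only on measured output picks the
    split maximizing her own pay, ignoring the unmeasured task entirely."""
    grid = [i / 100 for i in range(101)]
    return max(grid, key=lambda a: piece_rate * a)

a_contractor = contractor_choice(piece_rate=1.0)   # all effort on the measured task
a_employee = 0.0   # flat wage; the employer uses authority to direct effort

print(firm_value(a_contractor), firm_value(a_employee))   # 1.0 2.0
```

The high-powered scheme pulls all effort onto the measurable dimension and destroys value, while the flat-wage-plus-authority combination does not: this is the complementarity between low-powered incentives and employer authority stressed above.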

In the past I have argued that transaction cost and property rights theories are "orthogonal" to each other. In a discussion of the differences between the Grossman-Hart-Moore (GHM) theory of the firm and the transaction-cost approach, Williamson (2000, pp. 605–606) argues that the most important difference between them is that GHM introduce inefficiencies at the ex ante investment stage while the transaction-cost approach emphasises that ex post haggling and maladaptation drive inefficiencies. There are no ex post inefficiencies in GHM due to their assumption of common knowledge and ex post costless bargaining. Gibbons (2010, p. 283) explains it this way:

‘[t]he model in question is Grossman and Hart’s (1986), which explores an alternative to Williamson’s (2000, p. 605) emphasis that “maladaptation in the contract execution interval is the principal source of inefficiency.” Instead, in the Grossman-Hart model, there is zero maladaptation in the contract execution interval, and the sole inefficiency is in endogenous specific investments.

It is striking how different the logic of inefficient investment can be from the logic of inefficient haggling. In their pure forms envisioned here, the two can be seen as complements. For example, the lock-in necessary for Williamson’s focus on inefficient haggling could result from contractible specific investments chosen at efficient levels. But by assuming efficient bargaining and hence zero maladaptation in the contract execution interval, Grossman and Hart focused attention on non-contractible specific investments and hence discovered an important new determinant of the make-or-buy decision: in the Grossman-Hart model, an important benefit of non-integration is that both parties have incentives to invest; in Williamson’s argument, an important cost of non-integration is inefficient haggling. In short, the two theories are simply different’.

This emphasis on ex post haggling and maladaptation can be interpreted as reflecting a view that internal organisation is better at reconciling the conflicting interests of the parties to a transaction and facilitating adaptation to changing supply and demand conditions when such costs are high.
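The ex ante inefficiency that GHM focus on can be made concrete with a textbook hold-up calculation (a generic illustration, not drawn from the Evans, Guthrie and Quigley paper; the surplus function is an assumed form): the seller sinks a non-contractible specific investment i, gross surplus is 2*sqrt(i), and ex post the surplus is split 50/50 by Nash bargaining, so the seller captures only half the marginal return on her investment.

```python
import math

def surplus(i):
    return 2.0 * math.sqrt(i)   # gross trade surplus from investment i (assumed form)

grid = [k / 1000 for k in range(2001)]   # candidate investment levels 0.000..2.000

# First best: maximize total surplus net of investment cost.
i_first_best = max(grid, key=lambda i: surplus(i) - i)

# Hold-up: the investment is non-contractible and ex post surplus is split
# 50/50, so the seller weighs only half the return against the full cost.
i_holdup = max(grid, key=lambda i: 0.5 * surplus(i) - i)

print(i_first_best, i_holdup)   # 1.0 0.25: the seller underinvests
```

Note that nothing here is inefficient ex post, since bargaining divides the surplus costlessly; the entire welfare loss sits in the ex ante investment decision, which is exactly the contrast with Williamson's ex post haggling that Gibbons draws.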

One point worth making is that the reference point approach to the firm (not much discussed in the Evans, Guthrie and Quigley paper), which has grown in very recent times out of the property rights approach, can be seen as a move away from the ex ante GHM approach and back towards transaction cost thinking, in so much as trade is not perfectly contractible ex post.

Chapter 4 of the Evans, Guthrie and Quigley paper is on Real Options and Investment.

Thursday, 6 September 2012

Ludwig von Mises's Human Action is still the key summation of the Austrian school of economics. In it, Mises describes certain conclusions, those of praxeology, as having a special epistemological status: They are deductive conclusions that are not subject to falsification. In plain language, they cannot fail to be true: While the findings of history may always be subject to revision—if new evidence is discovered, say, or if old evidence is found to be unreliable—the conclusions of praxeology will always be valid.

This move has brought critics of Austrian economics to cry foul. Such critics are apt to see the Austrian school as a group of almost Cartesian rationalists, deducing economic theorems that, while perhaps interesting in their own right, can by definition have no purchase in the real world of economic policy and the study of human events.

Professor Steven Horwitz begs to differ. In his lead essay he argues that logical deduction has a strictly limited role to play in economics, and that Austrian economists are indeed making important empirical contributions to the field. Further, he argues that the Austrian school stands to teach mainstream economics a good deal about how to conduct empirical observations and interpret them properly. To discuss with Horwitz, we have invited three other distinguished economists, each of whom has been influenced by the Austrian school — while ultimately settling elsewhere methodologically: Bryan Caplan, George A. Selgin, and Antony Davies.

The new public management of the 1980s was based in part on a range of important new insights about the role of transaction and agency costs arising from contractual incompleteness in defining the boundaries of the firm and the governance relationships within it. In this paper, we consider the literature of the last 25 years which extends our understanding of allocations of ownership rights and the boundaries of the firm as responses to contractual incompleteness. From this perspective, ownership represents an allocation of control rights to those with the potential to make the most important (value-enhancing) relationship-specific investments. We provide an outline of this modern approach to contractual incompleteness, illustrate its application to a range of issues in public and private ownership, investment, governance and decision-making, and provide suggestions about the impact that this approach might have on the scope, structure and management of the public sector in the 21st century.

I have done a quick read of the first three chapters of the paper. They do manage to get the references to the chapters of the paper wrong in the Introduction: when they refer to Chapter 2 they mean Chapter 3, when they refer to Chapter 3 they mean Chapter 4, and so on.

In Chapter 2 they review the microeconomic foundations of the state-sector reform in New Zealand after 1984. The chapter looks at the economic theories that were important in the formulation and implementation of New Zealand's public management framework after 1984. The major influences from economics were the new institutional economics, principal-agent theory, transaction costs and information economics. At one point Evans, Guthrie and Quigley write,

Since the issue was first raised by Ronald Coase in the 1930s, academic economists have developed increasingly sophisticated theories of why firms exist, why some economic activity is organised within the market and some is organised within firms, and how the efficient boundaries of firms are determined.

Some economists would date the start of the modern theory of the firm from Knight (1921) rather than Coase (1937). Demsetz (1988: 244) goes so far as to state,

"[ ... ] it can be said without hesitation that Knight launched the modern theory of the firm in 1921".

I do sometimes wonder just how sophisticated even the modern theories of the firm really are. As Oliver Hart has written,

"[a]n outsider to the field of economics would probably take it for granted that economists have a highly developed theory of the firm. After all, firms are the engines of growth of modern capitalistic economies, and so economists must surely have fairly sophisticated views of how they behave. In fact, little could be further from the truth. Most formal models of the firm are extremely rudimentary, capable only of portraying hypothetical firms that bear little relation to the complex organizations we see in the world. Furthermore, theories that attempt to incorporate real world features of corporations, partnerships and the like often lack precision and rigor, and have therefore failed, by and large, to be accepted by the theoretical mainstream". (Hart 1989: 1757).

In 2008 Hart said of the 1989 quote,

"[t]he language of 1989 is strong, and I'd probably tone it down a bit now. There's been a lot of work in the last twenty years, and some progress. However, we are still not at the point where we have good models of the internal organization of large firms".

In section 2.3 Evans, Guthrie and Quigley write,

"The Treasury (1987:37-39) set out a transaction-cost and incentive-based theory of the limitations of state ownership. It motivated the benefits of private ownership by drawing attention to the agency problems associated with information acquisition and performance management under state ownership given the complex objectives of state entities and the absence of market monitoring and competition."

The problem with the complete contracts approach to ownership was not shown until the late 1980s when the ownership neutrality theorems started to appear. These theorems give conditions, in particular complete contracts, under which private or public ownership of productive assets is irrelevant for the allocation of resources. As Hart (2003) sums it up,

"One of the insights of the recent literature on the firm is that, if the only imperfections are those arising from moral hazard or asymmetric information, organisational form - including ownership and firm boundaries - does not matter: an owner has no special power or rights since everything is specified in an initial contract (at least among the things that can ever be specified). In contrast, ownership does matter when contracts are incomplete: the owner of an asset or firm can then make all decisions concerning the asset or firm that are not included in an initial contract (the owner has 'residual control rights')."

In section 2.4 Evans, Guthrie and Quigley outline what they see as some of the unresolved issues with public management,

The boundaries between the state and the private sector, including:

the case for public investment where the private sector is unwilling to invest, and

the allocation of ownership and service delivery between the private and public sectors.

The place of competition in the public sector and, in particular:

the role of competition in promoting greater efficiency in the delivery of services and the management of assets within the public sector, and between the public and private sectors, and

the balance between competitive discovery of efficient solutions to operational and organisational problems, and single national approaches to investment and public-sector infrastructure.

The need for stronger individual and organisational incentives for performance, and more effective mechanisms for the measurement and monitoring of that performance. Gill and Hitchener (2010:498) argue that while the vertical structures of accountability created under the Public Finance Act and the State Sector Act were designed to allow greater scrutiny of performance of ministers, chief executives and their departments or agencies, in practice there is relatively little use of performance information by central agencies, other than as a measure of bottom-line performance when things go wrong, and that this has tended to reinforce rather than mitigate the “well known bureaucratic pathologies of public organisations, in particular risk-averse, rule-driven behaviour.”

The effectiveness of the governance and management of the public sector as a whole, including the role of advisory and governance boards, the central monitoring agencies, and the challenge of producing more effective mechanisms for solving problems and developing innovative new approaches to policy where policy issues span the mandates of multiple teams and multiple government organisations. Scott et al (2010) point out that there have been consistent concerns about the ability of the public sector to deliver quality and innovative policy advice on the big issues that are of relevance to multiple departments and entities.

The rest of the paper looks at the recent academic literature to search for answers to these problems.