Sunday, October 30, 2011

I have a column coming out in Bloomberg View sometime this evening (US time). It touches on the European debt crisis and the issue of outstanding credit default swaps. This post is intended to provide a few more technical details on the study by Stefano Battiston and colleagues, which I mention in the column, showing that more risk sharing between institutions can, in some cases, lead to greater systemic risk. [Note: this work was carried out as part of an ambitious European research project called Forecasting Financial Crises, which brings together economists, physicists, computer scientists and others in an effort to forge new insights into economic systems by exploiting ideas from other areas of science.]

The authors of this study start out by noting the obvious: that credit networks can help institutions both to pool resources, achieving things they couldn't on their own, and to diversify against the risks they face. At the same time, the linking together of institutions by contracts implies a greater chance for the propagation of financial stress from one place to another. The same is true of any network, such as the electrical grid -- sharing demands among many generating stations makes for a more adaptive and efficient system, able to handle fluctuations in demand, yet it also means that failures can spread across much of the network very quickly. New Orleans can be blacked out in a few seconds because a tree fell in Cleveland.

In banking, the authors note, Allen and Gale (references given in the paper) did some pioneering work on the properties of credit networks:

... in their pioneering contribution Allen and Gale reach the conclusion that if the credit network of the interbank market is a credit chain – in which each agent is linked only to one neighbor along a ring – the probability of a collapse of each and every agent (a bankruptcy avalanche) in case a node is hit by a shock is equal to one. As the number of partners of each agent increases, i.e. as the network evolves toward completeness, the risk of a collapse of the agent hit by the shock goes asymptotically to zero, thanks to risk sharing. The larger the pool of connected neighbors whom the agent can share the shock with, the smaller the risk of a collapse of the agent and therefore of the network, i.e. the higher network resilience. Systemic risk is at a minimum when the credit network is complete, i.e. when agents fully diversify individual risks. In other words, there is a monotonically decreasing relationship between the probability of individual failure/systemic risk and the degree of connectivity of the credit network.

This is essentially the positive story of risk sharing which is taken as the norm in much thinking about risk management. More sharing is better; the probability of individual failure always decreases as the density of risk-sharing links grows.
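The Allen–Gale contrast is easy to make concrete in a toy loss-cascade simulation. The sketch below is my own illustration, not Allen and Gale's model, and the capital and shock numbers are made up: one bank is hit with a shock larger than its capital buffer, and each failed bank passes its uncovered loss equally to its still-solvent neighbors. On a ring the excess loss marches down the chain, while in the complete network it is diluted among all the other banks and absorbed.

```python
def cascade_failures(neighbors, capital, shock, seed_bank=0):
    """Propagate a shock through a credit network.

    A bank fails when its accumulated loss exceeds its capital buffer;
    a failed bank passes its uncovered loss equally among its
    still-solvent neighbors.  Returns the set of failed banks."""
    losses = {b: 0.0 for b in neighbors}
    losses[seed_bank] = shock
    failed = set()
    frontier = [seed_bank]
    while frontier:
        b = frontier.pop()
        if b in failed or losses[b] <= capital:
            continue
        failed.add(b)
        excess = losses[b] - capital
        alive = [n for n in neighbors[b] if n not in failed]
        if not alive:
            continue
        for n in alive:
            losses[n] += excess / len(alive)
            if losses[n] > capital:
                frontier.append(n)
    return failed

N = 20
ring = {i: [(i + 1) % N] for i in range(N)}                      # credit chain
complete = {i: [j for j in range(N) if j != i] for i in range(N)}

print(len(cascade_failures(ring, capital=1.0, shock=5.0)))       # avalanche: 4 banks fail
print(len(cascade_failures(complete, capital=1.0, shock=5.0)))   # contained: 1 bank fails
```

Here the chain produces an avalanche of four failures while the complete network contains the damage to the bank originally hit; making the shock larger lengthens the avalanche on the ring but leaves the complete network unaffected until the shock is enormous. In Allen and Gale's richer setting the chain outcome is starker still: the whole ring collapses.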

This is not what Battiston and colleagues find under slightly more general assumptions of how the network is put together and how institutions interact. I'll give a brief outline of what is different in their model in a moment; what comes out of it is the very different conclusion that...

The larger the number of connected neighbors, the smaller the risk of an individual collapse but the higher systemic risk may be and therefore the lower network resilience. In other words, in our paper, the relationship between connectivity and systemic risk is not monotonically decreasing as in Allen and Gale, but hump shaped, i.e. decreasing for relatively low degree of connectivity and increasing afterwards.

Note that they are making a distinction between two kinds of risk: 1. individual risk, arising from factors specific to one bank's own business, which can drive it bankrupt, and 2. systemic risk, arising from the propagation of financial distress through the system. As in Allen and Gale, they find that individual risk DOES decrease with increasing connectivity: banks become more resistant to shocks originating in their own business. But systemic risk DOES NOT decrease; it grows with higher connectivity, and can win out in determining the overall chance that a bank goes bankrupt. In effect, the effort on the part of many banks to manage their own risks can end up creating a new systemic risk that is worse than the risk they have reduced through risk sharing.

There are two principle elements in the credit network model they study. First is the obvious fact that resilience of an institution in such a network depends on the resilience of those with whom it shares risks. Buying CDS against the potential default of your Greek bonds is all well and good as long as the bank from whom you purchased the CDS remains solvent. In the 2008 crisis, Goldman Sachs and other banks had purchased CDS from A.I.G. to cover their exposure to securitized mortgages, but those CDS would have been more or less without value had the US government not stepped in to bail out A.I.G.

The second element of the model is very important, and it's something I didn't have space to mention in the Bloomberg essay. This is the notion that financial distress tends to have an inherently nonlinear aspect to it -- some trouble or distress tends to bring more in its wake. Battiston and colleagues call this "trend reinforcement," and describe it as follows:

... trend reinforcement is also quite a general mechanism in credit networks. It can occur in at least two situations. In the first one (see e.g. in (Morris and Shin, 2008)), consider an agent A that is hit by a shock due a loss in value of some securities among her assets. If such shock is large enough, so that some of A’s creditors claim their funds back, A is forced to fire-sell some of the securities in order to pay the debt. If the securities are sold below the market price, the asset side of the balance sheet is decreasing more than the liability side and the leverage of A is unintentionally increased. This situation can lead to a spiral of losses and decreasing robustness (Brunnermeier, 2008; Brunnermeier and Pederson, 2009). A second situation is the one in which when the agent A is hit by a shock, her creditor B makes condition to credit harder in the next period. Indeed it is well documented that lenders ask a higher external finance premium when the borrowers’ financial conditions worsen (Bernanke et al., 1999). This can be seen as a cost from the point of view of A and thus as an additional shock hitting A in the next period. In both situations, a decrease in robustness at period t increases the chance of a decrease in robustness at period t + 1.

It is the interplay of such positive feedback with the propagation of distress in a dense network which causes the overall increase in systemic risk at high connectivity.

I'm not going to wade into the detailed mathematics. Roughly speaking, the authors develop some stochastic equations to follow the evolution of a bank's "robustness" R -- considered to be a number between 0 and 1, with 1 being fully robust. A bankruptcy event is marked by R passing through 0. This is a standard approach in the finance literature on modeling corporate bankruptcies. The equations they derive incorporate their assumptions about the positive influences of risk sharing and the negative influences of distress propagation and trend reinforcement.
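As a rough illustration of this first-passage setup, one can simulate R as a random walk absorbed at 0, with risk sharing among k partners shrinking the idiosyncratic shocks by a factor 1/sqrt(k), and trend reinforcement adding an extra hit after any period in which R fell. To be clear, this is my own sketch, not the authors' equations, and every parameter value below is invented for illustration.

```python
import random

def bankruptcy_time(k, alpha, sigma=0.25, r0=0.75, t_max=10_000, rng=None):
    """First-passage time of robustness R through 0.

    Risk sharing with k partners shrinks shocks by 1/sqrt(k); trend
    reinforcement: a drop at time t adds an extra hit alpha at t+1.
    All parameter values are illustrative, not taken from the paper."""
    rng = rng or random
    r, prev_drop = r0, False
    for t in range(1, t_max + 1):
        shock = rng.gauss(0.0, sigma / k ** 0.5)
        if prev_drop:
            shock -= alpha          # distress breeds distress
        prev_drop = shock < 0
        r = min(r + shock, 1.0)     # robustness is capped at 1
        if r <= 0:
            return t                # bankruptcy: R passed through 0
    return t_max                    # survived the whole horizon

random.seed(42)
# average survival time with and without trend reinforcement
t_plain = sum(bankruptcy_time(k=4, alpha=0.0) for _ in range(200)) / 200
t_trend = sum(bankruptcy_time(k=4, alpha=0.05) for _ in range(200)) / 200
print(round(t_plain, 1), round(t_trend, 1))
```

With the reinforcement term switched on, downward runs become self-perpetuating and average survival times drop sharply -- exactly the nonlinearity driving the systemic-risk story.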

The key result shows up clearly in the figure (below), which plots the probability per unit time that a bank in the network goes bankrupt versus the amount of risk-sharing connectivity in the network (here given by k, the number of partners with which each bank shares risks). It may not be easy to see, but the figure shows a dashed line (labeled 'baseline') which reflects the classical result on risk sharing in the absence of trend reinforcement: more connectivity is always good. But the red curve shows the more realistic result with trend reinforcement -- the positive feedback associated with financial distress -- taken into account. Now adding connectivity is only good for a while, and eventually becomes positively harmful. There's a middle range of optimal connectivity, beyond which more connections only put banks in greater danger.
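For readers who want to poke at this trade-off themselves, here is a deliberately crude sandbox in the spirit of the model -- emphatically not Battiston et al.'s actual equations; the network size, shock scale and contagion cost are all invented. Each bank averages its shock with k partners (the diversification benefit), but takes a fixed hit whenever a partner fails (the contagion cost):

```python
import random

def failure_rate(n, k, a, sigma=0.3, r0=0.75, horizon=200, trials=30, seed=0):
    """Fraction of banks failing within the horizon, in a toy network
    where each bank shares shocks with k random partners and takes a
    hit `a` whenever a partner fails.  Parameter values are made up."""
    rng = random.Random(seed)
    failed_total = 0
    for _ in range(trials):
        partners = [rng.sample([j for j in range(n) if j != i], k)
                    for i in range(n)]
        r = [r0] * n
        alive = [True] * n
        for _ in range(horizon):
            shocks = [rng.gauss(0.0, sigma) for _ in range(n)]
            newly_failed = []
            for i in range(n):
                if not alive[i]:
                    continue
                # diversification: bank i bears the average of its own
                # shock and those of its k partners
                pool = [shocks[i]] + [shocks[j] for j in partners[i]]
                r[i] = min(r[i] + sum(pool) / len(pool), 1.0)
                if r[i] <= 0:
                    alive[i] = False
                    newly_failed.append(i)
            # contagion: each bank that listed f as a partner takes a
            # hit `a` (one round of contagion per period, for brevity)
            for f in newly_failed:
                for i in range(n):
                    if alive[i] and f in partners[i]:
                        r[i] -= a
                        if r[i] <= 0:
                            alive[i] = False
        failed_total += alive.count(False)
    return failed_total / (n * trials)

for k in (1, 2, 4, 8, 16):
    print(k, round(failure_rate(n=30, k=k, a=0.4), 3))
```

Sweeping k trades the averaging benefit against the growing exposure to partner failures. Whether and where a hump appears depends entirely on the made-up parameters, so treat the output as qualitative at best.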

Finally, the authors of this paper make very interesting observations about the potential relevance of this model to globalization, which has been an experiment in risk sharing on a global scale, with an outcome -- at the moment -- which appears not entirely positive:

In a broader perspective, this conceptual framework may have far reaching implications also for the assessment of the costs and benefits of globalization. Since some credit relations involve agents located in different countries, national credit networks are connected in a world wide web of credit relationships. The increasing interlinkage of credit networks – one of the main features of globalization – allows for international risk sharing but it also makes room for the propagation of financial distress across borders. The recent, and still ongoing, financial crisis is a case in point.

International risk sharing may prevail in the early stage of globalization, i.e. when connectivity is relatively ”low”. An increase in connectivity at this stage therefore may be beneficial. On the other hand, if connectivity is already high, i.e. in the mature stage of globalization, an increase in connectivity may bring to the fore the internationalization of financial distress. An increase in connectivity, in other words, may increase the likelihood of financial crises worldwide.

Which is, in part, why we're not yet out of the European debt crisis woods.

Friday, October 28, 2011

If you haven't already heard about this new study on the network of corporate control, do have a look. The idea behind it was to use network analysis of who owns whom in the corporate world (established through stock ownership) to tease out centrality of control. New Scientist magazine offers a nice account, which starts as follows:

The study's assumptions have attracted some criticism, but complex systems analysts contacted by New Scientist say it is a unique effort to untangle control in the global economy. Pushing the analysis further, they say, could help to identify ways of making global capitalism more stable.

The idea that a few bankers control a large chunk of the global economy might not seem like news to New York's Occupy Wall Street movement and protesters elsewhere (see photo). But the study, by a trio of complex systems theorists at the Swiss Federal Institute of Technology in Zurich, is the first to go beyond ideology to empirically identify such a network of power. It combines the mathematics long used to model natural systems with comprehensive corporate data to map ownership among the world's transnational corporations (TNCs).

But also have a look at the web site of the project behind the study, the European project Forecasting Financial Crises, where the authors have tried to clear up several common misinterpretations of just what the study shows.

Indeed, I know the members of this group quite well. They're great scientists and this is a beautiful piece of work. If you know a little about natural complex networks, then the structures found here actually aren't terrifically surprising. However, they are interesting, and it's very important to have the structure documented in detail. Moreover, just because the structure observed here is very common in real-world complex networks doesn't mean it's something that is good for society.

An excellent if brief article at Salon.com gives some useful historical context to the current animosity toward bankers -- it's nothing new. Several interesting quotes from key figures in the past:

“Behind the ostensible government sits enthroned an invisible government owing no allegiance and acknowledging no responsibility to the people. To destroy this invisible government, to befoul this unholy alliance between corrupt business and corrupt politics is the first task of statesmanship.”

Theodore Roosevelt, 1912

“We have in this country one of the most corrupt institutions the world has ever known. I refer to the Federal Reserve Board and the Federal Reserve Banks. The Federal Reserve Board, a Government board, has cheated the Government of the United States and the people of the United States out of enough money to pay the national debt. The depredations and the iniquities of the Federal Reserve Board and the Federal Reserve banks acting together have cost this country enough money to pay the national debt several times over…

“Some people think the Federal Reserve Banks are United States Government institutions. They are not Government institutions. They are private credit monopolies, which prey upon the people of the United States for the benefit of themselves and their foreign customers, foreign and domestic speculators and swindlers, and rich and predatory money lenders.”

Louis McFadden, chairman of the House Committee on Banking and Currency, 1932

I should have known this, but didn't -- the Federal Reserve Banks are not United States Government institutions. They are indeed owned by the private banks themselves, even though the Fed has control over taxpayer funds. This seems dubious in the extreme to me, although I'm sure there are many arguments to consider. I recall reading arguments about the required independence of the central bank, but independence is of course not the same as "control by the private banks." Maybe we need to change the governance of the Fed and install some oversight with real power from a non-banking, non-governmental element.

And my favourite:

“Banks are an almost irresistible attraction for that element of our society which seeks unearned money.”

FBI head J. Edgar Hoover, 1955.

In recent years, the attraction has been very strong indeed.

This is why knowing history is so important. Many battles have been fought before.

Thursday, October 27, 2011

Don't miss this post by Matt Taibbi on the Occupy Wall St. movement and its roots as an anti-corruption movement:

People aren't jealous and they don’t want privileges. They just want a level playing field, and they want Wall Street to give up its cheat codes, things like:

FREE MONEY. Ordinary people have to borrow their money at market rates. Lloyd Blankfein and Jamie Dimon get billions of dollars for free, from the Federal Reserve. They borrow at zero and lend the same money back to the government at two or three percent, a valuable public service otherwise known as "standing in the middle and taking a gigantic cut when the government decides to lend money to itself."

Or the banks borrow billions at zero and lend mortgages to us at four percent, or credit cards at twenty or twenty-five percent. This is essentially an official government license to be rich, handed out at the expense of prudent ordinary citizens, who now no longer receive much interest on their CDs or other saved income. It is virtually impossible to not make money in banking when you have unlimited access to free money, especially when the government keeps buying its own cash back from you at market rates.

Your average chimpanzee couldn't fuck up that business plan, which makes it all the more incredible that most of the too-big-to-fail banks are nonetheless still functionally insolvent, and dependent upon bailouts and phony accounting to stay above water. Where do the protesters go to sign up for their interest-free billion-dollar loans?

CREDIT AMNESTY. If you or I miss a $7 payment on a Gap card or, heaven forbid, a mortgage payment, you can forget about the great computer in the sky ever overlooking your mistake. But serial financial fuckups like Citigroup and Bank of America overextended themselves by the hundreds of billions and pumped trillions of dollars of deadly leverage into the system -- and got rewarded with things like the Temporary Liquidity Guarantee Program, an FDIC plan that allowed irresponsible banks to borrow against the government's credit rating.

This is equivalent to a trust fund teenager who trashes six consecutive off-campus apartments and gets rewarded by having Daddy co-sign his next lease. The banks needed programs like TLGP because without them, the market rightly would have started charging more to lend to these idiots. Apparently, though, we can’t trust the free market when it comes to Bank of America, Goldman, Sachs, Citigroup, etc.

In a larger sense, the TBTF banks all have the implicit guarantee of the federal government, so investors know it's relatively safe to lend to them -- which means it's now cheaper for them to borrow money than it is for, say, a responsible regional bank that didn't jack its debt-to-equity levels above 35-1 before the crash and didn't dabble in toxic mortgages. In other words, the TBTF banks got better credit for being less responsible. Click on freecreditscore.com to see if you got the same deal.

STUPIDITY INSURANCE. Defenders of the banks like to talk a lot about how we shouldn't feel sorry for people who've been foreclosed upon, because it's their own fault for borrowing more than they can pay back, buying more house than they can afford, etc. And critics of OWS have assailed protesters for complaining about things like foreclosure by claiming these folks want “something for nothing.”

This is ironic because, as one of the Rolling Stone editors put it last week, “something for nothing is Wall Street’s official policy.” In fact, getting bailed out for bad investment decisions has been de rigueur on Wall Street not just since 2008, but for decades.

Time after time, when big banks screw up and make irresponsible bets that blow up in their faces, they've scored bailouts. It doesn't matter whether it was the Mexican currency bailout of 1994 (when the state bailed out speculators who gambled on the peso) or the IMF/World Bank bailout of Russia in 1998 (a bailout of speculators in the "emerging markets") or the Long-Term Capital Management Bailout of the same year (in which the rescue of investors in a harebrained hedge-fund trading scheme was deemed a matter of international urgency by the Federal Reserve), Wall Street has long grown accustomed to getting bailed out for its mistakes.

The 2008 crash, of course, birthed a whole generation of new bailout schemes. Banks placed billions in bets with AIG and should have lost their shirts when the firm went under -- AIG went under, after all, in large part because of all the huge mortgage bets the banks laid with the firm -- but instead got the state to pony up $180 billion or so to rescue the banks from their own bad decisions.

This sort of thing seems to happen every time the banks do something dumb with their money...

I have little time to post this week as I have to meet several writing deadlines, but I wanted to briefly mention this wonderful and extremely insightful speech by Adair Turner from last year (there's a link to the video of the speech here). Turner offers so many valuable perspectives that the speech is worth reading and re-reading; here are a few short highlights that caught my attention.

First, Turner mentions that the conventional wisdom about the wonderful self-regulating efficiency of markets is really a caricature of the real economic theory of markets, which notes many possible shortcomings (asymmetric information, incomplete markets, etc.). However, he also notes that this conventional wisdom is still what has been most influential in policy circles:

... why, we might ask, do we need new economic thinking when old economic thinking has been so varied and fertile? ... Well, we need it because the fact remains that while academic economics included many strains, in the translation of ideas into ideology, and ideology into policy and business practice, it was one oversimplified strain which dominated in the pre-crisis years.

What was that "oversimplified strain"? Turner summarizes it as follows:

For over half a century the dominant strain of academic economics has been concerned with exploring, through complex mathematics, how economically rational human beings interact in markets. And the conclusions reached have appeared optimistic, indeed at times panglossian. Kenneth Arrow and Gerard Debreu illustrated that a competitive market economy with a fully complete set of markets was Pareto efficient. New classical macroeconomists such as Robert Lucas illustrated that if human beings are not only rational in their preferences and choices but also in their expectations, then the macro economy will have a strong tendency towards equilibrium, with sustained involuntary unemployment a non-problem. And tests of the efficient market hypothesis appeared to illustrate that liquid financial markets are not driven by the patterns of chartist fantasy, but by the efficient processing of all available information, making the actual price of a security a good estimate of its intrinsic value.

As a result, a set of policy prescriptions appeared to follow:

· Macroeconomic policy – fiscal and monetary – was best left to simple, constant and clearly communicated rules, with no role for discretionary stabilisation.

· Deregulation was in general beneficial because it completed more markets and created better incentives.

· Financial innovation was beneficial because it completed more markets, and speculative trading was beneficial because it ensured efficient price discovery, offsetting any temporary divergences from rational equilibrium values.

· And complex and active financial markets, and increased financial intensity, not only improved efficiency but also system stability, since rationally self-interested agents would disperse risk into the hands of those best placed to absorb and manage it.

In other words, all the nuances of the economic theories showing the many limitations of markets seem to have made little progress in getting into the minds of policy makers, thwarted by ideology and the very simple story espoused by the conventional wisdom. Insidiously, the vision of efficient markets so transfixed people that it was assumed that the correct policy prescriptions must be those which would take the system closer to the theoretical ideal (even if that ideal was quite possibly a theorist's fantasy having little to do with real markets), rather than further away from it:

What the dominant conventional wisdom of policymakers therefore reflected was not a belief that the market economy was actually at an Arrow-Debreu nirvana – but the belief that the only legitimate interventions were those which sought to identify and correct the very specific market imperfections preventing the attainment of that nirvana. Transparency to reduce the costs of information gathering was essential: but recognising that information imperfections might be so deep as to be unfixable, and that some forms of trading activity might be socially useless, however transparent, was beyond the ideology...

Turner goes on to argue that the more nuanced views of markets as very fallible systems didn't have much influence mostly because of ideology and, in short, power interests on the part of Wall St., corporations and others benefiting from deregulation and similar policies. I think it is also fair to say that economists as a whole haven't done a very good job of shouting loudly that markets cannot be trusted to know best, or that they will only give good outcomes in a restricted set of circumstances. Why haven't there been 10 or so books by prominent economists with titles like "markets are often over-rated"?

But perhaps the most important point he makes is that we shouldn't expect a "theory of everything" to emerge from efforts to go beyond the old conventional wisdom of market efficiency:

...one of the key messages we need to get across is that while good economics can help address specific problems and avoid specific risks, and can help us think through appropriate responses to continually changing problems, good economics is never going to provide the apparently certain, simple and complete answers which the pre-crisis conventional wisdom appeared to. But that message is itself valuable, because it will guard against the danger that in the future, as in the recent past, we sweep aside common sense worries about emerging risks with assurances that a theory proves that everything is OK.

That is indeed a very important message.

The speech goes on to touch on many other topics, all with a fresh and imaginative perspective. Abolish banks? That sounds fairly radical, but it's important to realise that things we take for granted aren't fixed in stone, and may well be the source of problems. And abolishing banks as we know them has been suggested before by prominent people:

Larry Kotlikoff indeed, echoing Irving Fisher, believes that a system of leveraged fractional reserve banks is so inherently unstable that we should abolish banks and instead extend credit to the economy via mutual loan funds, which are essentially banks with 100% equity capital requirements. For reasons I have set out elsewhere, I’m not convinced by that extremity of radicalism. ... But we do need to ensure that debates on capital and liquidity requirements address the fundamental issues rather than simply choices at the margin. And that requires economic thinking which goes back to basics and which recognises the importance of specific evolved institutional structures (such as fractional reserve banking), rather than treating existing institutional structures either as neutral pass-throughs in economic models or as facts of life which cannot be changed.

Tuesday, October 25, 2011

From the New York Times (by way of Simon Johnson), a beautiful (and scary) picture of the various debt connections among European nations. (Best to right click and download and then open so you can easily zoom in and out as the picture is mighty big.)

My question is - what happens if the Euro does collapse? Do European nations have well-planned emergency measures to restore the Franc, Deutschmark, Lira and other European currencies quickly? Somehow I'm not feeling reassured.

Monday, October 24, 2011

This is no joke. Studies show that if you examine the genetic material of your typical banker, you'll find that only about 10% of it takes human form. The other 90% is much more slimy and has been proven to be of bacterial origin. That's 9 genes out of 10: bankers are mostly bacteria. Especially Lloyd Blankfein. This is all based on detailed state-of-the-art genetic science, as you can read in this new article in Nature.

OK, I am of course joking. The science shows that we're all like this, not only the bankers. Still, the title of this post is not false. It just leaves something out. Probably not unlike the sales documentation or presentations greasing the wheels of the infamous Goldman Sachs Abacus deals.

Saturday, October 22, 2011

It's encouraging to see that the president of the Federal Reserve Bank of Kansas City has come out arguing that "too big to fail" banks are "fundamentally inconsistent with capitalism." See the speech of Thomas Hoenig. One excerpt:

“How can one firm of relatively small global significance merit a government bailout? How can a single investment bank on Wall Street bring the world to the brink of financial collapse? How can a single insurance company require billions of dollars of public funds to stay solvent and yet continue to operate as a private institution? How can a relatively small country such as Greece hold Europe financially hostage? These are the questions for which I have found no satisfactory answers. That’s because there are none. It is not acceptable to say that these events occurred because they involved systemically important financial institutions.

Because there are no satisfactory answers to these questions, I suggest that the problem with SIFIs is they are fundamentally inconsistent with capitalism. They are inherently destabilizing to global markets and detrimental to world growth. So long as the concept of a SIFI exists, and there are institutions so powerful and considered so important that they require special support and different rules, the future of capitalism is at risk and our market economy is in peril.”

Thursday, October 20, 2011

Take a look at this on the transparency of the Federal Reserve (from Financeaddict) compared to other large nations' central banks. Then watch this, where Timothy Geithner tries very hard to slip sleazily away from any mention of the $13 billion that went directly from AIG to politically well-connected Goldman Sachs. "Did you have conversations with the AIG counterparties?" Response -- waffle, evade, waffle, stare, mumble. After that, try to tell me that the US is not neck deep in serious political corruption.

Following my second recent post on what moves the markets, two readers posted interesting and noteworthy comments, and I'd like to explore them a little. I had presented evidence in the post that many large market movements do not appear to be linked to the sudden arrival of public information in the form of news. Both comments noted that this picture may leave out another source of information -- private information brought into the market through the actions of traders:

Anonymous said...

I don't see any mention of what might be called "trading" news, e.g. a large institutional investor or hedge fund reducing significantly its position in a given stock for reasons unrelated to the stock itself - or at least not synchronized with actual news on the underlying. The move can be linked to internal policy, or just a long-term call on the company which timing has little to do with market news, or lags them quite a bit (like an accumulation of bad news leading to a lagged reaction, for instance). These shocks are frequent even on fairly large cap stocks. They also tend to have lingering effect because the exact size of the move is never disclosed by the investor and can spread over long periods of time (i.e. days), which would explain the smaller beta. Yet this would be a case of "quantum correction", both in terms of timing and agent size, rather than a breakdown of the information hypothesis.

Seconding the previous comment, asset price information comes in a lot more forms than simply "news stories about company X." All market actions contains information. Every time a trade occurs there's some finite probability that it's the action of an informed trader. Every time the S&P moves its a piece of information on single stock with non-zero beta. Every time the price of related companies changes it contains new information.

Both of these comments note the possibility that every single trade taking place in the market (or at least many of them) may be revealing some fragment of private information on the part of whoever makes the trade. In principle, it might be such private information hitting the market which causes large movements (the s-jumps described in the work of Joulin and colleagues).

I think there are several things to note in this regard. The first is that, while this is a sensible and plausible idea, it shouldn't be stretched too far. Obviously, if you simply assume that all trades carry information about fundamentals, then the EMH -- interpreted in the sense that "prices move in response to new information about fundamentals" -- essentially becomes true by definition. After all, everyone agrees that trading drives markets. If all trading is assumed to reveal information, then we've simply assumed the truth of the EMH. It's a tautology.

More useful is to treat the idea as a hypothesis requiring further examination. Certainly some trades do reveal private information, as when a hedge fund suddenly buys X and sells Y, reflecting a belief based on research that Y is temporarily overvalued relative to X. Equally, some trades (as mentioned in the first comment) may reveal no information, simply being carried out for reasons having nothing to do with the value of the underlying stock. As there's no independent way -- that I know of -- to determine if a trade reveals new information or not, we're stuck with a hypothesis we cannot test.

But some research has tried to examine the matter from another angle. Again, consider large price movements -- those in the fat-tailed end of the return distribution. One idea, looking to private information as a cause, proposes that large price movements are caused primarily by large-volume trades by big players such as hedge funds, mutual funds and the like. Some such trades might reveal new information, and some might not, but let's assume for now that most do. In a paper in Nature in 2003, Xavier Gabaix and colleagues argued that you can explain the precise form of the power law tail for the distribution of market returns -- it has an exponent very close to 3 -- from data showing that the size distribution of mutual funds follows a similar power law with an exponent of 1.05. A key assumption in their analysis is that the price impact Δp generated by a trade of volume V is roughly equal to Δp = kV^(1/2).
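The arithmetic behind the Gabaix et al. argument is easy to check numerically: if trade volumes V have a power-law tail (in their analysis, a volume tail exponent near 3/2, inherited from the Zipf-like fund sizes) and impact scales as the square root of volume, price moves inherit a tail exponent of roughly twice the volume exponent, i.e. close to 3. Here's a minimal sketch; the sample sizes, the seed and the impact constant k are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

# Trade volumes with a Pareto tail, P(V > v) ~ v^(-zeta_V).
# zeta_V = 1.5 is the volume tail exponent implied by the Gabaix et al. chain.
zeta_V = 1.5
V = rng.pareto(zeta_V, size=200_000) + 1.0  # shift Lomax draws to classical Pareto

# Square-root price impact: dp = k * sqrt(V)
k = 0.01
dp = k * np.sqrt(V)

# Hill estimator of the tail exponent of dp, using the top 1% of observations.
tail = np.sort(dp)[-2000:]
hill = 1.0 / np.mean(np.log(tail / tail[0]))
print(f"estimated return tail exponent: {hill:.2f}")  # close to 2 * zeta_V = 3
```

The doubling of the exponent comes directly from the impact law: P(Δp > x) = P(V > (x/k)^2), which scales as x^(−2ζ).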

This point of view seems to support the idea that the arrival of new private information, expressed in large trades, might account for the no-news s jumps noted in the Joulin study. (It seems less plausible that such revealed information might account for anything as violent as the 1987 crash, or the general meltdown of 2008.) Taken at face value, then, these arguments seem consistent with the EMH view that even many large market movements reflect changes in fundamentals. But again, this assumes that all or at least most large-volume trades are driven by private information on fundamentals, which may not be the case. The authors of this study themselves don't make any claim about whether large-volume trades really reflect fundamental information. Rather, they note that...

Such a theory where large individual participants move the market is consistent with the evidence that stock market movements are difficult to explain with changes in fundamental values...

But more recent research (here and here, for example) suggests that this explanation doesn't quite hang together, because the assumed relationship between large returns and large-volume trades isn't correct. This analysis is fairly technical, but is based on the study of minute-by-minute NASDAQ trading and shows that, if you consider only extreme returns or extreme volumes, there is no general correlation between returns and volumes. The correlation assumed in the earlier study may be roughly correct on average, but it is not true for extreme events. "Large jumps," the authors conclude, "are not induced by large trading volumes."

Indeed, as the authors of these latter studies point out, people who have valuable private information don't want it to be revealed immediately in one large lump because of the adverse market impact this entails (forcing prices to move against them). A well-known paper by Albert Kyle from 1985 showed how an informed trader with valuable private information, trading optimally, can hide his or her trading in the background of noisy, uninformed trading, supposing it exists. That may be rather too much to believe in practice, but large trades do routinely get broken up and executed as many small trades precisely to minimize impact.
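Kyle's one-period model is simple enough to simulate directly. In equilibrium the insider trades with intensity β = σ_u/√Σ₀ and the market maker sets prices with impact coefficient λ = √Σ₀/(2σ_u); a quick Monte Carlo check (parameter values below are illustrative) confirms that the market maker's linear price rule is indeed the best linear response to the order flow it sees:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# One-period Kyle (1985) model, with illustrative parameters:
Sigma0 = 1.0     # variance of the asset's true value v
sigma_u = 1.0    # std of uninformed ("noise") order flow
p0 = 0.0         # prior mean / initial price

beta = sigma_u / np.sqrt(Sigma0)          # insider's trading intensity
lam = np.sqrt(Sigma0) / (2 * sigma_u)     # market maker's price-impact coefficient

v = rng.normal(p0, np.sqrt(Sigma0), n)    # true values
u = rng.normal(0, sigma_u, n)             # noise trades
x = beta * (v - p0)                       # insider's optimal order
y = x + u                                 # total order flow seen by the market maker

# The market maker's price rule p = p0 + lam * y should match the regression
# of the true value on order flow -- the insider hides in the noise.
slope = np.cov(v, y)[0, 1] / np.var(y)
print(f"regression slope {slope:.3f} vs equilibrium lambda {lam:.3f}")
```

In this equilibrium exactly half of the order-flow variance is informed trading, yet the market maker cannot separate it from noise trade by trade, which is the sense in which the insider's information leaks out only gradually.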

All in all, then, it seems we're left with the conclusion that public or private news does account for some large price movements, but cannot plausibly account for all of them. There are other factors. The important thing, again, is to consider what this means for the most meaningful sense of the EMH, which I take to be the view that market prices reflect fundamental values fairly accurately (because they have absorbed all relevant information and processed it correctly). The evidence suggests that prices often move quite dramatically on the basis of no new information, and that prices may be driven as a result quite far from fundamental values.

The latter papers do propose another mechanism as the driver of routine large market movements. This is a more mechanical process centering on the natural dynamics of orders in the order book. I'll explore this in detail some other time. For now, just a taster from this paper, which describes the key idea:

So what is left to explain the seemingly spontaneous large price jumps? We believe that the explanation comes from the fact that markets, even when they are ‘liquid’, operate in a regime of vanishing liquidity, and therefore are in a self-organized critical state [31]. On electronic markets, the total volume available in the order book is, at any instant of time, a tiny fraction of the stock capitalisation, say 10^−5–10^−4 (see e.g. [15]). Liquidity providers take the risk of being “picked off”, i.e. selling just before a big upwards move or vice versa, and therefore place limit orders quite cautiously, and tend to cancel these orders as soon as uncertainty signals appear. Such signals may simply be due to natural fluctuations in the order flow, which may lead, in some cases, to a catastrophic decay in liquidity, and therefore price jumps. There is indeed evidence that large price jumps are due to local liquidity dry outs.

Tuesday, October 18, 2011

I promise very soon to stop beating on the dead carcass of the efficient markets hypothesis (EMH). It's a generally discredited and ill-defined idea which has done a great deal, in my opinion, to prevent clear thinking in finance. But I happened recently on a defense of the EMH by a prominent finance theorist that is simply a wonder to behold -- its logic a true testament to the powers of human rationalization. It also illustrates the borderline Orwellian techniques to which diehard EMH-ers will resort in clinging to their favourite idea.

The paper was written in 2000 by Mark Rubinstein, a finance professor at University of California, Berkeley, and is entitled "Rational Markets: Yes or No. The Affirmative Case." It is Rubinstein's attempt to explain away all the evidence against the EMH, from excess volatility to anomalous predictable patterns in price movements and the existence of massive crashes such as the crash of 1987. I'm not going to get into too much detail, but will limit myself to three rather remarkable arguments put forth in the paper. They reveal, it seems to me, the mind of the true believer at work:

1. Rubinstein asserts that his thinking follows from what he calls The Prime Directive. This commitment is itself interesting:

When I went to financial economist training school, I was taught The Prime Directive. That is, as a trained financial economist, with the special knowledge about financial markets and statistics that I had learned, enhanced with the new high-tech computers, databases and software, I would have to be careful how I used this power. Whatever else I would do, I should follow The Prime Directive:

Explain asset prices by rational models. Only if all attempts fail, resort to irrational investor behavior.

One has the feeling from the burgeoning behavioralist literature that it has lost all the constraints of this directive – that whatever anomalies are discovered, illusory or not, behavioralists will come up with an explanation grounded in systematic irrational investor behavior.

Rubinstein here is at least being very honest. He's going to jump through intellectual hoops to preserve his prior belief that people are rational, even though (as he readily admits elsewhere in the text) we know that people are not rational. Hence, he's going to approach reality by assuming something that is definitely not true and seeing what its consequences are. Only if all his effort and imagination fails to come up with a suitable scheme will he actually consider paying attention to the messy details of real human behaviour.

What's amazing is that, having made this admission, he then goes on to criticize behavioural economists for having found out that human behaviour is indeed messy and complicated:

The behavioral cure may be worse than the disease. Here is a litany of cures drawn from the burgeoning and clearly undisciplined and unparsimonious behavioral literature:

Reference points and loss aversion (not necessarily inconsistent with rationality):
Endowment effect: what you start with matters
Status quo bias: more to lose than to gain by departing from current situation
House money effect: nouveau riche are not very risk averse

Overconfidence:
Overconfidence about the precision of private information
Biased self-attribution (perhaps leading to overconfidence)
Illusion of knowledge: overconfidence arising from being given partial information
Disposition effect: want to hold losers but sell winners
Illusion of control: unfounded belief of being able to influence events

Statistical errors:
Gambler’s fallacy: need to see patterns when in fact there are none
Very rare events assigned probabilities much too high or too low
Ellsberg Paradox: perceiving differences between risk and uncertainty
Extrapolation bias: failure to correct for regression to the mean and sample size
Excessive weight given to personal or anecdotal experiences over large sample statistics
Overreaction: excessive weight placed on recent over historical evidence
Failure to adjust probabilities for hindsight and selection bias

Miscellaneous errors in reasoning:
Violations of basic Savage axioms: sure-thing principle, dominance, transitivity
Sunk costs influence decisions
Preferences not independent of elicitation methods
Compartmentalization and mental accounting
“Magical” thinking: believing you can influence the outcome when you can’t
Dynamic inconsistency: negative discount rates, “debt aversion”
Tendency to gamble and take on unnecessary risks
Overpricing long-shots
Selective attention and herding (as evidenced by fads and fashions)
Poor self-control
Selective recall
Anchoring and framing biases
Cognitive dissonance and minimizing regret (“confirmation trap”)
Disjunction effect: wait for information even if not important to decision
Time-diversification
Tendency of experts to overweight the results of models and theories
Conjunction fallacy: probability of two co-occurring more probable than a single one

Many of these errors in human reasoning are no doubt systematic across individuals and time, just as behavioralists argue. But, for many reasons, as I shall argue, they are unlikely to aggregate up to affect market prices. It is too soon to fall back to what should be the last line of defense, market irrationality, to explain asset prices. With patience, the anomalies that appear puzzling today will either be shown to be empirical illusions or explained by further model generalization in the context of rationality.

Now, there's sense in the idea that, for various reasons, individual behavioural patterns might not be reflected at the aggregate level. Rubinstein's further arguments on this point aren't very convincing, but at least it's a fair argument. What I find more remarkable is the a priori decision that an explanation based on rational behaviour is taken to be inherently superior to any other kind of explanation, even though we know, empirically, that people are not rational. Surely an explanation based on a realistic view of human behaviour is more convincing and more likely to be correct than one based on unrealistic assumptions (Milton Friedman's fantasies notwithstanding). Even if you could somehow show that market outcomes are what you would expect if people acted as if they were rational (a dubious proposition), I fail to see why that would be superior to an explanation which assumes that people act as if they were real human beings with realistic behavioural quirks -- which they are.

But that's not how Rubinstein sees it. Explanations based on a commitment to taking real human behaviour into account, in his view, have "too much of a flavor of being concocted to explain ex-post observations – much like the medievalists used to suppose there were a different angel providing the motive power for each planet." The people making a commitment to realism in their theories, in other words, are like the medievalists adding epicycles to epicycles. The comparison would seem more plausibly applied to Rubinstein's own rational approach.

2. Rubinstein also relies on the wisdom of crowds idea, but doesn't at all consider the many paths by which a crowd's average assessment of something can go very much awry because individuals are often strongly influenced in their decisions and views by what they see others doing. We've known this going all the way back to the famous 1950s experiments of Solomon Asch on group conformity. Rubinstein pays no attention to that, and simply asserts that we can trust that the market will aggregate information effectively and get at the truth, because this is what group behaviour does in lots of cases:

The securities market is not the only example for which the aggregation of information across different individuals leads to the truth. At 3:15 p.m. on May 27, 1968, the submarine USS Scorpion was officially declared missing with all 99 men aboard. She was somewhere within a 20-mile-wide circle in the Atlantic, far below implosion depth. Five months later, after extensive search efforts, her location within that circle was still undetermined. John Craven, the Navy’s top deep-water scientist, had all but given up. As a last gasp, he asked a group of submarine and salvage experts to bet on the probabilities of different scenarios that could have occurred. Averaging their responses, he pinpointed the exact location (within 220 yards) where the missing sub was found.

Now I don't doubt the veracity of this account, or that crowds, when people make decisions independently and have no biases in their decisions, can be a source of wisdom. But it's hardly fair to cite one example where the wisdom of the crowd worked out without acknowledging the at least equally numerous examples where crowd behaviour leads to very poor outcomes. It's highly ironic that Rubinstein wrote this paper just as the dot-com bubble was collapsing. How could the rational markets have made such mistaken valuations of Internet companies? It's clear that many people judged values at least in part by looking to see how others were valuing them, and when that happens you can forget the wisdom of the crowds.

Obviously I can't fault Rubinstein for not citing these experiments from earlier this year which illustrate just how fragile the conditions are under which crowds make collectively wise decisions, but such experiments only document more carefully what has been obvious for decades. You can't appeal to the wisdom of crowds to proclaim the wisdom of markets without also acknowledging the frequent stupidity of crowds and hence the associated stupidity of markets.
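The fragility is easy to quantify. If N individual errors have standard deviation σ and pairwise correlation ρ, the crowd average has error variance σ²(ρ + (1−ρ)/N): independence gives the familiar 1/√N shrinkage, but any herding (ρ > 0) puts a hard floor of σ√ρ under the error, no matter how large the crowd. A toy simulation (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
truth = 10.0   # the quantity the crowd is trying to estimate
sigma = 2.0    # std of each individual's error
N = 1000       # crowd size

def crowd_error(rho, trials=2000):
    """RMS error of the crowd average when individual errors share correlation rho."""
    # decompose each error into a shared (herding) part and a private part
    common = rng.normal(0, sigma, trials) * np.sqrt(rho)
    private = rng.normal(0, sigma, (trials, N)) * np.sqrt(1 - rho)
    guesses = truth + common[:, None] + private
    return np.sqrt(np.mean((guesses.mean(axis=1) - truth) ** 2))

print(f"independent (rho = 0):  {crowd_error(0.0):.3f}")  # ~ sigma/sqrt(N), tiny
print(f"herded (rho = 0.3):     {crowd_error(0.3):.3f}")  # floored near sigma*sqrt(rho)
```

With even modest herding the thousand-strong crowd is barely better than a handful of individuals, which is exactly the situation in a market where people value assets by watching other people's valuations.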

3. Just one further point. I've pointed out before that defenders of the EMH in their arguments often switch between two meanings of the idea. One is that the markets are unpredictable and hard to beat, the other is that markets do a good job of valuing assets and therefore lead to efficient resource allocations. The trick often employed is to present evidence for the first meaning -- markets are hard to predict -- and then take this in support of the second meaning, that markets do a great job valuing assets. Rubinstein follows this pattern as well, although in a slightly modified way. At the outset, he begins making various definitions of the "rational market":

I will say markets are maximally rational if all investors are rational.

This, he readily admits, isn't true:

Although most academic models in finance are based on this assumption, I don’t think financial economists really take it seriously. Indeed, they need only talk to their spouses or to their brokers.

But he then offers a weaker version:

... what is in contention is whether or not markets are simply rational, that is, asset prices are set as if all investors are rational.

In such a market, investors may not be rational, they may trade too much or fail to diversify properly, but still the market overall may reflect fairly rational behaviour:

In these cases, I would like to say that although markets are not perfectly rational, they are at least minimally rational: although prices are not set as if all investors are rational, there are still no abnormal profit opportunities for the investors that are rational.

This is the version of "rational markets" he then tries to defend throughout the paper. Note what has happened: the definition of the rational market has now been weakened to only say that markets move unpredictably and give no easy way to make a profit. This really has nothing whatsoever to do with the market being rational, and the definition would be improved if the word "rational" were removed entirely. But I suppose readers would wonder why he was bothering if he said "I'm going to defend the hypothesis that markets are very hard to predict and hard to beat" -- does anyone not believe that? Indeed, this idea of a "minimally rational" market is equally consistent with a "maximally irrational" market. If investors simply flipped coins to make their decisions, then there would also be no easy profit opportunities, as you'd have a truly random market.
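The coin-flipping market is worth making concrete, because it shows how little "minimal rationality" actually demands. A market driven by purely random trades produces serially uncorrelated returns, and hence no profitable prediction rule, with no rationality anywhere in sight:

```python
import numpy as np

rng = np.random.default_rng(7)

# "Maximally irrational" market: each period the price moves up or down
# purely at random (traders flipping coins).
steps = rng.choice([-1.0, 1.0], size=100_000)
prices = np.cumsum(steps)

# Returns are serially uncorrelated, so no trading rule based on past
# prices earns predictable profits -- the market is "hard to beat"
# without being rational in any meaningful sense.
autocorr = np.corrcoef(steps[:-1], steps[1:])[0, 1]
print(f"lag-1 return autocorrelation: {autocorr:.4f}")  # ~ 0
```

So "no abnormal profit opportunities" is equally consistent with perfect information processing and with none at all, which is why it cannot carry the weight Rubinstein puts on it.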

Why not just say "the markets are hard to predict" hypothesis? The reason, I suspect, is that this idea isn't very surprising and, more importantly, doesn't imply anything about markets being good or accurate or efficient. And that's really what EMH people want to conclude -- leave the markets alone because they are wonderful information processors and allocate resources efficiently. Trouble is, you can't conclude that just from the fact that markets are hard to beat. Trying to do so with various redefinitions of the hypothesis is like trying to prove that 2 = 1. Watching the effort, to quote physicist John Bell in another context, "...is like watching a snake trying to eat itself from the tail. It becomes embarrassing for the spectator long before it becomes painful for the snake."

Monday, October 17, 2011

High frequency trading makes for markets that produce enormous volumes of data. Such data make it possible to test some of the old chestnuts of market theory -- the efficient markets hypothesis, in particular -- more carefully than ever before. Studies in the past few years show quite clearly, it seems to me, that the EMH is very seriously misleading and isn't really even a good first approximation.

Let me give a little more detail. In a recent post I began a somewhat leisurely exploration of considerable evidence which contradicts the efficient markets idea. As the efficient markets hypothesis (the "weak" version, at least) claims, market prices fully reflect all publicly available information. When new information becomes available, prices respond. In the absence of new information, prices should remain more or less fixed.

Striking evidence against this view comes from studies (now ten to twenty years old) showing that markets often make quite dramatic movements even in the absence of any news. I looked at some older studies along these lines in the last post, but stronger evidence comes from studies using electronic news feeds and high-frequency stock data. Are sudden jumps in prices in high frequency markets linked to the arrival of new information, as the EMH says? In a word -- no!

The idea in these studies is to look for big price movements which, in a sense, "stand out" from what is typical, and then see if such movements might have been caused by some "news". A good example is this study by Armand Joulin and colleagues from 2008. Here's how they proceeded. Suppose R(t) is the minute by minute return for some stock. You might take the absolute value of these returns, average them over a couple hours and use this as a crude measure -- call it σ -- of the "typical size" of one-minute stock movements over this interval. An unusually big jump over any minute-long interval will be one for which the magnitude of R is much bigger than σ.

To make this more specific, Joulin and colleagues defined "s jumps" as jumps for which the ratio |R/σ| > s. The value of s can be 2 or 10 or anything you like. You can look at the data for different values of s, and the first thing the data shows -- and this isn't surprising -- is a distinctive pattern for the probability of observing jumps of size s. It falls off with increasing s, meaning that larger jumps are less likely, and the mathematical form is very simple -- a power law with P(s) proportional to s^−4, especially as s becomes large (from 2 up to 10 and beyond). This is shown in the figure below (the upper curve):

This pattern reflects the well known "fat tailed" distribution of market returns, with large returns being much more likely than they would be if the statistics followed a Gaussian curve. Translating the numbers into daily events, s jumps of size s = 4 turn out to happen about 8 times each day, while larger jumps of s = 8 occur about once every day and one-half (this is true for each stock).
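The jump-detection procedure is straightforward to sketch. The version below is a simplification of the Joulin et al. method (the window length, threshold and distributions are illustrative, not theirs): it flags minutes where the return exceeds s times a trailing average of recent absolute returns, and shows why fat-tailed returns generate far more such jumps than Gaussian returns of the same overall scale:

```python
import numpy as np

def s_jumps(returns, s=4.0, window=120):
    """Indices of minutes where |R| exceeds s times the local average |R|.

    `window` is the number of minutes in the local volatility proxy
    (a couple of hours of one-minute returns, as in the text).
    """
    absr = np.abs(returns)
    # trailing mean of |R| as the crude volatility measure sigma
    kernel = np.ones(window) / window
    sigma = np.convolve(absr, kernel, mode="full")[:len(absr)]
    sigma[:window] = absr[:window].mean()  # warm-up: use the early global mean
    return np.where(absr > s * sigma)[0]

# Fat-tailed minute returns (Student-t, 3 degrees of freedom) produce many
# more s-jumps than Gaussian returns with the same standard deviation.
rng = np.random.default_rng(3)
n = 100_000
fat = rng.standard_t(3, n)
gauss = rng.normal(0, fat.std(), n)
print(len(s_jumps(fat)), "fat-tailed jumps vs", len(s_jumps(gauss)), "Gaussian")
```

The point of normalizing by a local σ, rather than a global one, is that markets have slow-moving volatility regimes; a jump is something that stands out even against the recent local background.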

Now the question is -- are these jumps linked to the announcement of some new information? To test this idea, Joulin and colleagues looked at various news feeds including feeds from Dow Jones and Reuters covering about 900 stocks. These can be automatically scanned for mention of any specific company, and then compared to price movements for that company. The first thing they found is that, on average, a new piece of news arrives for a company about once every 3 days. Given that a stock on average experiences one jump every day and one-half, this immediately implies an imbalance between the number of stock movements and the number of news items. There's not enough news to cause the jumps observed. Stocks move -- indeed, jump -- too frequently.

Conclusion: News sometimes but not always causes market movements, and significant market movements are sometimes but not always caused by news. The EMH is wrong, unless you want to make further excuses that there could have been news that caused the movement, and we just don't recognize it or haven't yet figured out what it is. But that seems like simply positing the existence of further epicycles.

But another part of the Joulin et al. study is even more interesting. Having found a way to divide price jumps into two categories -- A) those caused by news (clearly linked to some item in a news feed) and B) those unrelated to any news -- one can then look for systematic differences in the way the market settles down after a jump. The data show that the volatility of prices, just after a jump, becomes quite high; it then relaxes over time back to the average volatility before the jump. But the relaxation works differently depending on whether the jump was of type A or B: caused by news or not caused by news. The figure below shows how the volatility relaxes back to the norm, first for jumps linked to news, and second for jumps not linked to news. The latter shows a much slower relaxation:

As the authors comment on this figure,

In both cases, we find (Figure 5) that the relaxation of the excess-volatility follows a power-law in time σ(t) − σ(∞) ∝ t^−β (see also [22, 23]). The exponent of the decay is, however, markedly different in the two cases: for news jumps, we find β ≈ 1, whereas for endogenous jumps one has β ≈ 1/2. Our results are compatible with those of [22], who find β ≈ 0.35.

Of course, β ≈ 1/2 implies a much slower relaxation back to the norm (whatever that is!) than does β ≈ 1. Hence, it seems that the market takes a longer time to get back to normal after a no-news jump, whereas it goes back to normal quite quickly after a news-related jump.
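A quick back-of-the-envelope shows how big the difference between the two exponents really is. If the excess volatility decays as t^−β, the time needed for it to fall to a fraction f of its value at t = 1 is f^(−1/β):

```python
# Time (in units where the excess volatility is 1 at t = 1) for the excess
# sigma(t) - sigma(inf) ~ t^(-beta) to fall to a fraction f of that value.
f = 0.1  # wait until only 10% of the excess volatility remains
for beta in (1.0, 0.5):
    print(f"beta = {beta}: 10% level reached at t = {f ** (-1 / beta):.0f}")
```

So where a news-driven jump (β ≈ 1) has shed 90% of its excess volatility by t = 10, a no-news jump (β ≈ 1/2) takes until t = 100 -- an order of magnitude longer.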

No one knows why this should be, but Joulin and colleagues made the quite sensible speculation that a jump clearly related to news is not really surprising, and certainly not unnerving. It's understandable, and traders and investors can decide what they think it means and get on with their usual business. In contrast, a no-news event -- think of the Flash Crash, for example -- is very different. It is a real shock and presents a lingering unexplained mystery. It is unnerving and makes investors uneasy. The resulting uncertainty registers in high volatility.

What I've written here only scratches the surface of this study. For example, one might object that lots of news isn't just linked to the fate of one company, but pertains to larger macroeconomic factors. It may not even mention a specific company but point to a likely rise in the prices of oil or semiconductors, changes influencing whole sectors of the economy and many stocks all at once. Joulin and colleagues tried to take this into account by looking for correlated jumps in the prices of multiple stocks, and indeed changes driven by this kind of news do show up quite frequently. But even accounting for this more broad-based kind of news, they still found that a large fraction of the price movements of individual stocks do not appear to be linked to anything coming in through news feeds. As they concluded in the paper:

Our main result is indeed that most large jumps... are not related to any broadcasted news, even if we extend the notion of ‘news’ to a (possibly endogenous) collective market or sector jump. We find that the volatility pattern around jumps and around news is quite different, confirming that these are distinct market phenomena [17]. We also provide direct evidence that large transaction volumes are not responsible for large price jumps, as also shown in [30]. We conjecture that most price jumps are in fact due to endogenous liquidity micro-crises [19], induced by order flow fluctuations in a situation close to vanishing outstanding liquidity.

Their suggestion in the final sentence is intriguing and may suggest the roots of a theory going far beyond the EMH. I've touched before on early work developing this theory, but there is much more to be said. In any event, however, data emerging from high-frequency markets backs up everything found before -- markets often make violent movements which have no link to news. Markets do not just respond to new information. Like the weather, they have a rich -- and as yet mostly unstudied -- internal dynamics.

Friday, October 14, 2011

I've posted before on macroeconomic models that try to go beyond the "rational expectations" framework by assuming that the agents in an economy are different (they have heterogeneous expectations) and are also not necessarily rational. This approach seems wholly more realistic and believable to me.

In a recent comment, however, ivansml pointed me to this very interesting paper from 2009, which I've enjoyed reading. What the paper does is explore what happens in some of the common rational expectations models if you suppose that agents' expectations aren't formed rationally but rather on the basis of some learning algorithm. The paper shows that learning algorithms of a certain kind lead to the same equilibrium outcome as the rational expectations viewpoint. This IS interesting and seems very impressive. However, I'm not sure it's as interesting as it seems at first.

The reason is that the learning algorithm is indeed of a rather special kind. Most of the models studied in the paper, if I understand correctly, suppose that agents in the market already know the right mathematical form they should use to form expectations about prices in the future. All they lack is knowledge of the values of some parameters in the equation. This is a little like assuming that people who start out trying to learn the equations for, say, electricity and magnetism, already know the right form of Maxwell's equations, with all the right space and time derivatives, though they are ignorant of the correct coefficients. The paper shows that, given this assumption in which the form of the expectations equation is already known, agents soon evolve to the correct rational expectations solution. In this sense, rational expectations emerges from adaptive behaviour.
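A stripped-down example makes this logic concrete. Consider a toy self-referential price model (my own illustration, not one from the paper): p_t = μ + αE_{t−1}[p_t] + ε_t with |α| < 1, whose rational expectations equilibrium forecast is μ/(1−α). Agents who are handed this functional form, but not the parameters, and who simply forecast with the running mean of past prices, converge to exactly that equilibrium:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy self-referential model: p_t = mu + alpha * forecast + noise.
# Agents know the form of the law of motion but not its parameters,
# and learn by forecasting with the running mean of observed prices.
mu, alpha = 2.0, 0.5   # illustrative parameters; need |alpha| < 1
T = 20_000

forecast = 0.0         # agents' initial (ignorant) forecast
for t in range(1, T + 1):
    p = mu + alpha * forecast + rng.normal(0, 0.1)
    forecast += (p - forecast) / t   # recursive update of the running mean

ree = mu / (1 - alpha)  # the rational-expectations equilibrium forecast
print(f"learned forecast {forecast:.3f} vs REE {ree:.3f}")
```

This is the sense in which "rational expectations emerges from learning" -- but notice how much structure the agents were given at the start: the entire law of motion, minus two numbers.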

I don't find this very convincing as it makes the problem far too easy. More plausible, it seems to me, would be to assume that people start out with not much knowledge at all of how future prices will most likely be linked by inflation to current prices, make guesses with all kinds of crazy ideas, and learn by trial and error. Given the difficulty of this problem, and the lack even among economists themselves of great predictive success, this would seem more reasonable. However, it is also likely to lead to far more complexity in the economy itself, because a broader class of expectations will lead to a broader class of dynamics for future prices. In this sense, the models in this paper assume away any kind of complexity from a diversity of views.

To be fair to the authors of the paper, they do spell out their assumptions clearly. They state in fact that they assume that people in their economy form views on likely future prices in the same way modern econometricians do (i.e. using the very same mathematical models). So the gist seems to be that in a world in which all people think like economists and use the equations of modern econometrics to form their expectations, then, even if they start out with some of the coefficients "mis-specified," their ability to learn to use the right coefficients can drive the economy to a rational expectations equilibrium. Does this tell us much?

I'd be very interested in others' reactions to this. I do not claim to know much of anything about macroeconomics. Indeed, one of the nice things about this paper is its clear introduction to some of the standard models. This in itself is quite illuminating. I hadn't realized that the standard models are not any more complex than linear first-order time difference equations (if I have this right) with some terms including expectations. I had seen these equations before and always thought they must be toy models just meant to illustrate the far more complex and detailed models used in real calculations and located in some deep economic book I haven't yet seen, but now I'm not so sure.

I just finished reading this wonderful short review of game theory (many thanks to ivansml for pointing this out to me) and its applications and limitations by Martin Shubik. It's a little old -- it appeared in the journal Complexity in 1998 -- but offers a very broad perspective which I think still holds today. Game theory in the pure sense generally views agents as coming to their strategies through rational calculation; this perspective has had huge influence in economics, especially in the context of relatively simple games with few players and not too many possible strategies. This part of game theory is well developed, although Shubik suggests there are probably many surprises left to learn.

Where the article really comes alive, however, is in considering the limitations to this strictly rational approach in games of greater complexity. In physics, the problem of two rigid bodies in gravitational interaction can be solved exactly (ignoring radiation, of course), but you get generic chaos as soon as you have three bodies or more. The same is true, Shubik argues, in game theory. Extend the number of players above three and as the number of possible permutations of strategies proliferates it is no longer plausible to assume that agents act rationally. The decision problems become too complex. One might still try to search for optimal N player solutions as a guide to what might be possible, but the rational agent approach isn't likely to be profitable as a guide to the likely behaviour and dynamics in such complex games. I highly recommend Shubik's short article to anyone interested in game theory, and especially its application to real world problems where people (or other agents) really can't hope to act on the basis of rational calculation, but instead have to use heuristics, follow hunches, and learn adaptively as they go.

Some of the points Shubik raises find perfect illustration in a recent study (I posted on it here) of typical dynamics in two-player games when the number of possible strategies gets large. Choose the structure of the games at random and the most likely outcome is a rich ongoing evolution of strategic behaviour which never settles down into any equilibrium. But these games do seem to show characteristic dynamical behaviour such as "punctuated equilibrium" -- long periods of relative quiescence which get broken apart sporadically by episodes of tumultuous change -- and clustered volatility -- the natural clustering together of periods of high variability. These qualitative aspects appear to be generic features of the non-equilibrium dynamics of complex games. Interesting that they show up generically in markets as well.

When problems are too complex -- which is typically the case -- we try to learn and adapt rather than "solving" the problem in any sense. Our learning itself may also never settle down into any stable form, but continually change as we find something that works well for a time, and then suddenly find it fails and we need to learn again.

Tuesday, October 11, 2011

In a recent post I commented on the "fetish of rationality" present in a great deal of mathematical economic theory. Agents in the theories are often assumed to have super-human reasoning abilities and to determine their behaviour and expectations solely through completely rational calculation. In comments, Relja suggested that maybe I'd gone too far, and that economists' version of rationality isn't all that extreme:

I think critiques like this about rationality in economics miss the point. The rationality assumed in economics is concerned with general trends; generally people pursue pleasure, not pain (according to their own utility functions), they prefer more money to less (an expanded budget constraint leaves them on a higher indifference curve, thus better off), they have consistent preferences (when they're in the mood for chocolate, they're not going to choose vanilla). Correspondingly, firms have the goal of profit maximization - they produce products that somebody will want to buy or they go out of business. Taking the rationality assumption to its "umpteenth" iteration is really quite irrational in itself. A consumer knows that spending 6 years to calculate the mathematically optimal choice of ice-cream is irrational. An economist accordingly knows the same thing. And although assumptions are required for modelling economic scenarios (micro or macro), I seriously doubt that any serious economist would make assumptions that infer such irrationality. :).

I think Relja expressed a well-balanced perspective, has learned some economics in detail, and has taken away from it some conclusions that are, all in all, pretty sound. Indeed, people are goal oriented, don't (usually) prefer pain, and businesses do try to make profits (although whether they try to 'maximize' is an open question). If economists were really just following these reasonable ideas, I would have no problem.

But I also think the problem is worse than Relja may realize. The use of rationality assumptions is more extreme than this, and also decisive in some of the most important areas of economic theory, especially in macroeconomics. A few days ago, John Kay offered this very long and critical essay on the form of modern economic theory. It's worth a read all the way through, but in essence, Kay argues that economics is excessively based on logical deduction of theories from a set of axioms, one of which (usually) is the complete rationality of economic agents:

Rigour and consistency are the two most powerful words in economics today.... They have undeniable virtues, but for economists they have particular interpretations. Consistency means that any statement about the world must be made in the light of a comprehensive descriptive theory of the world. Rigour means that the only valid claims are logical deductions from specified assumptions. Consistency is therefore an invitation to ideology, rigour an invitation to mathematics. This curious combination of ideology and mathematics is the hallmark of what is often called ‘freshwater economics’ – the name reflecting the proximity of Chicago, and other centres such as Minneapolis and Rochester, to the Great Lakes.

Consistency and rigour are features of a deductive approach, which draws conclusions from a group of axioms – and whose empirical relevance depends entirely on the universal validity of the axioms.

Kay isn't quite as explicit as he might have been, but economist Michael Woodford, in a comment on Kay's argument, goes further in spelling out what Kay finds most objectionable -- the so-called rational expectations framework, proposed originally by John Muth and made central to macroeconomics by Robert Lucas, which forms the foundation of today's DSGE (dynamic stochastic general equilibrium) models. A core assumption of such models is that all individuals in the economy have rational expectations about the future, and that such expectations affect their current behaviour.

Now, if this meant something like Relja's comment suggests it might -- that people are simply forward looking, as we know they are -- this would be fine. But it's not. The form this assumption ultimately takes in these models is to suppose that everyone in the economy has fully rational expectations, in that they form their expectations in accordance with the best and most accurate economic models conceivable, even if solving those models might require considerable mathematics and computation (and knowledge of everyone's expectations). As Woodford puts it in his comment,

It has been standard for at least the past three decades to use models in which not only does the model give a complete description of a hypothetical world, and not only is this description one in which outcomes follow from rational behavior on the part of the decision makers in the model, but the decision makers in the model are assumed to understand the world in exactly the way it is represented in the model. More precisely, in making predictions about the consequences of their actions (a necessary component of an accounting for their behavior in terms of rational choice), they are assumed to make exactly the predictions that the model implies are correct (conditional on the information available to them in their personal situation).

This postulate of “rational expectations,” as it is commonly though rather misleadingly known, is the crucial theoretical assumption behind such doctrines as “efficient markets” in asset pricing theory and “Ricardian equivalence” in macroeconomics.

It is precisely here that modern economics takes the assumption of rationality much too far, merely for the sake of mathematical and theoretical rigour. Do economists really believe people form their expectations in this way? It's hard to imagine they could, as they live the rest of their lives among people who do nothing of the sort. But the important question isn't what they really believe; it is what they base their theories on -- theories which then get used by governments in policy making. Sadly, these assumptions, despite having essentially zero plausibility, remain in the key models. Woodford again,

[The rational expectations assumption] is often presented as if it were a simple consequence of an aspiration to internal consistency in one’s model and/or explanation of people’s choices in terms of individual rationality, but in fact it is not a necessary implication of these methodological commitments. It does not follow from the fact that one believes in the validity of one’s own model and that one believes that people can be assumed to make rational choices that they must be assumed to make the choices that would be seen to be correct by someone who (like the economist) believes in the validity of the predictions of that model. Still less would it follow, if the economist herself accepts the necessity of entertaining the possibility of a variety of possible models, that the only models that she should consider are ones -- in each of which everyone in the economy is assumed to understand the correctness of that particular model, -- rather than entertaining beliefs that might (for example) be consistent with one of the other models in the set that she herself regards as possibly correct.

This is the sense in which hyper-rationality really does enter into economic theories. It's still pervasive, and still indefensible. It would be infinitely preferable if macro-economists such as Lucas and his followers (one of whom, Thomas Sargent, was perversely and outrageously just awarded the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel) would abandon it.

**UPDATE**

Blogger sometimes doesn't seem to register comments. Email alerted me to a sharp criticism by ivansml of some of the points I made, but the comment isn't, at least for my browser, yet showing up. Just so it doesn't get lost, ivansml said:

Every assumption is false when understood literally, including rational expectations. The important thing is whether people behave as if they had rational expectations - and answer to that will fortunately depend on particular model and data, not on emotional arguments and expressive vocabulary.

By the way, if you reject RE but accept that expectations matter and should be forward-looking, how do you actually propose to model them? One possible alternative is to have agents who estimate laws of motion from past data and continously update their estimates, which is something that macroeconomists have actually investigated before. And guess what - this process will often converge to rational expectations equilibrium.

Finally, the comment about Nobel Prize (yeah, it's not real Nobel, whatever) for Sargent is a sign of ignorance. Sargent has published a lot on generalizations or relaxations of RE, including the learning literature mentioned above, literature on robustness (where agents distrust their model and choose actions which are robust to model misspecifications) and even agent-based models. In addition to that, the prize citation focuses on his empirical contributions (i.e. testing theories against data). This does not seem like someone who is religiously devoted to "hyper-rationality" and ideology.

Two points in response:

1. Yes, the point is precisely to include expectations, but to model their formation in some more behaviourally realistic way, through learning algorithms as suggested. I am aware of such work and think it is very important. Indeed, the latter portion of this post from earlier this year looked precisely at this and considered a recent review of work in this area by Cars Hommes and others. The idea is to drop the assumption that everyone forms their expectations identically, to recognize that learning is important, that there may be systematic biases and so on. As ivansml notes, there are circumstances in which the model may settle into a rational expectations equilibrium. But there are also many in which it does not. My hunch -- not backed by any evidence that I can point to readily -- is that the rational expectations equilibrium will become increasingly unlikely as the decisions faced by agents in the model become increasingly complex. Very possibly the system won't settle into any equilibrium at all.

But I thank ivansml for pointing this out. It is certainly the case that expectations matter, and they should be brought into theory in some plausible and convincing way. Just to finish on this point, here is a quote from the Hommes review article, suggesting that the RE equilibrium doesn't come up very often:

Learning to forecast experiments are tailor-made to test the expectations hypothesis, with all other model assumptions computerized and under control of the experimenter. Different types of aggregate behavior have been observed in different market settings. To our best knowledge, no homogeneous expectations model [rational or irrational] fits the experimental data across different market settings. Quick convergence to the RE-benchmark only occurs in stable (i.e. stable under naive expectations) cobweb markets with negative expectations feedback, as in Muth's (1961) seminal rational expectations paper. In all other market settings persistent deviations from the RE fundamental benchmark seem to be the rule rather than the exception.
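The cobweb case Hommes mentions is simple enough to simulate in a few lines. Below is a minimal sketch in which traders use naive expectations (tomorrow's price equals today's); the demand and supply parameters are invented for illustration, not taken from the experiments. When the feedback is stable, prices converge to the rational expectations benchmark; when it isn't, they never settle:

```python
# Cobweb market: demand a - b*p_t, supply c + d*E[p_t]; market clearing gives
# p_t = (a - c - d*E[p_t]) / b. Naive expectations: E[p_t] = p_{t-1}.
# Parameter values are illustrative, not taken from the experiments.
def simulate_cobweb(a, b, c, d, p0, steps=200):
    prices = [p0]
    for _ in range(steps):
        expected = prices[-1]  # traders expect yesterday's price
        prices.append((a - c - d * expected) / b)
    return prices

# The rational expectations price solves p* = (a - c - d*p*) / b.
def re_price(a, b, c, d):
    return (a - c) / (b + d)

# Stable feedback (d/b < 1): naive expectations converge to the RE price.
stable = simulate_cobweb(a=10, b=2, c=1, d=1, p0=5)
# Unstable feedback (d/b > 1): oscillations grow and never settle.
unstable = simulate_cobweb(a=10, b=1, c=1, d=2, p0=5)

print(round(stable[-1], 4), round(re_price(10, 2, 1, 1), 4))
```

The whole difference between the two runs is the ratio d/b -- exactly the kind of stability condition the quote refers to when it distinguishes "stable under naive expectations" markets from all the others.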

2. On his second point, about Thomas Sargent, I plead guilty. ivansml is right -- Sargent's work is not as one-dimensional as my comments made it seem. Indeed, I had been looking into it over the past weekend for different reasons and had noticed that it is fairly wide-ranging; he does deserve credit for trying to relax RE assumptions. (Although he did seem a little snide in one interview I read, suggesting that mainstream macro-economists were not at all surprised by the recent financial crisis.)

Monday, October 10, 2011

Vance Packard, American journalist of some 50 years ago, quoted in Satyajit Das' book Extreme Money:

A toothbrush does little but clean teeth. Alcohol is important mostly for making people more or less drunk. An automobile can take one reliably to a destination and back... There being so little to be said, much must be invented. Social distinction must be associated with a house... sexual fulfillment with a particular... automobile, social acceptance with... a mouthwash, etc. We live surrounded by a systematic appeal to a dream world which all mature, scientific reality would reject. We, quite literally, advertise our commitment to immaturity, mendacity and profound gullibility. It is the hallmark of our culture.

Friday, October 7, 2011

It's a key assertion of the Efficient Markets Hypothesis that markets move because of news and information. When new information becomes available, investors quickly respond by buying and selling to register their views on the implications of that information. This is obviously partially true -- new information, or what seems like information, does impact markets.

Yesterday, for example, US Treasury Secretary Timothy Geithner said publicly that, despite the ominous economic and financial climate, there is “absolutely” no chance that another United States financial institution will fail. (At least that's what the New York Times says he said; they don't give a link to the speech.) That was around 10 a.m. Pretty much immediately (see the figure below) the value of Morgan Stanley stock jumped upwards by about 4% as, presumably, investors piled into this stock, now believing that the government would step in to prevent any possible Morgan Stanley collapse in the near future. A clear case of information driving the market:

Of course, this is just one example, and one can find further examples, hundreds every day. Information moves markets. Academics in finance have made careers by documenting this fact in so-called "event studies" -- looking at the consequences for stock prices of mergers, for example.

But I'm not sure how widely it is appreciated that the Efficient Markets Hypothesis doesn't only say that information moves markets. It also requires that markets should ONLY move when new information becomes available. If rational investors have already taken all available information into account and settled on their portfolios, then there's no reason to change in the absence of new information. Is this true? The evidence -- and there is quite a lot of it -- suggests very strongly that it is not. Markets move all the time, and sometimes quite violently, even in the total absence of any new information.

This is important because it suggests that markets have rich internal dynamics -- they move on their own without any need for external shocks. Theories developed to model such dynamics yield markets with realistic statistical fluctuations, including abrupt rallies or crashes. I'm going to explore some of these models in detail at some point, but I wanted first to explore a little of the evidence which really does nail the case against the EMH as an adequate picture of markets "in efficient equilibrium."

Anyone watching markets might guess that they fluctuate rather more strongly than any news or information could possibly explain. But research has made this case in quantitative terms as well, beginning with a famous paper by Robert Shiller back in 1981. If you believe the efficient markets idea, then the value of a stock ought to remain roughly equal to the present value of all the future dividends a stock owner can anticipate getting from it. Or, a little more technically, real stock prices should, in Shiller's words, "equal the present value of rationally expected or optimally forecasted future real dividends discounted by a constant real discount rate." Data he studied suggest this isn't close to being true.

For example, the two figures below from his paper plot the real price P of the S&P Index (with the upward growth trend removed) and of the Dow Jones Index versus the actual discounted value P* of dividends those stocks later paid out. The solid lines for the real prices bounce up and down quite wildly while the "rational" prices based on dividends stay fairly smooth (dividends don't fluctuate so strongly, and calculating P* involves taking a moving average over many years, smoothing fluctuations even further).

These figures show what has come to be known as "excess volatility" -- excess movement in markets over and above what you should expect on the basis of markets moving on information alone.
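Shiller's comparison is easy to sketch with synthetic data. Everything below -- the dividend process, the discount rate, the size of the non-fundamental swings -- is invented for illustration; the point is only that a smooth dividend stream implies a smooth rational price P*, so the extra jumpiness in actual prices is "excess":

```python
import random

random.seed(0)

# Synthetic dividend stream: a slowly varying random walk. All numbers here
# are invented for illustration -- this is not Shiller's actual data.
T, r = 200, 0.05
dividends = [1.0]
for _ in range(T - 1):
    dividends.append(max(0.1, dividends[-1] + random.gauss(0, 0.02)))

# Ex-post "rational" price: P*_t = (d_{t+1} + P*_{t+1}) / (1 + r), solved
# backward from a terminal perpetuity value d_T / r.
p_star = [0.0] * T
p_star[-1] = dividends[-1] / r
for t in range(T - 2, -1, -1):
    p_star[t] = (dividends[t + 1] + p_star[t + 1]) / (1 + r)

# "Actual" prices: the same fundamentals plus large non-fundamental swings.
prices = [ps + random.gauss(0, 5) for ps in p_star]

def std(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

# Because dividends are smooth, P* barely moves, while the price series
# bounces around far more -- excess volatility in miniature.
print(std(p_star) < std(prices))
```

The backward recursion is the same present-value logic as Shiller's formula; discounting and averaging over many future dividends is exactly what makes P* so smooth relative to prices.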

Further evidence that it's more than information driving markets comes from studies specifically looking for correlations between news events and market movements. On Monday, October 19, 1987, the Dow Jones Industrial Average fell by more than 22% in one day. Given a conspicuous lack of any major news on that day, economists David Cutler, James Poterba and Larry Summers (yes, that Larry Summers) were moved soon after to wonder if this was a one-off weird event or if violent movements in the absence of any plausible news might have been common in history. They found that they are. Their study from 1989 looked at news and price movements in a variety of ways, but the most interesting results concern news on the days of the 50 largest single-day movements since the Second World War. A section of their table below shows the date of the event, how much the market moved, and the principal reasons given in the press for why it moved so much:

Within this list, you find some events that seem to fit the EMH idea of information as the driving force. The market fell 6.62 percent on the day in 1955 on which Eisenhower had a heart attack. The outbreak of the Korean War knocked 5.38% off the market. But for many of the events the press struggled mightily to find any plausible causal news. When markets fell 6.73% on September 3, 1946, the press even admitted that there was "No basic reason for the assault on prices."

[Curiously, I seem to have found what looks like a tiny error in this table. It lists the outbreak of the Korean War (25 June, 1950) as explaining the big movement one day later on June 26, 1950. But then it lists "Korean war continues" as an explanation for a movement on June 19, 1950, five days before the war even started!]

Cutler and colleagues ultimately concluded that the arrival of news or information could only explain about one half of the actual observed variation in stock prices. In other words, the EMH leaves out of the picture something roughly as important as investors' response to new information.

More recently, in 2000, economist Ray Fair of Yale University undertook a similar study which reached quite similar conclusions. His abstract explains what he found quite succinctly:

Tick data on the S&P 500 futures contract and newswire searches are used to match events to large five minute stock price changes. 58 events that led to large stock price changes are identified between 1982 and 1999, 41 of which are directly or indirectly related to monetary policy. Many large five minute stock price changes have no events associated with them.
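The first step of such a study -- flagging unusually large five-minute moves -- is straightforward to sketch. The prices and the 0.5% cutoff below are made up for illustration; Fair worked with tick data on the S&P 500 futures contract, not with this toy series:

```python
# Flag "large" five-minute moves: any interval whose absolute return exceeds
# a threshold. The prices and the 0.5% cutoff are invented for illustration.
def large_moves(prices, threshold=0.005):
    events = []
    for t in range(1, len(prices)):
        ret = prices[t] / prices[t - 1] - 1  # simple five-minute return
        if abs(ret) > threshold:
            events.append((t, round(ret, 4)))
    return events

five_minute_prices = [100.0, 100.1, 100.05, 99.2, 99.25, 99.3, 100.4]
print(large_moves(five_minute_prices))  # flags the drop at t=3, jump at t=6
```

The hard part of the study is what comes next: searching the newswires for anything that happened around each flagged interval, and recording the many moves for which nothing turns up.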

All in all, not a lot of evidence supporting the EMH view on the exclusive role of information in driving markets. Admittedly, these studies all have a semi-qualitative character based on history, linear regressions and other fairly crude techniques. Still, they make a fairly convincing case.

In the past few years, some physicists have taken this all a bit further using modern news feeds. More on that in the second part of this post. The conclusion doesn't change, however -- the markets appear to have a rich world of internal dynamics even in the absence of any new information arriving from outside.

Wednesday, October 5, 2011

An Op-Ed I wrote for Bloomberg on high-frequency trading finally appeared today after a short delay due to a Bloomberg backlog. I meant for there to be a link in the article to further discussion here of some of the details of Andrew Haldane's argument, but that link is not yet in place at Bloomberg because of a mix up. It should be fixed later today. Meanwhile, for anyone visiting here from Bloomberg the further discussion I've given on Haldane's work can be found here.

Tuesday, October 4, 2011

Economic theory relies very heavily on the notion of equilibrium. This is true both of models of competitive equilibrium -- exploring how exchange can in principle lead to an optimal allocation of resources -- and, more generally, of game theory, which explores stable Nash equilibria in strategic games.

One thing physicists find wholly unsatisfying about equilibrium in either case is economists' near total neglect of the crucial problem of whether the agents in such models might ever plausibly find an equilibrium. You can assume perfectly rational agents and prove the existence of an equilibrium, but this may be an irrelevant mathematical exercise. Realistic agents with finite reasoning powers might never be able to learn their way to such a solution.

More likely, at least in many cases, is that less-than-perfectly rational agents, even if they're quite clever at learning, may never find their way to a neat Nash equilibrium solution, but instead go on changing and adapting and responding to one another in a way that leads to ongoing chaos. Naively, this would seem especially likely in any situation -- think financial markets, or any economy as a whole -- in which the number of possible strategies is enormous and it is simply impossible to "solve the problem" of what to do through perfect rational reflection (no one plays chess by working out the Nash equilibrium).

A brilliant illustration of this insight comes in a new paper by Tobias Galla and Doyne Farmer. This is the first study I've seen (though there may well be others) which addresses this matter of the relevance of equilibrium in complex, high-dimensional games in a generic way. The conclusion is as important as it is intuitively reasonable:

Here we show that if the players use a standard approach to learning, for complicated games there is a large parameter regime in which one should expect complex dynamics. By this we mean that the players never converge to a fixed strategy. Instead their strategies continually vary as each player responds to past conditions and attempts to do better than the other players. The trajectories in the strategy space display high-dimensional chaos, suggesting that for most intents and purposes the behavior is essentially random, and the future evolution is inherently unpredictable.

In other words, in games of sufficient complexity, the insights coming from equilibrium analyses just don't tell you much. If the agents learn in a plausible way, they never find any equilibrium at all, and the evolution of strategic behaviours simply carries on indefinitely. The system remains out of equilibrium.

A little more detail. Their basic approach is to consider general two-player games between, say, Alice and Bob. Let each of the two players have N possible strategies to choose from. The payoff matrices for any such game are N×N matrices (one for each player) giving the payoffs they get for each pair of strategies being played. The cute idea in this analysis is to choose the game randomly by selecting the elements of the payoff matrices for both Alice and Bob from a normal distribution centered on zero. The authors simply choose a game and simulate play as the two players learn through experience -- playing strategies from their repertoire of N possibilities more frequently if those strategies give good results.

With N = 50, the results show clearly that many games do not ever settle into any kind of stable behaviour. Rather, no equilibrium is ever found. The typical dynamics is reflected in the figure below, which shows the difference in payoffs to the two players (Alice's - Bob's) over time. Even though the two agents work hard to learn the optimal strategies, the complexity of the game prevents their success, and the game shows no signs whatsoever of settling down:

As the authors note, this kind of rich, complex, ongoing dynamics looks quite similar to what one sees in real systems such as financial markets (the time series above exhibits clustered volatility, as do market fluctuations). There are periods of relative calm punctuated by bouts of extreme volatility. Yet there's nothing intervening here -- no "shocks" to the system -- which would create these changes. It all comes from perfectly natural internal dynamics. And this is in a game with N = 50 strategies. It seems likely things will only grow more chaotic and less likely to settle down if N is larger than 50, as in the real world, or if the number of players grows beyond two.
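A toy version of this kind of simulation is easy to set up. The sketch below draws the payoff matrices from a normal distribution as in the paper, but uses a generic reinforcement-learning rule (softmax choice over decaying "attractions") -- a standard choice, not necessarily the exact learning rule Galla and Farmer use:

```python
import math
import random

random.seed(1)

# A random two-player game: each entry of the two N x N payoff matrices
# is drawn from a normal distribution centered on zero.
N = 50
A = [[random.gauss(0, 1) for _ in range(N)] for _ in range(N)]  # Alice
B = [[random.gauss(0, 1) for _ in range(N)] for _ in range(N)]  # Bob

def softmax_choice(attractions, beta=2.0):
    """Play strategies with probability increasing in their attraction."""
    weights = [math.exp(beta * q) for q in attractions]
    r, acc = random.random() * sum(weights), 0.0
    for i, w in enumerate(weights):
        acc += w
        if acc >= r:
            return i
    return len(weights) - 1

# Reinforcement learning: attractions decay (old payoffs fade from memory)
# and are nudged toward the payoff just received.
qa, qb = [0.0] * N, [0.0] * N
memory = 0.9
payoff_diff = []
for _ in range(2000):
    i, j = softmax_choice(qa), softmax_choice(qb)
    qa[i] = memory * qa[i] + (1 - memory) * A[i][j]
    qb[j] = memory * qb[j] + (1 - memory) * B[i][j]
    payoff_diff.append(A[i][j] - B[i][j])  # Alice's payoff minus Bob's

# If play never settles into an equilibrium, the payoff difference keeps
# fluctuating even late in the run instead of locking onto a fixed value.
late = payoff_diff[-500:]
print("still fluctuating:", max(late) - min(late) > 0.5)
```

With N this large, each player's best response keeps shifting as the other adapts, so the attractions chase a moving target -- the qualitative mechanism behind the endless dynamics described above.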

Hence, I see this as a rather profound demonstration of the likely irrelevance of equilibrium analyses coming from game theory to complex real world settings. Dynamics really matters and cannot be theorized out of existence, however hard economists may try. As the paper concludes:

Our results suggest that under many circumstances it is more useful to abandon the tools of classic game theory in favor of those of dynamical systems. It also suggests that many behaviors that have attracted considerable interest, such as clustered volatility in financial markets, may simply be specific examples of a highly generic phenomenon, and should be expected to occur in a wide variety of different situations.


This blog explores the potential for the transformation of economics and finance through the inspiration of physics and the other natural sciences. If traditional economics has emphasized self-regulation and market equilibrium, the new perspective emphasizes the myriad positive feedbacks that often drive markets away from equilibrium and cause tumultuous crashes and other crises. Read more about the idea.

Please see my new book FORECAST.

Who am I?

Physicist and science writer. I was formerly an editor with the international science journal Nature and also the magazine New Scientist. I am the author of three earlier books, and have written extensively for publications including Nature, Science, the New York Times, Wired and the Harvard Business Review. I currently write monthly columns for Nature Physics and for Bloomberg Views.