Thursday, December 22, 2011

What does the (tentatively discovered) Higgs boson have to do with finance? Nothing. At least nothing obvious. Still, it's a fascinating topic with grand sweeping themes. I've written an essay on it for Bloomberg View, which will appear later tonight or tomorrow. I'll add the link when it does.

The interesting thing to me about the Higgs boson, and the associated Higgs field (the boson is an elementary excitation of this field), is the intellectual or conceptual history of the idea. It seems crazy to think that as the universe cooled (very shortly after the big bang) a new field, the Higgs field, suddenly appeared, filling all space and giving particles mass in proportion to how strongly they interact with that field. It would be a crazy idea if it were just a proposal pulled out of thin air. But the history is that Higgs' work (and the work of many others at the same time, the early 1960s) drew very strong stimulation from the BCS theory of superconductivity in ordinary metals, which appeared in 1957.

That theory explained how superconductivity originates through the emergence, below a critical temperature, of a condensate of paired electrons (hence, bosons) which acts as an extremely sensitive electromagnetic medium. Try to impose a magnetic field inside a superconductor (by bringing a magnet close, for example) and this condensate or field will respond by stirring up currents which act precisely to cancel the field inside the superconductor. This is the essence of superconductivity -- its appearance changes physics inside the superconductor in such a way that electromagnetic fields cannot propagate. In quantum terms (from quantum electrodynamics), this is equivalent to saying that the photon -- the carrier of the electromagnetic field -- comes to have a mass. It does so because it interacts very strongly with the condensate.

This idea from superconductivity is pretty much identical to the Higgs mechanism for giving the W and Z particles (the carriers of the weak force) mass. This is what I think is fascinating. The Higgs prediction arose not so much from complex mathematics, but from the use of analogy and metaphor -- I wonder if the universe is in some ways like a superconductor? If we're living in a superconductor (not for ordinary electrical charge, but for a different kind of charge of the electroweak field), then it's easy to understand why the W and Z particles have big masses (more than 100 times the mass of the proton). They're just like photons traveling inside an ordinary superconductor -- inside an ordinary metal, lead or tin or aluminum, cooled down to low temperatures.

I think it's fitting that physics theory so celebrated for bewildering mathematics and abstraction beyond ordinary imagination actually has its roots in the understanding of grubby things like magnets and metals. That's where the essential ideas were born and found their initial value.

Having said that none of this has anything to do with finance, I should nevertheless mention a fascinating proposal from 2000 by Per Bak, Simon Nørrelykke and Martin Shubik, which draws a very close analogy between the process which determines the value of money and a Higgs-like mechanism. They made the observation that the absolute value of money is essentially undetermined:

The value of money represents a “continuous symmetry”. If, at some point, the value of money was globally redefined by a certain factor, this would have no consequences whatsoever. Thus, in order to arrive at a specific value of money, the continuous symmetry must be broken.

In other words, a loaf of bread could be worth $1, $10, or $100 -- it doesn't matter. But here and now in the real world it does have one specific value. The symmetry is broken.

This idea of continuous symmetry is something that arises frequently in physics. And it is indeed the breaking of a continuous symmetry that underlies the onset of superconductivity. The mathematics of field theory shows that, any time a continuous symmetry is broken (so that some variable comes to take on one specific value), there appears in the theory a new dynamical mode -- a so-called Goldstone mode -- corresponding to fluctuations along the direction of the continuous symmetry. This isn't quite the appearance of mass -- that takes another step in the mathematics -- but this Goldstone business is a part of the Higgs mechanism.
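For readers who like to see the mathematics, here is the standard textbook picture of how a Goldstone mode appears -- a sketch of generic continuous symmetry breaking, not of the Bak-Nørrelykke-Shubik model itself:

```latex
% "Mexican hat" potential for a complex field \phi:
V(\phi) = \lambda \left( |\phi|^2 - v^2 \right)^2 .
% Any ground state \phi = v\, e^{i\theta_0} breaks the continuous
% symmetry \phi \to e^{i\alpha}\phi by picking one particular phase.
% Expanding about it, \phi = (v + h)\, e^{i\theta}:
V = \lambda \left( 2 v h + h^2 \right)^2
  = 4 \lambda v^2 h^2 + \mathcal{O}(h^3) ,
% so the radial fluctuation h is massive, while the phase \theta --
% the fluctuation along the flat valley of the potential, i.e. along
% the direction of the broken symmetry -- costs no potential energy.
% That massless mode is the Goldstone mode.
```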

I'll try to return to this paper again. It offers a seemingly plausible dynamical model for how a value of money can emerge in an economy, and also why it should be subject to strong inherent fluctuations (because of the Goldstone mode). None of this comes out of equilibrium theory, nor should one expect it to, as money is an inherently dynamical thing -- we use it as a tool to manage activities through time, selling our services today to buy food next week, for example.

Tuesday, December 13, 2011

I wanted to respond to several insightful comments on my recent post on power laws in finance. And, after that, pose a question about the history of financial time series analysis in economics and finance that I hope someone out there might be able to help me with.

Why exactly is power-law distribution for asset returns inconsistent with EMH? It is trivial to write "standard" economic model where returns have fat tails, e.g. if we assume that stochastic process for dividends / firm profits has fat tails. That of course may not be very satisfactory explanation, but it still shows that EMH != normal distribution. In fact, Fama wrote about non-gaussian returns back in 1960's (and Mandelbrot before him), so the idea is not exactly new. The work you describe here is certainly useful and interesting, but pure patterns in data (or "stylized facts", as economists would call them) by themselves are not enough - we need some theory to make sense of them, and it would be interesting to hear more about contributions from econophysics in that area.

It's also worth pointing out that EMH, as I understand it, doesn't assume or dismiss that returns follow some specific distribution. Rather, EMH simply posits that prices reflect known information. For many years, analysts presumed that EMH implies a random distribution, but the empirical record says otherwise. But the random walk isn't a condition of EMH. Andrew Lo of MIT has discussed this point at length. The market may or may not be efficient, but it's not conditional on random price fluctuations. Separately, ivansmi makes a good point about models. You need a model to reject EMH. But that only brings you so far. Let's say we have a model of asset pricing that rejects EMH. Then the question is whether EMH or the model is wrong? That requires another model. In short, it's ultimately impossible to reject or accept EMH, unless of course you completely trust a given model. But that brings us back to square one. Welcome to economics.

I actually agree with these statements. Let me try to clarify. In my post I said, referring to the fat tails in returns and 1/t decay of volatility correlations, that "None of these patterns can be explained by anything in the standard economic theories of markets (the EMH etc)." The key word is of course "explained."

The EMH has so much flexibility and is so loosely linked to real data that it is indeed consistent with these observations, as Ivansml (Mark) and James rightly point out. I think it is probably consistent with any conceivable time series of prices. But "being consistent with" isn't a very strong claim, especially if the consistency comes from making further subsidiary assumptions about how these fat tails might come from fluctuations in fundamental values. This seems like a "just so" story (even if the idea that fluctuations in fundamental values could have fat tails is not at all preposterous).

The point I wanted to make is that nothing (that I know of) in traditional economics/finance (i.e. coming out of the EMH paradigm) gives a natural and convincing explanation of these statistical regularities. Such an explanation would start from simple, well-accepted facts about the behaviour of individuals, firms, market structures and so on, and then demonstrate how -- because of certain logical consequences following from these facts and their interactions -- we should actually expect to find just these kinds of power laws, with the same exponents, and in many different markets. Reading such an explanation, you would say "Oh, now I see where it comes from and how it works!"

To illustrate some possibilities, one class of proposed explanations sees large market movements as having inherently collective origins, i.e. as reflecting large avalanches of trading behaviour coming out of the interactions of market participants. Early models in this class include the famous Santa Fe Institute Stock Market model developed in the mid-1990s. This nice historical summary by Blake LeBaron explores the motivations of this early agent-based model, the first of which was to focus on the interactions among market participants, and so go beyond standard theories' usual simplifying assumption that interactions can be ignored. As LeBaron notes, this work began in part...

... from a desire to understand the impact of agent interactions and group learning dynamics in a financial setting. While agent-based markets have many goals, I see their first scientific use as a tool for understanding the dynamics in relatively traditional economic models. It is these models for which economists often invoke the heroic assumption of convergence to rational expectations equilibrium where agents’ beliefs and behavior have converged to a self-consistent world view. Obviously, this would be a nice place to get to, but the dynamics of this journey are rarely spelled out. Given that financial markets appear to thrive on diverse opinions and behavior, a first level test of rational expectations from a heterogeneous learning perspective was always needed.

I'm going to write posts soon looking at this kind of work in much more detail. This early model has been greatly extended and has had many diverse offspring; a more recent review by LeBaron gives an updated view. In many such models one finds the natural emergence of power-law distributions for returns, and also long-term correlations in volatility. These appear to be linked to various kinds of interactions between participants. Essentially, the market is an ecology of interacting trading strategies, and it has naturally rich dynamics as new strategies invade and old strategies, which had been successful, fall into disuse. The market never settles into an equilibrium, but has continuous ongoing fluctuations.

Now, these various models haven't yet explained anything, but they do pose potentially explanatory mechanisms, which need to be tested in detail. Just because these mechanisms CAN produce the right numbers doesn't mean this is really how it works in markets. Indeed, some physicists and economists working together have proposed a very different kind of explanation for the power law with exponent 3 for the (cumulative) distribution of returns which links it to the known power law distribution of the wealth of investors (and hence the size of the trades they can make). This model sees large movements as arising in the large actions of very wealthy market participants. However, this is more than merely attributing the effect to unknown fat tails in fundamentals, as would be the case with EMH based explanations. It starts with empirical observations of tail behaviour in several market quantities and argues that these together imply what we see for market returns.

There are more models and proposed explanations, and I hope to get into all this in some detail soon. But I hope this explains a little why I don't find the EMH based ideas very interesting. Being consistent with these statistical regularities is not as interesting as suggesting clear paths by which they arise.

Of course, I might make one other point too, and maybe this is, deep down, what I find most empty about the EMH paradigm. It essentially assumes away any dynamics in the market. Fundamentals get changed by external forces, and the theory supposes that this great complex mass of heterogeneous humanity which is the market responds instantaneously to find the new equilibrium which incorporates all information correctly. So, it treats the non-market part of the world -- the weather, politics, business, technology and so on -- as a rich thing with potentially complicated dynamics. Then it treats the market as a really simple dynamical thing which just gets driven in slave fashion by the outside. This to me seems perversely unnatural and impossible to take seriously. But it is indeed very difficult to rule out with hard data. The idea can always be contorted to remain consistent with observations.

In one of Taleb's books, didn't he make mention that something cannot be proven true, only disproven? I think it was the whole swan thing - if you have an appropriate sample and count 100% white swans, that does not prove there are ONLY white swans, while a sample that has a black one proves that there are not ONLY white swans.

Again, I agree completely. This is a basic point about science. We don't ever prove a theory, only disprove it. And the best science works by trying to find data to disprove a hypothesis, not by trying to prove it.

I assume David is referring to my discussion of the empirical cubic power law for market returns. This is indeed a tentative stylized fact which seems to hold with appreciable accuracy in many markets, but there may well be markets in which it doesn't hold (or periods in which the exponent changes). Finding such deviations would be very interesting as it might offer further clues as to the mechanism behind this phenomenon.

NOW, for the question I wanted to pose. I've been doing some research on the history of finance, and there's something I can't quite understand. Here's the problem:

1. Mandelbrot in the early 1960s showed that market returns had fat tails; he conjectured that they fit the so-called Stable Paretian (now called Stable Levy) distributions which have power law tails. These have the nice property (like the Gaussian) that the composition of the returns for longer intervals, built up from component Stable Paretian distributions, also has the same form. The market looks the same at different time scales.
2. However, Mandelbrot noted in that same paper a shortcoming of his proposal. You can't think of returns as being independent and identically distributed (i.i.d.) over different time intervals because the volatility clusters -- high volatility predicts more to follow, and vice versa. We don't just have an i.i.d. process.
3. Lots of people documented volatility clustering over the next few decades, and in the 1980s Robert Engle and others introduced ARCH/GARCH and all that -- simple time series models able to reproduce the realistic properties of financial time series, including volatility clustering.
4. But today I found several papers from the 1990s (and later) still discussing the Stable Paretian distribution as a plausible model for financial time series.
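A quick numerical illustration of the stability property in point 1. This is a sketch using the Cauchy distribution -- the Lévy stable law with tail exponent 1, which happens to have a simple closed form -- rather than the general Stable Paretian family; the sample size and seed are arbitrary:

```python
import math
import random
import statistics

random.seed(42)

def cauchy():
    """Standard Cauchy draw (the alpha = 1 Levy-stable law), by
    inverting its CDF: x = tan(pi * (u - 1/2))."""
    return math.tan(math.pi * (random.random() - 0.5))

n = 200_000
one_period = [cauchy() for _ in range(n)]

# "Two-period returns": sum two independent Cauchy draws and rescale.
# Stability means (X1 + X2) / 2 is again exactly standard Cauchy, so
# the distribution looks the same at the longer time scale.
two_period = [(cauchy() + cauchy()) / 2 for _ in range(n)]

def iqr(xs):
    """Interquartile range; the standard Cauchy has quartiles at
    -1 and +1, hence IQR = 2."""
    q1, _, q3 = statistics.quantiles(xs, n=4)
    return q3 - q1

print(iqr(one_period), iqr(two_period))  # both close to 2.0
```

Note the rescaling factor: for the Cauchy the sum of two draws is divided by 2, whereas for the Gaussian it would be divided by sqrt(2) -- that difference in scaling exponent is exactly what Mandelbrot was pointing at.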

My question is simply -- why was anyone even 20 years ago still writing about the Stable Paretian distribution when the reality of volatility clustering was so well known? My understanding is that this distribution was proposed as a way to save the i.i.d. property (by showing that such a process can still create market fluctuations having similar character on all time scales). But volatility clustering is enough on its own to rule out any i.i.d. process.

Of course, the Stable Paretian business has by now been completely ruled out by empirical work establishing the value of the exponent for returns, which is too large to be consistent with such distributions. I just can't see why it wasn't relegated to the history books long before.

The only possibility, it just dawns on me, is that people may have thought that some minor variation of the original Mandelbrot view might work. That is, let the distribution over any interval be Stable Paretian, but let the parameters vary a little from one moment to the next. You give up the i.i.d. property but might still get some kind of nice stability as short intervals get put together into longer ones. You could put Mandelbrot's distribution into ARCH/GARCH rather than the Gaussian. But this is only a guess. Does anyone know?
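The flavour of that guess -- a simple recursion with heavy-tailed rather than Gaussian innovations -- is easy to sketch. Below, a GARCH(1,1) process driven by Student-t noise (a stand-in for a heavier-tailed law; the parameter values are illustrative, not fitted to any market), which produces volatility clustering even though each innovation is drawn independently:

```python
import math
import random

random.seed(7)

def student_t(df):
    """Student-t draw (normal over sqrt(chi-squared / df)),
    rescaled to unit variance (requires df > 2)."""
    z = random.gauss(0.0, 1.0)
    chi2 = random.gammavariate(df / 2.0, 2.0)
    return z / math.sqrt(chi2 / df) * math.sqrt((df - 2) / df)

# GARCH(1,1): var_t = omega + alpha * r_{t-1}**2 + beta * var_{t-1}
omega, alpha, beta = 0.05, 0.10, 0.85   # illustrative, persistent volatility
n = 100_000
returns, var = [], 1.0
for _ in range(n):
    r = math.sqrt(var) * student_t(6)   # heavy-tailed innovation
    returns.append(r)
    var = omega + alpha * r * r + beta * var

def acf1(xs):
    """Lag-1 autocorrelation."""
    m = sum(xs) / len(xs)
    num = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(len(xs) - 1))
    den = sum((x - m) ** 2 for x in xs)
    return num / den

# Raw returns are nearly uncorrelated, but their magnitudes are not:
# high volatility today predicts high volatility tomorrow.
print(acf1(returns), acf1([abs(r) for r in returns]))
```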

Friday, December 9, 2011

The following is a script of "Prosecuting Wall Street" (CBS) which aired on Dec. 4, 2011. Steve Kroft is correspondent, James Jacoby, producer.

It's been three years since the financial crisis crippled the American economy, and much to the consternation of the general public and the demonstrators on Wall Street, there has not been a single prosecution of a high-ranking Wall Street executive or major financial firm even though fraud and financial misrepresentations played a significant role in the meltdown. We wanted to know why, so nine months ago we began looking for cases that might have prosecutorial merit. Tonight you'll hear about two of them. We begin with a woman named Eileen Foster, a senior executive at Countrywide Financial, one of the epicenters of the crisis.

Steve Kroft: Do you believe that there are people at Countrywide who belong behind bars?

Eileen Foster: Yes.

Kroft: Do you want to give me their names?

Foster: No.

Kroft: Would you give their names to a grand jury if you were asked?

Foster: Yes.

But Eileen Foster has never been asked - and never spoken to the Justice Department - even though she was Countrywide's executive vice president in charge of fraud investigations...

Tuesday, December 6, 2011

My latest column in Bloomberg looks very briefly at some of the basic mathematical patterns we know about in finance. Science has a long tradition of putting data and observation first. Look very carefully at what needs to be explained -- mathematical patterns that show up consistently in the data -- and then try to build simple models able to reproduce those patterns in a natural way.

This path has great promise in economics and finance, although it hasn't been pursued very far until recently. My Bloomberg column gives a sketch of what is going on, but I'd like to give a few more details here, along with some links.

The patterns we find in finance are statistical regularities -- broad statistical patterns which show up in all markets studied, with an impressive similarity across markets in different countries and for markets in different instruments. The first regularity is the distribution of returns over various time intervals, which has been found generically to have broad power law tails -- "fat tails" -- implying that large fluctuations up or down are much more likely than they would be if markets fluctuated in keeping with normal Gaussian statistics. Anyone who read The Black Swan knows this.
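To get a feel for what "fat tails" means in practice, here is a back-of-envelope comparison. The cumulative power-law exponent of 3 used below anticipates the empirical studies discussed later in this post; this is a sketch, not a calibrated model:

```python
import math

def gauss_tail(x):
    """P(Z > x) for a standard normal."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def power_tail(x, alpha=3.0):
    """Cumulative power-law tail P(X > x) = x**-alpha
    (in units where the tail is normalized at x = 1)."""
    return x ** -alpha

# How much rarer is a 10-sigma day than a 5-sigma day?
gauss_ratio = gauss_tail(10) / gauss_tail(5)
power_ratio = power_tail(10) / power_tail(5)

print(gauss_ratio)   # astronomically small, around 1e-17
print(power_ratio)   # just 1/8
```

Under Gaussian statistics, doubling the size of an extreme move makes it essentially impossible; under a cubic power law it merely makes it eight times rarer. That gap is why large crashes are so badly underestimated by normal statistics.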

This pattern has been established in a number of studies over the past 15 years or so, mostly by physicist Eugene Stanley of Boston University and colleagues. This paper from 1999 is perhaps the most notable, as it used enormous volumes of historical data to establish the fat tailed pattern for returns over times ranging from one minute up to about 4 days. One of the most powerful things about this approach is that it doesn't begin with any far reaching assumptions about human behaviour, the structure of financial markets or anything else, but only asks -- are there patterns in the data? As the authors note:

The most challenging difficulty in the study of a financial market is that the nature of the interactions between the different elements comprising the system is unknown, as is the way in which external factors affect it. Therefore, as a starting point, one may resort to empirical studies to help uncover the regularities or “empirical laws” that may govern financial markets.

This strategy seems promising to physicists because it has worked in building theories of complex physical systems -- liquids, gases, magnets, superconductors -- for which it is also often impossible to know anything in great detail about the interactions between the molecules and atoms within. This hasn't prevented the development of powerful theories because, as it turns out, many of the precise details at the microscopic level DO NOT influence the large scale collective properties of the system. This has inspired physicists to think that the same may be true in financial markets -- at least some of the collective behaviour we see in markets, their macroscopic behaviour, may be quite insensitive to details about human decision making, market structure and so on.

The authors of this 1999 study summarized their findings as follows:

Several points of clarification. First, the result for the power law with exponent close to 3 is a result for the cumulative distribution -- that is, the probability that a return will be greater than a certain value (not just equal to that value). Second, the fact that this value lies outside of the range [0,2] means that the process generating these fluctuations isn't a simple stationary random process with independent and identically distributed returns in each time period. This was the idea initially proposed by Benoit Mandelbrot on the basis of the so-called Levy Stable distributions. This study and others have established that this idea can't work -- something more complicated is going on.
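For the curious, a standard way to estimate such a cumulative tail exponent is the Hill estimator. A minimal sketch, run on synthetic Pareto data with true exponent 3 rather than on real returns:

```python
import math
import random

random.seed(1)

# Synthetic data with an exact cumulative power-law tail:
# random.paretovariate(3) has P(X > x) = x**-3 for x >= 1.
n = 100_000
data = [random.paretovariate(3.0) for _ in range(n)]

def hill(xs, k):
    """Hill estimator of the tail exponent from the k largest values:
    alpha_hat = k / sum(log(X_(i) / X_(k+1)))."""
    tail = sorted(xs)[-k - 1:]   # k largest values plus the threshold
    x_k = tail[0]                # threshold order statistic X_(k+1)
    logs = [math.log(x / x_k) for x in tail[1:]]
    return k / sum(logs)

print(hill(data, 5_000))  # close to 3
```

On real data the hard part is choosing k: too few points and the estimate is noisy, too many and it is biased by the non-power-law bulk of the distribution.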

That complication is also referred to in the second paragraph above. If you take the data on returns at the one-minute level, and randomize the order in which it appears, then you still get the same power law tails in the distribution of returns over one minute -- it's the same data. But this new time series has different returns over longer times, generated by combining sequences of the one-minute returns. The distribution over longer and longer times turns out to converge slowly to a Gaussian for the randomized data, meaning that the true fat-tailed distribution over longer times has its origin in rich and complex correlations between market movements at different times (which get wiped out by the randomization). Again, we're not just dealing with a fixed probability distribution and independent changes over different intervals.
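The randomization experiment described above can be mimicked on synthetic data. Here is a sketch, with a GARCH-type series standing in for the one-minute returns (all parameters are invented for illustration):

```python
import math
import random

random.seed(3)

# Synthetic "one-minute returns" with volatility clustering
# (a GARCH(1,1) recursion stands in for real market data).
n, m = 100_000, 20          # m one-minute returns per aggregated return
rets, var = [], 1.0
for _ in range(n):
    r = math.sqrt(var) * random.gauss(0.0, 1.0)
    rets.append(r)
    var = 0.05 + 0.10 * r * r + 0.85 * var

shuffled = rets[:]
random.shuffle(shuffled)    # same numbers, time structure destroyed

def excess_kurtosis(xs):
    """Excess kurtosis: 0 for a Gaussian, positive for fat tails."""
    mu = sum(xs) / len(xs)
    s2 = sum((x - mu) ** 2 for x in xs) / len(xs)
    s4 = sum((x - mu) ** 4 for x in xs) / len(xs)
    return s4 / (s2 * s2) - 3.0

def aggregate(xs, m):
    """Sum consecutive blocks of m returns into longer-period returns."""
    return [sum(xs[i:i + m]) for i in range(0, len(xs) - m + 1, m)]

# One-minute distributions are identical (same multiset of numbers),
# but aggregating the shuffled series tends toward a Gaussian faster,
# because the volatility correlations have been destroyed.
print(excess_kurtosis(aggregate(rets, m)),
      excess_kurtosis(aggregate(shuffled, m)))
```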

To read more about this, see this nice review by Xavier Gabaix of MIT. It covers this and many other power laws in finance and economics.

Now, the story gets even more interesting if you look past the mere distribution of returns and study the correlations between market movements at different times. Market movements are, of course, extremely hard to predict. But it is very interesting where the unpredictability comes in.

The so-called autocorrelation of the time series of market returns decays to zero after a few minutes. This is essentially a measure of how much the return now can be used to predict a return in the future. After a few minutes, there's nothing. This is the sense in which the markets are unpredictable. However, there are levels of predictability. It was discovered in the early 1990s, and has been confirmed many times since in different markets, that the time series of volatility -- the absolute value of the market return -- has long-term correlations, a kind of long-term memory. Technically, the autocorrelation of this time series only decays to zero very slowly.

The following figure (from a representative paper, again from the Boston University group) shows the autocorrelation of the return time series g(t) and also of the volatility, the absolute value of g(t):

Clearly, whereas the first signal shows no correlations after about 10 minutes, the second shows correlations and predictability persisting out to times as long as 10,000 minutes, which is on the order of 10 days or so.

So, it's the directionality of price movements which has very little predictability, whereas the magnitude of changes follows a process with much more interesting structure. It is in the record of this volatility that one sees potentially deep links to other physical processes, including earthquakes. A particularly interesting paper is this one, again by the Boston group, quantifying several ways in which market volatility obeys quantitative laws known from earthquake science, especially the Omori law describing how the probability of aftershocks decays following a main earthquake. This probability decays quite simply in proportion to 1/time since the main quake, meaning that aftershocks are most likely immediately afterward and become progressively less likely with time. Episodes of high market volatility appear to follow similar behaviour quite closely.
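The 1/time decay is simple enough to put in numbers. A sketch with arbitrary illustrative parameters (K, c and p below are not fitted to any data):

```python
import math

def omori_rate(t, K=100.0, c=0.5, p=1.0):
    """Omori law: aftershock rate K / (t + c)**p at time t after the
    main shock. K, c, p are arbitrary illustrative values; p close
    to 1 is the empirically typical case."""
    return K / (t + c) ** p

# For p = 1 the expected number of aftershocks between times 0 and T
# has the closed form K * ln((T + c) / c); check it against a direct
# numerical integration of the rate.
K, c, T, dt = 100.0, 0.5, 100.0, 0.001
numeric = sum(omori_rate(i * dt, K, c, 1.0) * dt for i in range(int(T / dt)))
closed = K * math.log((T + c) / c)
print(numeric, closed)  # the two agree closely
```

The logarithmic growth of the cumulative count is the signature: most aftershocks (or, in the market analogy, most episodes of elevated volatility) come soon after the main shock, with a long slowly-fading tail.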

Perhaps even better is another study, which looks at the link to earthquakes with a somewhat tighter focus. The abstract captures the content quite well:

We analyze the memory in volatility by studying volatility return intervals, defined as the time between two consecutive fluctuations larger than a given threshold, in time periods following stock market crashes. Such an aftercrash period is characterized by the Omori law, which describes the decay in the rate of aftershocks of a given size with time t by a power law with exponent close to 1. A shock followed by such a power law decay in the rate is here called Omori process. We find self-similar features in the volatility. Specifically, within the aftercrash period there are smaller shocks that themselves constitute Omori processes on smaller scales, similar to the Omori process after the large crash. We call these smaller shocks subcrashes, which are followed by their own aftershocks. We also show that the Omori law holds not only after significant market crashes as shown by Lillo and Mantegna [Phys. Rev. E 68, 016119 2003], but also after “intermediate shocks.” ...

These are only a few of the power law type regularities now known to hold for most markets, with only very minor differences between markets. An important effort is to find ways to explain these regularities in simple and plausible market models. None of these patterns can be explained by anything in the standard economic theories of markets (the EMH etc). They can of course be reproduced by suitably generating time series using various methods, but that hardly counts as explanation -- that's just using time series generators to reproduce certain kinds of data.

The promise of finding these kinds of patterns is that they may strongly constrain the types of theories to be considered for markets, by ruling out all those which do not naturally give rise to this kind of statistical behaviour. This is where data matters most in science -- by proving that certain ideas, no matter how plausible they seem, don't work. This data has already stimulated the development of a number of different avenues for building market theories which can explain the basic statistics of markets, and in so doing go well beyond the achievements of traditional economics.

Friday, December 2, 2011

Dave Cliff of the University of Bristol is someone whose work I've been meaning to look at much more closely for a long time. Essentially he's an artificial intelligence expert, but he has devoted some of his work to developing trading algorithms. He suggests that many of these algorithms, even ones working on extremely simple rules, consistently outperform human beings, which rather undermines the common economic view that people are highly sophisticated rational agents.

I just noticed that Moneyscience is beginning a several-part interview with Cliff, the first part having just appeared. I'm looking forward to the rest. Some highlights from Part I, beginning with Cliff's early work, in the mid-1990s, on writing algorithms for trading:

I wrote this piece of software called ZIP, Zero Intelligence Plus. The intention was for it to be as minimal as possible, so it is a ridiculously simple algorithm, almost embarrassingly so. It’s essentially some nested if-then rules, the kind of thing that you might type into an Excel spreadsheet macro. And this set of decisions determines whether the trader should increase or decrease a margin. For each unit it trades, it has some notion of the price below which it shouldn’t sell or above which it shouldn’t buy, and that is its limit price. However, the price that it actually quotes into the market as a bid or an offer is different from the limit price because obviously, if you’ve been told you can buy something and spend no more than ten quid, you want to start low and you might be bidding just one or two pounds. Then gradually, you’ll approach towards the ten quid point in order to get the deal, so with each quote you’re reducing the margin on the trade. The key innovation I introduced in my ZIP algorithm was that it learned from its experience. So if it made a mistake, it would recognize that mistake and be better the next time it was in the same situation.

HFTR: When was this exactly?

DC: I did the research in 1996 and HP published the results, and the ZIP program code, in 1997. I then went on to do some other things, like DJ-ing and producing algorithmic dance music (but that’s another story!)

Fast-forward to 2001, when I started to get a bunch of calls because a team at IBM’s Research Labs in the US had just completed the first ever systematic experimental tests of human traders competing against automated, adaptive trading systems. Although IBM had developed their own algorithm called MGD, (Modified Gjerstad Dickhaut), it did the same kind of thing as my ZIP algorithm, using different methods. They had tested out both their MGD and my ZIP against human traders under rigorous experimental conditions and found that both algorithms consistently beat humans, regardless of whether the humans or robots were buyers or sellers. The robots always out-performed the humans.

IBM published their findings at the 2001 IJCAI conference (the International Joint Conference on AI) and although IBM are a pretty conservative company, in the opening paragraphs of this paper they said that this was a result that could have financial implications measured in billions of dollars. I think that implicitly what they were saying was there will always be financial markets and there will always be the institutions (i.e. hedge funds, pension management funds, banks, etc). But the traders that do the business on behalf of those institutions would cease to be human at some point in the future and start to be machines.

Personally, I think there are two important things here. One is that, yes, trading will probably soon become almost all algorithmic. This may tend to make you think the markets will become more mechanical, their collective behaviour emerging out of the very simple actions of so many crude programs.

But the second thing is what this tells us about people -- that traders and investors and people in general aren't so clever or rational, and most of them have probably been following fairly simple rules all along, rules that machines can easily beat. So there's really no reason to think the markets should become more mechanical as they become more algorithmic. They've probably been quite mechanical all along, and algorithmic too -- it's just that non-rational zero intelligence automatons running the algorithms were called people.
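For fun, the structure Cliff describes -- nested if-then rules plus an adaptive profit margin -- can be caricatured in a few lines. To be clear, this is not the published ZIP update rule; the class, names and parameters below are all invented for illustration:

```python
class ToySeller:
    """A drastically simplified ZIP-style seller (illustrative only).

    It owns a limit price below which it must not sell, quotes
    limit * (1 + margin), and nudges the margin up or down in
    response to trades it observes in the market.
    """

    def __init__(self, limit, margin=0.5, rate=0.1):
        self.limit = limit      # never sell below this price
        self.margin = margin    # current profit margin
        self.rate = rate        # learning rate for margin updates

    def quote(self):
        return self.limit * (1.0 + self.margin)

    def observe(self, trade_price):
        # Nested if-then rules: if the market trades below our quote,
        # we are overpriced, so shave the margin; if it trades above,
        # we left money on the table, so raise it.
        if trade_price < self.quote():
            self.margin = max(0.0, self.margin - self.rate * self.margin)
        elif trade_price > self.quote():
            self.margin += self.rate * self.margin

seller = ToySeller(limit=10.0)
# The market keeps trading at 11; our initial quote of 15 is too
# greedy, so the margin decays toward what the market will pay.
for _ in range(50):
    seller.observe(11.0)
print(seller.quote())  # close to 11, and never below the limit of 10
```

Even this caricature shows the point: nothing in the loop requires intelligence, just feedback, yet the quote converges to the going price.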

Wednesday, November 30, 2011

I wrote a post a while back exploring some of the silliest things economists were saying before the crisis about how financial engineering was making our economy more robust, stable, efficient, wonderful, beautiful, intelligent, self-regulating, and so on. The markets were, R. Glenn Hubbard and William Dudley were convinced, even leading to better governance by punishing bad governmental decisions. [How that could be the case when markets have a relentless focus on the very short term is hard to fathom, but they did indeed assert this.]

Paul Krugman has recently undertaken a similar exercise in silliness mining -- in this case going through the hallucinations of Alan Greenspan. The Chairman of the Fed was evidently drinking the very same Kool-Aid:

Deregulation and the newer information technologies have joined, in the United States and elsewhere, to advance flexibility in the financial sector. Financial stability may turn out to have been the most important contributor to the evident significant gains in economic stability over the past two decades.

Historically, banks have been at the forefront of financial intermediation, in part because their ability to leverage offers an efficient source of funding. But in periods of severe financial stress, such leverage too often brought down banking institutions and, in some cases, precipitated financial crises that led to recession or worse. But recent regulatory reform, coupled with innovative technologies, has stimulated the development of financial products, such as asset-backed securities, collateral loan obligations, and credit default swaps, that facilitate the dispersion of risk.

Conceptual advances in pricing options and other complex financial products, along with improvements in computer and telecommunications technologies, have significantly lowered the costs of, and expanded the opportunities for, hedging risks that were not readily deflected in earlier decades. The new instruments of risk dispersal have enabled the largest and most sophisticated banks, in their credit-granting role, to divest themselves of much credit risk by passing it to institutions with far less leverage. Insurance companies, especially those in reinsurance, pension funds, and hedge funds continue to be willing, at a price, to supply credit protection.

These increasingly complex financial instruments have contributed to the development of a far more flexible, efficient, and hence resilient financial system than the one that existed just a quarter-century ago.

As Krugman notes, this can all be translated into ordinary language: "Thanks to securitization, CDOs, and AIG, nothing bad can happen!"

I had wondered about this idea a couple years ago -- but that's all I did, wondered about it. The idea is that when banks need bailing out -- and sadly, we seem stuck with that problem for the moment -- we shouldn't bail them out directly, but indirectly. For example, just give every single person in the US $1,000. Or maybe a voucher for $1,000 that they have to spend somewhere, or put in a bank. This quickly amounts to $300 billion infused into the economy, a large portion of which would end up in banks. So cash would be pumped into the banks too, but only through people first.

You can imagine all kinds of ways to play around with such a scheme. Paying off some of people's mortgages, for example. The amount injected could be much larger. Perhaps similar funds would be injected directly into banks and other businesses as well. Mark Thoma has thought through some of the details. But I'm quite surprised this is the first I've heard of any idea even remotely like this. It seems a much better idea than just giving money to the bankers who created the problem in the first place. Why don't we hear more about such possibilities?

The endgame playing out in Europe is a tragedy in the usual sense, but also in the sense of Greek tragedy -- downfall brought about ironically through the very efforts, perhaps even well intentioned, of those ultimately afflicted. It's terrible to see Europe looming toward disaster, but also utterly fascinating that everyone involved -- Greeks, Germans, French, the European Central Bank -- has acted in what they thought was their own interest, yet those very actions have led the collective to a likely outcome much worse for all. A tragedy of the commons.

Philosopher Simon Critchley has written a brilliant essay exploring this theme more generally. Among the most poetic analyses of the situation I have seen:

The euro was the very project that was meant to unify Europe and turn a rough amalgam of states in a free market arrangement into a genuine social, cultural and economic unity. But it has ended up disunifying the region and creating perverse effects, such as the spectacular rise of the populist right in countries like the Netherlands, for just about every member state, even dear old Finland.

What makes this a tragedy is that we knew some of this all along — economic seers of various stripes had so prophesied — and still we conspired with it out of arrogance, dogma and complacency. European leaders — technocrats whom Paul Krugman dubbed this week “boring cruel romantics” — ignored warnings that the euro was a politically motivated project that would simply not work given the diversity of economies that the system was meant to cover. The seers, indeed, said it would fail; politicians across Europe ignored the warnings because it didn’t fit their version of the fantasy of Europe as a counterweight to United States’ hegemony. Bad deals were made, some lies were told, the peoples of the various member countries were bludgeoned into compliance often without being consulted, and now the proverbial chickens are coming home to roost.

But we heard nothing and saw nothing, for shame. The tragic truth that we see unspooling in the desperate attempts to shore up the European Union while accepting no responsibility for the unfolding disaster is something that we both willed and that threatens to now destroy the union in its present form.

The euro is a vast boomerang that is busy knocking over millions of people. European leaders, in their blindness, continue to act as if that were not the case.

Monday, November 28, 2011

Three interesting articles on what now seems to be considered an increasingly likely event -- the end of the Euro (in its current form, although some version might arise from the ashes).

First, Gavyn Davies speculates on several possible scenarios for the collapse of the Euro. It might persist as the new currency of a smaller union including Germany and The Netherlands (in which case the value of the Euro would rise significantly), or it might persist as the new currency of the periphery countries after Germany bolts (in which case the value of the Euro would fall significantly). Or the Europeans might finally find a way through the ongoing nightmare. Not betting on that one.

Second, Satyajit Das goes into a little more detail, and I think rightly sees some cultural issues as ultimately being most important. The three logical possibilities are easy to list:

The latest plan has bought time, though far less than generally assumed. The European debt endgame remains the same: fiscal union (greater integration of finances where Germany and the stronger economies subsidise the weaker economies); debt monetisation (the ECB prints money); or sovereign defaults.

Germany may be largely in favour of solution number 1. But the smaller periphery countries, and perhaps France as well, will favour solution number 2. Hence, we may by default find Europe hurtling inexorably into "solution" number 3 -- sovereign defaults:

The accepted view is that, in the final analysis, Germany will embrace fiscal integration or allow printing money. This assumes that a cost-benefit analysis indicate that this would be less costly than a disorderly break-up of the Euro-zone and an integrated European monetary system. This ignores a deep-seated German mistrust of modern finance as well as a strong belief in a hard currency and stable money. Based on their own history, Germans believe that this is essential to economic and social stability. It would be unsurprising to see Germany refuse the type of monetary accommodation and open-ended commitment necessary to resolve the crisis by either fiscal union or debt monetisation.

Unless restructuring of the Euro, fiscal union or debt monetisation can be considered, sovereign defaults may be the only option available.

Perhaps it betrays a little bit of anarchy in my own soul, but I'm rooting quite hard for sovereign defaults. I wish the Greeks had gone ahead with their referendum. For all the complaining about the slack morals of the Greek taxpayer, every debt-creating transaction has two sides -- and the creditors (French and German banks) bear as much responsibility as the debtors.

Then again, the end is likely to bring some severe social misery, not to mention riots (the UK is already advising its European embassies on the likelihood). A third article by Simon Johnson and Peter Boone points ominously in this direction, essentially echoing Davies' analysis in bleaker language:

The path of the euro zone is becoming clear. As conditions in Europe worsen, there will be fewer euro-denominated assets that investors can safely buy. Bank runs and large-scale capital flight out of Europe are likely.

Devaluation can help growth but the associated inflation hurts many people and the debt restructurings, if not handled properly, could be immensely disruptive. Some nations will need to leave the euro zone. There is no painless solution.

Ultimately, an integrated currency area may remain in Europe, albeit with fewer countries and more fiscal centralization. The Germans will force the weaker countries out of the euro area or, more likely, Germany and some others will leave the euro to form their own currency. The euro zone could be expanded again later, but only after much deeper political, economic and fiscal integration.

Tragedy awaits. European politicians are likely to stall until markets force a chaotic end upon them. Let’s hope they are planning quietly to keep disorder from turning into chaos.

Friday, November 18, 2011

I've had no time to post recently for several reasons, mostly the urgent need to work on a book closely related to this blog. The deadline is getting closer. I hope to resume something like my previous posting frequency soon.

But I would like to point everyone to a fascinating recent analysis of economists' views on the scientific method (that seems the best term for it, at least). Ole Rogeberg, a reader of this blog, alerted me to some work he and Hans Melberg have done surveying economists to see how much they look to actual empirical tests of a theory's predictions in judging that theory's value. The answer, it turns out, is -- not much. Internal consistency seems to be more important than empirical test.

This holds even for the theory of "rational addiction," which seeks to explain heroin addiction and other life-destroying addictions as the consequence of fully rational choices made by individuals maximizing their expected utility over their lifetimes -- a theory which on its face seems highly unlikely, making the burden of empirical evidence (one would think) even higher. Some history. Gary Becker (Nobel Prize) of the University of Chicago is famous for his efforts to push the neo-classical framework into every last corner of human life. He (and many followers) have applied the trusted old recipe of utility maximization to understand (they claim) everything from crime to patterns of having children to addiction. You may see a slobbering, shivering drunk or junkie in an alleyway in winter and think -- like most people -- there goes someone trapped in some very destructive behavioural feedback controlled by the interaction of addictive physical substances, emotions and so on. Not Becker. It's all quite rational, he argues.

Now, Rogeberg and Melberg. Here's their abstract:

This paper reports on results from a survey of views on the theory of rational addiction among academics who have contributed to this research. The topic is important because if the literature is viewed by its participants as an intellectual game, then policy makers should be aware of this so as not to derive actual policy from misleading models. A majority of the respondents believe the literature is a success story that demonstrates the power of economic reasoning. At the same time, they also believe the empirical evidence to be weak, and they disagree both on the type of evidence that would validate the theory and the policy implications. These results shed light on how many economists think about model building, evidence requirements and the policy relevance of their work.

Now, in any area of science there are disagreements over what evidence really counts as important. I've certainly learned this from following 20 years of research on high-temperature superconductivity, where every new paper with "knock-down" evidence for some claim tends to be immediately countered by someone else claiming this evidence actually shows something quite different. The materials are complex, as is the physics, and so far it just doesn't seem possible to bring clarity to the subject.

But in high-Tc research, theorists are under no illusion that they understand. They readily admit that they have no good theory. The same attitude doesn't seem to have been common in economics. Rogeberg and Melberg have also described their survey work, in a less technical style, in this clearly written paper.

A few more choice excerpts from their (full) paper below:

The core of the causal insight claims from rational addiction research is that people behave in a certain way (i.e. exhibit addictive behavior) because they face and solve a specific type of choice problem. Yet rational addiction researchers show no interest in empirically examining the actual choice problem – the preferences, beliefs, and choice processes – of the people whose behavior they claim to be explaining. Becker has even suggested that the rational choice process occurs at some subconscious level that the acting subject is unaware of, making human introspection irrelevant and leaving us no known way to gather relevant data...

The claim of causal insight, then, involves the claim that a choice problem people neither face nor would be able to solve prescribes an optimal consumption plan no one is aware of having. The gradual implementation of this unknown plan is then claimed to be the actual explanation for why people over time smoke more than they should according to the plans they actually thought they had. To quote Bertrand Russell out of context, this ‘is one of those views which are so absurd that only very learned men could possibly adopt them’ (Russell 1995, p. 110).

On the nature of reasoning in rational addiction models (this is Nobel Prize winning stuff, by the way):

[The addict]... looks strange because he sits down at (the first) period, surveys future income, production technologies, investment/addiction functions and consumption preferences over his lifetime to period T, maximizes the discounted value of his expected utility and decides to be an alcoholic. That’s the way he will get the greatest satisfaction out of life. (Winston 1980, p. 302)

Monday, November 7, 2011

The following is a response I just posted on Bloomberg to some criticism last week coming from the International Swaps and Derivatives Association. They took issue with some things I had written in my latest Bloomberg column. I think their comment was partially fair, and also partially misleading, so I thought some clarification would be useful. The text below is identical to what appears (or will very shortly) in Bloomberg:

*********************************

My most recent Bloomberg column on the network of credit default swaps contracts provoked a comment from the International Swaps and Derivatives Association, Inc. The group objected to my characterization of the network of outstanding CDS contracts as "hidden" and potentially a source of trouble. I'd like to address their concerns, and also raise some questions.

Contrary to the association's claim, I am aware of the existence of the Depository Trust & Clearing Corporation. I'll admit to having underestimated how much their project to create a warehouse of information on CDS contracts has developed in the past few years; my statement that these contracts are not "recorded by any central repository" was too strong, as a partial repository does exist, and the DTCC deserves great credit for creating it.

However, it is not clear that this repository gives such a complete picture of outstanding CDS linkages that we can all relax.

For example, DTCC's repository covers 98 percent of all outstanding CDS contracts, not 100 percent. Asking why may or may not be a quibble. After all, a map showing 98 percent of the largest 300 cities in the U.S. could leave out New York, Los Angeles, Chicago, Houston and Philadelphia. Moreover, the simple number of contracts tells us nothing about the values listed on those contracts. In principle, the missing 2 percent of contracts could represent a significant fraction of the outstanding value of CDS contracts.

More importantly, when thinking about potentially cascading risks in a complex network, fine details of the network topology -- its architecture or wiring diagram -- matter a lot. Indeed, the CDS contracts that put American International Group Inc. in grave danger in 2008 represented a tiny fraction -- much less than 1 percent -- of the total number of outstanding CDS contracts.

Hence, it would be interesting to know why the repository holds only 98 percent rather than 100 percent. There may be a very simple and reassuring answer, but it's not readily apparent from DTCC's description of the repository.

Also, there is another issue which makes "fully transparent" not quite the right phrase for this network of contracts, even if we suppose the 98 percent leaves out nothing of importance.

The DTCC commendably makes its data available to regulators. Still, it appears that the full network of interdependencies created by CDS contracts may remain opaque to regulators, because DTCC, according to its own description, enables...

"... each regulator to access reports tailored to their specific entitlements as a market regulator, prudential or primary supervisor, or central bank. These detailed reports are created for each regulator to show only the CDS data relevant to its jurisdiction, regulated entities or currency, at the appropriate level of aggregation."

This would imply, for example, that regulators in the U.S. can look and see which of their banks have sold CDS on, say, a big German bank. But the health of the U.S. banks then depends directly on the health of that German bank, which may in turn have sold CDS on Greek or Italian debt or any number of other things. The DTCC data on the latter CDS contracts would, apparently, not be available to U.S. regulators, being out of their jurisdiction.

The point is that a financial institution is at risk not only from contracts it has entered into, but also from contracts that its many counterparties have entered into (this is the whole idea of systemic risk linked to the possibility of contagion). Credible tests of the financial network's resilience require a truly global analysis of the potential pathways along which distress (particularly from outright counterparty failures) may spread. It's not clear that any regulator has the full data on which such an analysis can be based.
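The jurisdictional blind spot is easy to illustrate. Below is a toy sketch (all bank names and exposure links are hypothetical, invented purely for illustration) in which a simple graph traversal finds the full chain of counterparty dependencies, while the same traversal restricted to a jurisdiction-filtered view of the network stops one hop out:

```python
def reachable(graph, start):
    """All institutions on which `start` ultimately depends (simple DFS)."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# Hypothetical exposures: an edge a -> b means a's health depends on b's
# (a has sold CDS referencing b, or holds b's debt). Invented for illustration.
full_network = {
    "US-Bank": ["DE-Bank"],
    "DE-Bank": ["GR-Sovereign", "IT-Sovereign"],
}

# What a US regulator can see: only edges involving a US-regulated entity.
us_view = {k: v for k, v in full_network.items() if k.startswith("US")}

print(reachable(full_network, "US-Bank"))  # includes both sovereigns
print(reachable(us_view, "US-Bank"))       # stops at DE-Bank
```

In the full network, distress at the Greek or Italian sovereign can reach the US bank through the German intermediary; in the filtered view, that pathway is simply invisible.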

None of this, by any means, is meant as a criticism of DTCC or what it has done in the past few years. The 98 percent figure is impressive, and let's hope the 98 percent soon becomes 100 percent and the DTCC finds a way to make ALL information in the repository available to regulators everywhere. Even better would be full disclosure to the public.

Of course, nothing in the comment from the International Swaps and Derivatives Association changes the main point of my column, which was that it is incorrect to believe that more CDS contracts -- or, more generally, more financial interdependencies of any kind, including links created by other derivatives such as interest-rate swaps -- automatically lead to better risk-sharing and a safer banking system. More apparent risk-sharing can actually mean more systemic risk and less overall banking safety.

Wednesday, November 2, 2011

I highly recommend this video of a talk given recently by Sony Kapoor at the Global Systems Dynamics Workshop in Berlin. As it happens, I was there and got to see the talk in person; it's funny and very insightful. Kapoor used to work at Lehman Bros (well before its collapse), and eventually quit investment banking to do more useful things -- he now works at Re-define, a think tank on public policy.

I tried to embed the video here but failed. There seems to be embed protection on it for some reason.

There were a number of other great talks at the meeting as well, all available on video here.

John Kay makes a very good point -- that the ideology and rhetoric surrounding the allegedly wonderful properties of markets has taken us a long way from where we ought to be. We need a more balanced perspective on what markets do well and what they do not do well, where they are useful and where they are not:

A semantic confusion leads us to use the word market to describe both the process which puts food on our table and the activity of gambling in credit default swaps. That confusion has enabled people to claim the virtues of the former for the latter.

In his book Extreme Money, Satyajit Das makes a closely related point which, I'm sure, many economists and finance people will probably find incomprehensible:

Banks are utilities matching borrowers and savers, providing payment services, facilitating hedging etc. The value added comes from reducing the cost of doing so. Paul Volcker questioned the role of finance: “I wish someone would give me one shred of neutral evidence that financial innovation has led to economic growth — one shred of evidence. US financial services increased its share of value added from 2% to 6.5% but Is that a reflection of your financial innovation, or just a reflection of what you’re paid?”

The idea of financial services as a driver of economic growth is absurd – it’s a bit like looking at a car’s gearbox as the basis for propulsion. But financiers don’t necessarily agree with this assessment, unsurprisingly.

Sunday, October 30, 2011

I have a column coming out in Bloomberg Views sometime this evening (US time). It touches on the European debt crisis and the issue of outstanding credit default swaps. This post is intended to provide a few more technical details on the study by Stefano Battiston and colleagues, which I mention in the column, showing that more risk sharing between institutions can, in some cases, lead to greater systemic risk. [Note: this work was carried out as part of an ambitious European research project called Forecasting Financial Crises, which brings together economists, physicists, computer scientists and others in an effort to forge new insights into economic systems by exploiting ideas from other areas of science.]

The authors of this study start out by noting the obvious: credit networks can help institutions both to pool resources to achieve things they couldn't on their own, and to diversify against the risks they face. At the same time, the linking together of institutions by contracts implies a greater chance for the propagation of financial stress from one place to another. The same applies to any network, such as the electrical grid -- sharing demands among many generating stations makes for a more adaptive and efficient system, able to handle fluctuations in demand, yet it also means that failures can spread across much of the network very quickly. New Orleans can be blacked out in a few seconds because a tree fell in Cleveland.

In banking, the authors note, Allen and Gale (references given in the paper) did some pioneering work on the properties of credit networks:

... in their pioneering contribution Allen and Gale reach the conclusion that if the credit network of the interbank market is a credit chain – in which each agent is linked only to one neighbor along a ring – the probability of a collapse of each and every agent (a bankruptcy avalanche) in case a node is hit by a shock is equal to one. As the number of partners of each agent increases, i.e. as the network evolves toward completeness, the risk of a collapse of the agent hit by the shock goes asymptotically to zero, thanks to risk sharing. The larger the pool of connected neighbors whom the agent can share the shock with, the smaller the risk of a collapse of the agent and therefore of the network, i.e. the higher network resilience. Systemic risk is at a minimum when the credit network is complete, i.e. when agents fully diversify individual risks. In other words, there is a monotonically decreasing relationship between the probability of individual failure/systemic risk and the degree of connectivity of the credit network.

This is essentially the positive story of risk sharing which is taken as the norm in much thinking about risk management. More sharing is better; the probability of individual failure always decreases as the density of risk-sharing links grows.
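That baseline intuition is easy to reproduce in a few lines. The following Monte Carlo sketch is my own toy construction, not the Allen-Gale model itself: each bank absorbs an equal 1/k share of k independent unit-mean shocks, and fails if the absorbed amount exceeds its capital. With pure risk sharing and no contagion, more partners means a steadily lower failure probability:

```python
import random

def failure_rate(k, n_banks=20000, capital=3.0, seed=0):
    """Monte Carlo estimate of a bank's failure probability when it absorbs
    an equal 1/k share of k independent unit-mean shocks. Pure risk sharing,
    no contagion -- an illustrative toy, not the Allen-Gale model."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_banks):
        # average of k independent exponential shocks: same mean, less variance
        absorbed = sum(rng.expovariate(1.0) for _ in range(k)) / k
        failures += absorbed > capital
    return failures / n_banks

for k in (1, 2, 5, 20):
    print(k, failure_rate(k))
```

With these numbers, the failure probability drops from roughly 5 percent for a lone bank to essentially zero at k = 20: averaging shrinks the tail of the absorbed shock while leaving its mean untouched.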

This is not what Battiston and colleagues find under slightly more general assumptions of how the network is put together and how institutions interact. I'll give a brief outline of what is different in their model in a moment; what comes out of it is the very different conclusion that...

The larger the number of connected neighbors, the smaller the risk of an individual collapse but the higher systemic risk may be and therefore the lower network resilience. In other words, in our paper, the relationship between connectivity and systemic risk is not monotonically decreasing as in Allen and Gale, but hump shaped, i.e. decreasing for relatively low degree of connectivity and increasing afterwards.

Note that they are making a distinction between two kinds of risk: 1. individual risk, arising from factors specific to one bank's business and which can make it go bankrupt, and 2. systemic risk, arising from the propagation of financial distress through the system. As in Allen and Gale, they find that individual risk DOES decrease with increasing connectivity: banks become more resistant to shocks coming from their own business, but that systemic risk DOES NOT decrease. The latter risk increases with higher connectivity, and can win out in determining the overall chance a bank might go bankrupt. In effect, the effort on the part of many banks to manage their own risks can end up creating a new systemic risk that is worse than the risk they have reduced through risk sharing.

There are two principal elements in the credit network model they study. First is the obvious fact that the resilience of an institution in such a network depends on the resilience of those with whom it shares risks. Buying CDS against the potential default of your Greek bonds is all well and good as long as the bank from which you purchased the CDS remains solvent. In the 2008 crisis, Goldman Sachs and other banks had purchased CDS from A.I.G. to cover their exposure to securitized mortgages, but those CDS would have been more or less without value had the US government not stepped in to bail out A.I.G.

The second element of the model is very important, and it's something I didn't have space to mention in the Bloomberg essay. This is the notion that financial distress tends to have an inherently nonlinear aspect to it -- some trouble or distress tends to bring more in its wake. Battiston and colleagues call this "trend reinforcement," and describe it as follows:

... trend reinforcement is also quite a general mechanism in credit networks. It can occur in at least two situations. In the first one (see e.g. in (Morris and Shin, 2008)), consider an agent A that is hit by a shock due a loss in value of some securities among her assets. If such shock is large enough, so that some of A’s creditors claim their funds back, A is forced to fire-sell some of the securities in order to pay the debt. If the securities are sold below the market price, the asset side of the balance sheet is decreasing more than the liability side and the leverage of A is unintentionally increased. This situation can lead to a spiral of losses and decreasing robustness (Brunnermeier, 2008; Brunnermeier and Pederson, 2009). A second situation is the one in which when the agent A is hit by a shock, her creditor B makes condition to credit harder in the next period. Indeed it is well documented that lenders ask a higher external finance premium when the borrowers’ financial conditions worsen (Bernanke et al., 1999). This can be seen as a cost from the point of view of A and thus as an additional shock hitting A in the next period. In both situations, a decrease in robustness at period t increases the chance of a decrease in robustness at period t + 1.

It is the interplay of such positive feedback with the propagation of distress in a dense network which causes the overall increase in systemic risk at high connectivity.

I'm not going to wade into the detailed mathematics. Roughly speaking, the authors develop some stochastic equations to follow the evolution of a bank's "robustness" R -- considered to be a number between 0 and 1, with 1 being fully robust. A bankruptcy event is marked by R passing through 0. This is a standard approach in the finance literature on modeling corporate bankruptcies. The equations they derive incorporate their assumptions about the positive influences of risk sharing and the negative influences of distress propagation and trend reinforcement.

The key result shows up clearly in the figure (below), which shows the overall probability per unit time for a bank in the network to go bankrupt versus the amount of risk-sharing connectivity in the network (here given by k, the number of partners with which each bank shares risks). It may not be easy to see, but the figure shows a dashed line (labeled 'baseline') which reflects the classical result on risk sharing in the absence of trend reinforcement: more connectivity is always good. But the red curve shows the more realistic result with trend reinforcement -- the positive feedback associated with financial distress -- taken into account. Now adding connectivity is only good for a while, and eventually becomes positively harmful. There's a middle range of optimal connectivity, beyond which more connections only serve to put banks in greater danger.
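For readers who like to tinker, here is a rough simulation sketch in the same spirit -- my own simplified toy with illustrative, guessed parameters, not the authors' actual equations. It combines the three ingredients discussed above: diversification of idiosyncratic shocks over k partners, distress propagation when a partner goes bankrupt, and a trend-reinforcement penalty for banks that lost ground in the previous period. Whether the hump shape appears depends on the parameter choices; the sketch just makes the moving parts concrete:

```python
import random

def bankruptcy_rate(k, n=200, steps=400, seed=1):
    """Toy model in the spirit of Battiston et al. -- a simplified sketch
    with illustrative parameters, not the paper's equations. Robustness R
    lives in [0, 1]; crossing below 0 is a bankruptcy."""
    rng = random.Random(seed)
    # directed ring for simplicity: bank i shares risk with the next k banks
    partners = [[(i + j) % n for j in range(1, k + 1)] for i in range(n)]
    R = [1.0] * n
    prev_drop = [False] * n          # did robustness fall last period?
    failures = 0
    for _ in range(steps):
        shocks = [rng.gauss(0.0, 0.25) for _ in range(n)]
        new_R = list(R)
        for i in range(n):
            # risk sharing: average own shock with the k partners' shocks
            pool = [shocks[i]] + [shocks[j] for j in partners[i]]
            dR = sum(pool) / len(pool)
            if prev_drop[i]:         # trend reinforcement penalty
                dR -= 0.05
            new_R[i] = min(1.0, R[i] + dR)
        failed = {i for i in range(n) if new_R[i] < 0.0}
        failures += len(failed)
        for i in failed:             # distress propagation: each failure
            for j in partners[i]:    # hits all k partners, possibly pushing
                new_R[j] -= 0.3      # them under in the next period
        prev_drop = [False if i in failed else new_R[i] < R[i]
                     for i in range(n)]
        # failed banks are replaced by fresh ones, keeping the rate defined
        R = [1.0 if i in failed else new_R[i] for i in range(n)]
    return failures / (n * steps)   # bankruptcies per bank per unit time

for k in (1, 2, 5, 10):
    print(k, round(bankruptcy_rate(k), 4))
```

Sweeping k and plotting the returned rate is the analogue of the red curve in the figure: diversification helps each bank individually, while the contagion and trend-reinforcement terms work against it as k grows.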

Finally, the authors of this paper make very interesting observations about the potential relevance of this model to globalization, which has been an experiment in risk sharing on a global scale, with an outcome -- at the moment -- which appears not entirely positive:

In a broader perspective, this conceptual framework may have far reaching implications also for the assessment of the costs and benefits of globalization. Since some credit relations involve agents located in different countries, national credit networks are connected in a world wide web of credit relationships. The increasing interlinkage of credit networks – one of the main features of globalization – allows for international risk sharing but it also makes room for the propagation of financial distress across borders. The recent, and still ongoing, financial crisis is a case in point.

International risk sharing may prevail in the early stage of globalization, i.e. when connectivity is relatively ”low”. An increase in connectivity at this stage therefore may be beneficial. On the other hand, if connectivity is already high, i.e. in the mature stage of globalization, an increase in connectivity may bring to the fore the internationalization of financial distress. An increase in connectivity, in other words, may increase the likelihood of financial crises worldwide.

Which is, in part, why we're not yet out of the European debt crisis woods.

Friday, October 28, 2011

If you haven't already heard about this new study on the network of corporate control, do have a look. The idea behind it was to use network analysis of who owns whom in the corporate world (established through stock ownership) to tease out centrality of control. New Scientist magazine offers a nice account, which starts as follows:

The study's assumptions have attracted some criticism, but complex systems analysts contacted by New Scientist say it is a unique effort to untangle control in the global economy. Pushing the analysis further, they say, could help to identify ways of making global capitalism more stable.

The idea that a few bankers control a large chunk of the global economy might not seem like news to New York's Occupy Wall Street movement and protesters elsewhere (see photo). But the study, by a trio of complex systems theorists at the Swiss Federal Institute of Technology in Zurich, is the first to go beyond ideology to empirically identify such a network of power. It combines the mathematics long used to model natural systems with comprehensive corporate data to map ownership among the world's transnational corporations (TNCs).

But also have a look at the web site of the project behind the study, the European project Forecasting Financial Crises, where the authors have tried to clear up several common misinterpretations of just what the study shows.

Indeed, I know the members of this group quite well. They're great scientists and this is a beautiful piece of work. If you know a little about natural complex networks, then the structures found here actually aren't terrifically surprising. However, they are interesting, and it's very important to have the structure documented in detail. Moreover, just because the structure observed here is very common in real world complex networks doesn't mean it's something that is good for society.

An excellent if brief article at Salon.com gives some useful historical context to the current animosity toward bankers -- it's nothing new. Several interesting quotes from key figures in the past:

“Behind the ostensible government sits enthroned an invisible government owing no allegiance and acknowledging no responsibility to the people. To destroy this invisible government, to befoul this unholy alliance between corrupt business and corrupt politics is the first task of statesmanship.”

Theodore Roosevelt, 1912

“We have in this country one of the most corrupt institutions the world has ever known. I refer to the Federal Reserve Board and the Federal Reserve Banks. The Federal Reserve Board, a Government board, has cheated the Government of the United States and the people of the United States out of enough money to pay the national debt. The depredations and the iniquities of the Federal Reserve Board and the Federal Reserve banks acting together have cost this country enough money to pay the national debt several times over…

“Some people think the Federal Reserve Banks are United States Government institutions. They are not Government institutions. They are private credit monopolies, which prey upon the people of the United States for the benefit of themselves and their foreign customers, foreign and domestic speculators and swindlers, and rich and predatory money lenders.”

Louis McFadden, chairman of the House Committee on Banking and Currency, 1932

I should have known this, but didn't -- the Federal Reserve Banks are not United States Government institutions. They are indeed owned by the private banks themselves, even though the Fed has control over taxpayer funds. This seems dubious in the extreme to me, although I'm sure there are many arguments to consider. I recall reading arguments about the required independence of the central bank, but independence is of course not the same as "control by the private banks." Maybe we need to change the governance of the Fed and install some oversight with real power from a non-banking, non-governmental element.

And my favourite:

“Banks are an almost irresistible attraction for that element of our society which seeks unearned money.”

FBI head J. Edgar Hoover, 1955.

In recent years, the attraction has been very strong indeed.

This is why knowing history is so important. Many battles have been fought before.

Thursday, October 27, 2011

Don't miss this post by Matt Taibbi on the Occupy Wall St. movement and its roots as an anti-corruption movement:

People aren't jealous and they don’t want privileges. They just want a level playing field, and they want Wall Street to give up its cheat codes, things like:

FREE MONEY. Ordinary people have to borrow their money at market rates. Lloyd Blankfein and Jamie Dimon get billions of dollars for free, from the Federal Reserve. They borrow at zero and lend the same money back to the government at two or three percent, a valuable public service otherwise known as "standing in the middle and taking a gigantic cut when the government decides to lend money to itself."

Or the banks borrow billions at zero and lend mortgages to us at four percent, or credit cards at twenty or twenty-five percent. This is essentially an official government license to be rich, handed out at the expense of prudent ordinary citizens, who now no longer receive much interest on their CDs or other saved income. It is virtually impossible to not make money in banking when you have unlimited access to free money, especially when the government keeps buying its own cash back from you at market rates.

Your average chimpanzee couldn't fuck up that business plan, which makes it all the more incredible that most of the too-big-to-fail banks are nonetheless still functionally insolvent, and dependent upon bailouts and phony accounting to stay above water. Where do the protesters go to sign up for their interest-free billion-dollar loans?

CREDIT AMNESTY. If you or I miss a $7 payment on a Gap card or, heaven forbid, a mortgage payment, you can forget about the great computer in the sky ever overlooking your mistake. But serial financial fuckups like Citigroup and Bank of America overextended themselves by the hundreds of billions and pumped trillions of dollars of deadly leverage into the system -- and got rewarded with things like the Temporary Liquidity Guarantee Program, an FDIC plan that allowed irresponsible banks to borrow against the government's credit rating.

This is equivalent to a trust fund teenager who trashes six consecutive off-campus apartments and gets rewarded by having Daddy co-sign his next lease. The banks needed programs like TLGP because without them, the market rightly would have started charging more to lend to these idiots. Apparently, though, we can’t trust the free market when it comes to Bank of America, Goldman, Sachs, Citigroup, etc.

In a larger sense, the TBTF banks all have the implicit guarantee of the federal government, so investors know it's relatively safe to lend to them -- which means it's now cheaper for them to borrow money than it is for, say, a responsible regional bank that didn't jack its debt-to-equity levels above 35-1 before the crash and didn't dabble in toxic mortgages. In other words, the TBTF banks got better credit for being less responsible. Click on freecreditscore.com to see if you got the same deal.

STUPIDITY INSURANCE. Defenders of the banks like to talk a lot about how we shouldn't feel sorry for people who've been foreclosed upon, because it's their own fault for borrowing more than they can pay back, buying more house than they can afford, etc. And critics of OWS have assailed protesters for complaining about things like foreclosure by claiming these folks want “something for nothing.”

This is ironic because, as one of the Rolling Stone editors put it last week, “something for nothing is Wall Street’s official policy." In fact, getting bailed out for bad investment decisions has been de rigeur on Wall Street not just since 2008, but for decades.

Time after time, when big banks screw up and make irresponsible bets that blow up in their faces, they've scored bailouts. It doesn't matter whether it was the Mexican currency bailout of 1994 (when the state bailed out speculators who gambled on the peso) or the IMF/World Bank bailout of Russia in 1998 (a bailout of speculators in the "emerging markets") or the Long-Term Capital Management Bailout of the same year (in which the rescue of investors in a harebrained hedge-fund trading scheme was deemed a matter of international urgency by the Federal Reserve), Wall Street has long grown accustomed to getting bailed out for its mistakes.

The 2008 crash, of course, birthed a whole generation of new bailout schemes. Banks placed billions in bets with AIG and should have lost their shirts when the firm went under -- AIG went under, after all, in large part because of all the huge mortgage bets the banks laid with the firm -- but instead got the state to pony up $180 billion or so to rescue the banks from their own bad decisions.

This sort of thing seems to happen every time the banks do something dumb with their money...

I have little time to post this week as I have to meet several writing deadlines, but I wanted to briefly mention this wonderful and extremely insightful speech by Adair Turner from last year (there's a link to the video of the speech here). Turner offers so many valuable perspectives that the speech is worth reading and re-reading; here are a few short highlights that caught my attention.

First, Turner mentions that the conventional wisdom about the wonderful self-regulating efficiency of markets is really a caricature of the real economic theory of markets, which notes many possible shortcomings (asymmetric information, incomplete markets, etc.). However, he also notes that this conventional wisdom is still what has been most influential in policy circles:

.. why, we might ask, do we need new economic thinking when old economic thinking has been so varied and fertile? ... Well, we need it because the fact remains that while academic economics included many strains, in the translation of ideas into ideology, and ideology into policy and business practice, it was one oversimplified strain which dominated in the pre-crisis years.

What was that "oversimplified strain"? Turner summarizes it as follows:

For over half a century the dominant strain of academic economics has been concerned with exploring, through complex mathematics, how economically rational human beings interact in markets. And the conclusions reached have appeared optimistic, indeed at times panglossian. Kenneth Arrow and Gerard Debreu illustrated that a competitive market economy with a fully complete set of markets was Pareto efficient. New classical macroeconomists such as Robert Lucas illustrated that if human beings are not only rational in their preferences and choices but also in their expectations, then the macro economy will have a strong tendency towards equilibrium, with sustained involuntary unemployment a non-problem. And tests of the efficient market hypothesis appeared to illustrate that liquid financial markets are not driven by the patterns of chartist fantasy, but by the efficient processing of all available information, making the actual price of a security a good estimate of its intrinsic value.

As a result, a set of policy prescriptions appeared to follow:

· Macroeconomic policy – fiscal and monetary – was best left to simple, constant and clearly communicated rules, with no role for discretionary stabilisation.

· Deregulation was in general beneficial because it completed more markets and created better incentives.

· Financial innovation was beneficial because it completed more markets, and speculative trading was beneficial because it ensured efficient price discovery, offsetting any temporary divergences from rational equilibrium values.

· And complex and active financial markets, and increased financial intensity, not only improved efficiency but also system stability, since rationally self-interested agents would disperse risk into the hands of those best placed to absorb and manage it.

In other words, all the nuances of the economic theories showing the many limitations of markets seem to have made little progress in getting into the minds of policy makers, thwarted by ideology and the very simple story espoused by the conventional wisdom. Insidiously, the vision of efficient markets so transfixed people that it was assumed that the correct policy prescriptions must be those which would take the system closer to the theoretical ideal (even if that ideal was quite possibly a theorist's fantasy having little to do with real markets), rather than further away from it:

What the dominant conventional wisdom of policymakers therefore reflected was not a belief that the market economy was actually at an Arrow-Debreu nirvana – but the belief that the only legitimate interventions were those which sought to identify and correct the very specific market imperfections preventing the attainment of that nirvana. Transparency to reduce the costs of information gathering was essential: but recognising that information imperfections might be so deep as to be unfixable, and that some forms of trading activity might be socially useless, however transparent, was beyond the ideology...

Turner goes on to argue that the more nuanced views of markets as very fallible systems didn't have much influence, mostly because of ideology and, in short, power interests on the part of Wall St., corporations and others benefiting from deregulation and similar policies. I think it is also fair to say that economists as a whole haven't done a very good job of shouting loudly that markets cannot be trusted to know best, or that they will only give good outcomes in a restricted set of circumstances. Why haven't there been 10 or so books by prominent economists with titles like "markets are often over-rated"?

But perhaps the most important point he makes is that we shouldn't expect a "theory of everything" to emerge from efforts to go beyond the old conventional wisdom of market efficiency:

...one of the key messages we need to get across is that while good economics can help address specific problems and avoid specific risks, and can help us think through appropriate responses to continually changing problems, good economics is never going to provide the apparently certain, simple and complete answers which the pre-crisis conventional wisdom appeared to. But that message is itself valuable, because it will guard against the danger that in the future, as in the recent past, we sweep aside common sense worries about emerging risks with assurances that a theory proves that everything is OK.

That is indeed a very important message.

The speech goes on to touch on many other topics, all with a fresh and imaginative perspective. Abolish banks? That sounds fairly radical, but it's important to realise that things we take for granted aren't fixed in stone, and may well be the source of problems. And abolishing banks as we know them has been suggested before by prominent people:

Larry Kotlikoff indeed, echoing Irving Fisher, believes that a system of leveraged fractional reserve banks is so inherently unstable that we should abolish banks and instead extend credit to the economy via mutual loan funds, which are essentially banks with 100% equity capital requirements. For reasons I have set out elsewhere, I’m not convinced by that extremity of radicalism. ... But we do need to ensure that debates on capital and liquidity requirements address the fundamental issues rather than simply choices at the margin. And that requires economic thinking which goes back to basics and which recognises the importance of specific evolved institutional structures (such as fractional reserve banking), rather than treating existing institutional structures either as neutral pass-throughs in economic models or as facts of life which cannot be changed.

Tuesday, October 25, 2011

From the New York Times (by way of Simon Johnson), a beautiful (and scary) picture of the various debt connections among European nations. (Best to right click and download and then open so you can easily zoom in and out as the picture is mighty big.)

My question is - what happens if the Euro does collapse? Do European nations have well-planned emergency measures to restore the Franc, Deutschmark, Lira and other European currencies quickly? Somehow I'm not feeling reassured.

Monday, October 24, 2011

This is no joke. Studies show that if you examine the genetic material of your typical banker, you'll find that only about 10% of it takes human form. The other 90% is much more slimy and has been proven to be of bacterial origin. That's 9 genes out of 10: bankers are mostly bacteria. Especially Lloyd Blankfein. This is all based on detailed state-of-the-art genetic science, as you can read in this new article in Nature.

OK, I am of course joking. The science shows that we're all like this, not only the bankers. Still, the title of this post is not false. It just leaves something out. Probably not unlike the sales documentation or presentations greasing the wheels of the infamous Goldman Sachs Abacus deals.

Saturday, October 22, 2011

It's encouraging to see that the president of the Federal Reserve Bank of Kansas City has come out arguing that "too big to fail" banks are "fundamentally inconsistent with capitalism." See the speech of Thomas Hoenig. One excerpt:

“How can one firm of relatively small global significance merit a government bailout? How can a single investment bank on Wall Street bring the world to the brink of financial collapse? How can a single insurance company require billions of dollars of public funds to stay solvent and yet continue to operate as a private institution? How can a relatively small country such as Greece hold Europe financially hostage? These are the questions for which I have found no satisfactory answers. That’s because there are none. It is not acceptable to say that these events occurred because they involved systemically important financial institutions.

Because there are no satisfactory answers to these questions, I suggest that the problem with SIFIs is they are fundamentally inconsistent with capitalism. They are inherently destabilizing to global markets and detrimental to world growth. So long as the concept of a SIFI exists, and there are institutions so powerful and considered so important that they require special support and different rules, the future of capitalism is at risk and our market economy is in peril.”

Thursday, October 20, 2011

Take a look at this on the transparency of the Federal Reserve (from Financeaddict) compared to other large nations' central banks. Then watch this, where Timothy Geithner tries very hard to slip sleazily away from any mention of the $13 billion that went directly from AIG to politically well-connected Goldman Sachs. "Did you have conversations with the AIG counterparties?" Response -- waffle, evade, waffle, stare, mumble. After that, try to tell me that the US is not neck deep in serious political corruption.

Following my second recent post on what moves the markets, two readers posted interesting and noteworthy comments, and I'd like to explore them a little. I had presented evidence in the post that many large market movements do not appear to be linked to the sudden arrival of public information in the form of news. Both comments noted that this may leave out of the picture another source of information -- private information brought into the market through the actions of traders:

Anonymous said...

I don't see any mention of what might be called "trading" news, e.g. a large institutional investor or hedge fund reducing significantly its position in a given stock for reasons unrelated to the stock itself - or at least not synchronized with actual news on the underlying. The move can be linked to internal policy, or just a long-term call on the company which timing has little to do with market news, or lags them quite a bit (like an accumulation of bad news leading to a lagged reaction, for instance). These shocks are frequent even on fairly large cap stocks. They also tend to have lingering effect because the exact size of the move is never disclosed by the investor and can spread over long periods of time (i.e. days), which would explain the smaller beta. Yet this would be a case of "quantum correction", both in terms of timing and agent size, rather than a breakdown of the information hypothesis.

Seconding the previous comment, asset price information comes in a lot more forms than simply "news stories about company X." All market actions contains information. Every time a trade occurs there's some finite probability that it's the action of an informed trader. Every time the S&P moves its a piece of information on single stock with non-zero beta. Every time the price of related companies changes it contains new information.

Both of these comments note the possibility that every single trade taking place in the market (or at least many of them) may be revealing some fragment of private information on the part of whoever makes the trade. In principle, it might be such private information hitting the market which causes large movements (the s-jumps described in the work of Joulin and colleagues).

I think there are several things to note in this regard. The first is that, while this is a sensible and plausible idea, it shouldn't be stretched too far. Obviously, if you simply assume that all trades carry information about fundamentals, then the EMH -- interpreted in the sense that "prices move in response to new information about fundamentals" -- essentially becomes true by definition. After all, everyone agrees that trading drives markets. If all trading is assumed to reveal information, then we've simply assumed the truth of the EMH. It's a tautology.

More useful is to treat the idea as a hypothesis requiring further examination. Certainly some trades do reveal private information, as when a hedge fund suddenly buys X and sells Y, reflecting a belief based on research that Y is temporarily overvalued relative to X. Equally, some trades (as mentioned in the first comment) may reveal no information, simply being carried out for reasons having nothing to do with the value of the underlying stock. As there's no independent way -- that I know of -- to determine if a trade reveals new information or not, we're stuck with a hypothesis we cannot test.

But some research has tried to examine the matter from another angle. Again, consider large price movements -- those in the fat-tailed end of the return distribution. One proposed idea looking to private information as a cause proposes that large price movements are caused primarily by large-volume trades by big players such as hedge funds, mutual funds and the like. Some such trades might reveal new information, and some might not, but let's assume for now that most do. In a paper in Nature in 2003, Xavier Gabaix and colleagues argued that you can explain the precise form of the power law tail for the distribution of market returns -- it has an exponent very close to 3 -- from data showing that the size distribution of mutual funds follows a similar power law with an exponent of 1.05. A key assumption in their analysis is that the price impact Δp generated by a trade of volume V grows as the square root of the volume: Δp = kV^(1/2), with k a constant.
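The arithmetic behind that claim is worth making explicit. In the Gabaix et al. argument, power-law fund sizes generate trade volumes whose tail follows a power law with exponent about 3/2; the square-root impact law then doubles the tail exponent, giving returns an exponent near 3. Here's a quick numerical check of that last step (the volume exponent 3/2, the impact constant 0.1 and the sample sizes are assumptions for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

# Trade volumes with a power-law tail P(V > x) ~ x^(-3/2)
zeta_v = 1.5
u = rng.uniform(size=200_000)
volumes = u ** (-1.0 / zeta_v)   # inverse-CDF sampling of a Pareto tail

# Square-root price impact: each trade moves the price by k * sqrt(V)
returns = 0.1 * np.sqrt(volumes)

def hill(x, top=2000):
    """Hill estimator of the tail exponent from the `top` largest values."""
    s = np.sort(x)[::-1]
    return 1.0 / np.mean(np.log(s[:top] / s[top]))

tail_exponent = hill(returns)   # should land near 2 * zeta_v = 3
```

The Hill estimate should land near 3, consistent with the "inverse cubic law" for return distributions mentioned above: raising a power-law variable to the power 1/2 doubles its tail exponent.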

This point of view seems to support the idea that the arrival of new private information, expressed in large trades, might account for the no-news s-jumps noted in the Joulin study. (It seems less plausible that such revealed information might account for anything as violent as the 1987 crash, or the general meltdown of 2008). But taken at face value, these arguments at least seem to be consistent with the EMH view that even many large market movements reflect changes in fundamentals. But again, this assumes that all or at least most large volume trades are driven by private information on fundamentals, which may not be the case. The authors of this study themselves don't make any claim about whether large volume trades really reflect fundamental information. Rather, they note that...

Such a theory where large individual participants move the market is consistent with the evidence that stock market movements are difficult to explain with changes in fundamental values...

But more recent research (here and here, for example) suggests that this explanation doesn't quite hang together, because the assumed relationship between large returns and large volume trades isn't correct. This analysis is fairly technical, but is based on the study of minute-by-minute NASDAQ trading and shows that, if you consider only extreme returns or extreme volumes, there is no general correlation between returns and volumes. The correlation assumed in the earlier study may be roughly correct on average, but it is not true for extreme events. "Large jumps," the authors conclude, "are not induced by large trading volumes."
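The kind of check these studies perform can be sketched as a simple diagnostic: restrict attention to the extreme minutes and ask whether big absolute returns and big volumes actually coincide there. The function below is a rough stand-in for the published analysis, not a reproduction of it; the quantile cutoff is an arbitrary choice.

```python
import numpy as np

def extreme_correlation(returns, volumes, q=0.99):
    """Correlation of |return| and volume restricted to extreme events:
    periods where either |return| or volume exceeds its q-quantile.
    A rough version of the diagnostic described in the text."""
    r = np.abs(np.asarray(returns, dtype=float))
    v = np.asarray(volumes, dtype=float)
    mask = (r > np.quantile(r, q)) | (v > np.quantile(v, q))
    return np.corrcoef(r[mask], v[mask])[0, 1]
```

Applied to real minute-by-minute data, the finding reported in those papers is that this extreme-regime correlation essentially vanishes, even where the average relationship looks roughly like the square-root law.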

Indeed, as the authors of these latter studies point out, people who have valuable private information don't want it to be revealed immediately in one large lump because of the adverse market impact this entails (forcing prices to move against them). A well-known paper by Albert Kyle from 1985 showed how an informed trader with valuable private information, trading optimally, can hide his or her trading in the background of noisy, uninformed trading, supposing it exists. That may be rather too much to believe in practice, but large trades do routinely get broken up and executed as many small trades precisely to minimize impact.
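The incentive to break orders up follows directly from the square-root impact law mentioned earlier. If each child order pays impact on its own volume only (a strong simplification: in reality impact partly persists between child orders), the total cost of executing a parent order falls as 1/√n with the number of children:

```python
import numpy as np

def impact_cost(parent_volume, n_children, k=0.1):
    """Total impact cost of executing a parent order as n equal child
    orders, assuming each child independently pays square-root impact
    k * sqrt(v) on its own volume v. With V the parent volume, this
    works out to k * V**1.5 / sqrt(n)."""
    v = parent_volume / n_children
    return n_children * (k * np.sqrt(v)) * v

costs = [impact_cost(1e6, n) for n in (1, 10, 100)]
```

Under these toy assumptions a hundred-fold split cuts the impact bill by a factor of ten, which is precisely why informed traders prefer to hide in a stream of small trades rather than reveal their information in one large lump.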

All in all, then, it seems we're left with the conclusion that public or private news does account for some large price movements, but cannot plausibly account for all of them. There are other factors. The important thing, again, is to consider what this means for the most meaningful sense of the EMH, which I take to be the view that market prices reflect fundamental values fairly accurately (because they have absorbed all relevant information and processed it correctly). The evidence suggests that prices often move quite dramatically on the basis of no new information, and that prices may be driven as a result quite far from fundamental values.

The latter papers do propose another mechanism as the driver of routine large market movements. This is a more mechanical process centering on the natural dynamics of orders in the order book. I'll explore this in detail some other time. For now, just a taster from this paper, which describes the key idea:

So what is left to explain the seemingly spontaneous large price jumps? We believe that the explanation comes from the fact that markets, even when they are ‘liquid’, operate in a regime of vanishing liquidity, and therefore are in a self-organized critical state [31]. On electronic markets, the total volume available in the order book is, at any instant of time, a tiny fraction of the stock capitalisation, say 10^-5 - 10^-4 (see e.g. [15]). Liquidity providers take the risk of being “picked off”, i.e. selling just before a big upwards move or vice versa, and therefore place limit orders quite cautiously, and tend to cancel these orders as soon as uncertainty signals appear. Such signals may simply be due to natural fluctuations in the order flow, which may lead, in some cases, to a catastrophic decay in liquidity, and therefore price jumps. There is indeed evidence that large price jumps are due to local liquidity dry outs.

Search This Blog

This blog explores the potential for the transformation of economics and finance through the inspiration of physics and the other natural sciences. If traditional economics has emphasized self-regulation and market equilibrium, the new perspective emphasizes the myriad positive feedbacks that often drive markets away from equilibrium and cause tumultuous crashes and other crises. Read more about the idea.

Who am I?

Physicist and science writer. I was formerly an editor with the international science journal Nature and also the magazine New Scientist. I am the author of three earlier books, and have written extensively for publications including Nature, Science, the New York Times, Wired and the Harvard Business Review. I currently write monthly columns for Nature Physics and for Bloomberg Views.