Wednesday, February 15, 2012

A staggering amount of effort has been spent -- and wasted -- exploring the idea of market efficiency. The notoriously malleable efficient markets hypothesis (EMH) claims (in its weakest form) that markets are "informationally efficient" -- market movements are unpredictable because smart investors keep them that way. They should quickly -- even "instantaneously," in some statements -- pounce on any predictable pattern in the market, and by profiting from it act to wipe that pattern out.

I've written too many times (here, here, here, for example) about the masses of evidence against this idea. It's not that predictable patterns never attract investors whose arbitrage tends to wipe those patterns out -- they sometimes do. Part of the problem is that investors often act instead in ways that amplify a pattern (following trends, for example). Moreover, there are fundamental limits to arbitrage -- "the markets can stay irrational longer than you can stay solvent." Still, the EMH stumbles onward like a zombie -- dead, proven incorrect and misleading, yet still taking center place in the way many people think about markets.

I found an illuminating new perspective on the matter in this recent paper by Doyne Farmer and Spyros Skouras, which explores analogies between finance and ecology. The analogy is itself deeply suggestive. They note, for example, how the interactions between hedge funds can be usefully viewed in ecological terms -- funds sometimes act as direct competitors (the profits of one reducing opportunities for another), in other cases as predator and prey, or as symbiotic partners. But I want to look specifically at an effort they make to give a rough estimate of the timescale over which the actions of sophisticated arbitrageurs might reasonably be expected to wipe out a new predictable pattern in the market. That is, if the market for whatever reason is temporarily inefficient -- showing a predictable pattern -- how quickly should it be returned to efficiency? How long is the relaxation time that the EMH claims is "instantaneous," or close to it?

The gist of their idea is very simple. Before you can exploit a predictable pattern, you first have to identify it. If you're going to invest your own money trading against it, you need to be fairly sure you've identified a real pattern, not just a statistical fluke; if you're going to invest somebody else's money, you have to convince them. This takes time. The stronger the pattern, the more it stands out and the less time it should take to be sure; weaker signals are hidden by more noise, and reliable identification takes longer. Estimating how much time it takes to gather good statistics gives an order-of-magnitude figure for how long a pattern should persist before any smart investor can begin reliably trading against it and (perhaps) erasing it.

Here's the specific argument, expressed using the Sharpe ratio S of a strategy exploiting the pattern -- the ratio of the strategy's expected return to the standard deviation of its returns. Statistical confidence in a track record grows only with the square root of its length, so the time needed to reliably identify a pattern scales in inverse proportion to S^2:  T ~ 1/S^2.

This makes obvious intuitive sense. If S is very large, making the pattern obvious, more deterministic and easier to exploit, then the time over which it might be expected to vanish is smaller. Truly obvious patterns can be expected to vanish quickly. But if S is small, the timescale for identification and exploitation grows.

As Farmer and Skouras note, successful investment strategies often have Sharpe ratios of about S = 1, so this gives a result of about 10 years. [This is the result if one makes the analysis on an annual timescale, with the Sharpe ratio calculated on a yearly basis. If we're talking about fast algorithmic trading, then the analysis takes place on a shorter timescale.]
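A quick way to see where the 10-year figure comes from is a back-of-the-envelope calculation (my own sketch, not code from the paper; the function name and the significance threshold z = 3 are assumptions): measured over T years, a strategy with annualized Sharpe ratio S has a t-statistic of roughly S times the square root of T, so demanding a t-statistic of z before trusting the strategy gives T = (z/S)^2 years.

```python
def years_to_identify(sharpe, z=3.0):
    """Rough time (in years) before a strategy with annualized Sharpe
    ratio `sharpe` stands out from noise at t-statistic `z`.

    After T years the t-statistic of the track record is roughly
    sharpe * sqrt(T); setting that equal to z and solving for T
    gives T = (z / sharpe)**2.
    """
    return (z / sharpe) ** 2

# S = 1, a typical successful strategy: about a decade to be sure
print(years_to_identify(1.0))   # -> 9.0, order of magnitude 10 years
# an unusually obvious pattern, S = 3: confirmed within a year
print(years_to_identify(3.0))   # -> 1.0
```

The same formula on a daily or millisecond timescale gives correspondingly shorter identification times, which is why the choice of timescale matters below.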

So, 10 years is the order of magnitude estimate -- which is a rather peculiar interpretation of the word "instantaneous." Perhaps that word should be replaced in the EMH with "very slowly," although that somewhat dampens the appeal of the idea: "The EMH asserts that sophisticated investors will very slowly identify and exploit any inefficiencies in the market, tending to erase those inefficiencies over a few decades or so." Given that new inefficiencies can be expected to arise in the meantime, you might as well call this more plausible hypothesis the PIMH: the perpetually inefficient markets hypothesis.

And their estimate, Farmer and Skouras point out, is actually optimistic:

We should stress that this estimate is based on idealized assumptions, such as log-normal returns – heavy tails, autocorrelations, and other effects will tend to make the timescale even longer.... As a given inefficiency is exploited, it will become weaker and the Sharpe ratio of investment strategies associated with it drops. As the Sharpe ratio becomes smaller the fluctuations in its returns become bigger, which can generate uncertainty about whether or not the strategy is still viable. This slows down the approach to efficiency even more.

Of course, as I mentioned above, this analysis depends on timescale. Take t in years and we're thinking about predictable patterns emerging on the usual investment horizon of a year or longer, patterns exploited by hedge funds and mutual funds of the more traditional (not high frequency) kind. Here we see that the time to expect predictable patterns to be wiped out is very long indeed. If 10 years is the order of magnitude, then it's likely some of these patterns persist for several decades -- getting up to the time of a typical investing career. Hardcore supporters of the EMH should learn to speak more honestly: "We have every reason to expect that predictable market inefficiencies should be wiped out fairly quickly, at least on the timescale of a human investment career."

All in all, this way of estimating the time for relaxation back to the "efficient equilibrium" suggests that the relaxation is anything but fast, and often very slow. The EMH may be right that there likely aren't any obvious patterns, but more subtle predictable patterns will likely persist for long periods of time, even while they present real profit opportunities. The market is not in equilibrium. And with no mechanism to prevent them, new predictable patterns and "inefficiencies" should be emerging all the time.

Tuesday, February 14, 2012

Numerian, one of the most insightful financial bloggers I know, gets to the bottom of the markets' surge in the wake of Greece's passage of the new "austerity" measures:

For now, stock markets are happy because they get their periodic injection of heroin – oops, make that “liquidity” – to keep the game going. The game is the one we have all lived through our whole lives – the one where capitalism continues to grow by taking on more and more debt, until now every country is at the point where only the government is big enough to take on the enormous amounts of new debt necessary to keep paying principal and interest on the old debt. At least some countries are: the United States, the UK, Germany, France. Greece of course lost that privilege several years ago, and now even big borrowers like Italy are allowed into the markets only for very short term maturities.

The game, in short, is about over, choking to death on too much debt, kept on a resuscitator by politicians and central bankers who know the public has no way to stop them from raising trillions of dollars or euros with new bond issues. Only the market can stop an out-of-control debtor. Greece has found that out, Italy and Spain and Portugal are close to finding that out, and the UK and the United States are on the list, being no more virtuous than the Greeks. Once the government can no longer borrow, default in some form is inevitable, and austerity follows. The Greeks have austerity handed to them by the Germans; everyone else will be able to choose their own forms of austerity, as different economic and social forces fight with each other in a country that has run out of borrowing capacity and must live off the taxes it is able to raise.

If the stock market had a long term view, it would think about these things. It would look at Greece as a combination horror story and warning sign. Instead, the stock market lives for the day only, and for now the debt binge continues, and the fix of easy credit is being pumped into the financial system once more. Let the celebrations continue.

Monday, February 13, 2012

In a new paper on trends in high-frequency trading, Neil Johnson and colleagues note that:

... a new dedicated transatlantic cable is being built just to shave 5 milliseconds off transatlantic communication times between US and UK traders, while a new purpose-built chip iX-eCute is being launched which prepares trades in 740 nanoseconds ...

This just illustrates the technological arms race underway as firms try to out-compete each other to gain an edge through speed. None of the players in this market worries too much about what this arms race might mean for the longer-term systemic stability of the market; it's just race ahead and hope for the best. I've written before (here and here) about some analyses (notably from Andrew Haldane of the Bank of England) suggesting that this race is generally increasing market volatility and will likely lead to disaster in one form or another.

We may be getting close. If Johnson and his colleagues are correct, the markets already show signs of having made a transition into a machine-dominated phase in which humans have little control.

Most readers here will probably know about futurist Ray Kurzweil's prediction of the approaching "singularity" -- the idea that as our technology becomes increasingly intelligent it will at some point create self-sustaining positive feedback loops driving explosively faster science and development, leading to a kind of super-intelligence residing in machines. Humans will be out of the loop and left behind. Given that so much of the future vision of computing now centers on bio-inspired computing -- computers that operate more along the lines of living organisms, able to do things like self-repair, adaptation, true reproduction, etc. -- it's easy to imagine this super-intelligence ultimately being strongly biological in form, while also exploiting technologies that earlier evolution was unable to harness (superconductivity, quantum computing, etc.). In that case -- again, if you believe this conjecture has some merit -- it could turn out, ironically, that all of our computing technology will act as a kind of midwife, aiding a transition from Homo sapiens to some future non-human but super-intelligent species.

But forget that. Think singularity, but in the smaller world of the markets. Johnson and his colleagues ask whether today's high-frequency markets are moving toward a boundary of speed where human intervention and control are effectively impossible:

The downside of society’s continuing drive toward larger, faster, and more interconnected socio-technical systems such as global financial markets, is that future catastrophes may be less easy to foresee and manage -- as witnessed by the recent emergence of financial flash-crashes. In traditional human-machine systems, real-time human intervention may be possible if the undesired changes occur within typical human reaction times. However,... in many areas of human activity, the quickest that someone can notice such a cue and physically react, is approximately 1000 milliseconds (1 second)

Obviously, most trading now happens much faster than this. Is this worrying? With the authors, let's look at the data.

In the period from 2006-2011, they found (looking at many stocks on multiple exchanges) that there were about 18,500 specific episodes in which markets, in less than 1.5 seconds, either 1. ticked down at least 10 times in a row, dropping by more than 0.8% or 2. ticked up at least 10 times in a row, rising by more than 0.8%. The figure below shows two typical events, a crash and a spike (upward), both lasting only 25 ms.
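To make the event definition concrete, here is a rough sketch (my own, not the authors' code) of how one might scan a tick series for such runs of consecutive same-direction price moves:

```python
def find_fractures(ticks, min_ticks=10, max_duration=1.5, min_move=0.008):
    """Scan a tick series for runs of at least `min_ticks` consecutive
    up-ticks or down-ticks, completed within `max_duration` seconds,
    with a total fractional price change of at least `min_move`.

    `ticks` is a list of (time_in_seconds, price) pairs.
    Returns a list of (direction, start_index, end_index) events.
    """
    events = []
    i, n = 0, len(ticks)
    while i < n - 1:
        step = ticks[i + 1][1] - ticks[i][1]
        direction = 1 if step > 0 else -1 if step < 0 else 0
        if direction == 0:          # flat tick: no run starts here
            i += 1
            continue
        j = i + 1                   # extend the run while ticks agree
        while j < n - 1 and (ticks[j + 1][1] - ticks[j][1]) * direction > 0:
            j += 1
        run_length = j - i          # number of same-direction ticks
        duration = ticks[j][0] - ticks[i][0]
        move = abs(ticks[j][1] - ticks[i][1]) / ticks[i][1]
        if run_length >= min_ticks and duration <= max_duration and move >= min_move:
            events.append(("up" if direction == 1 else "down", i, j))
        i = j
    return events

# synthetic example: 12 straight down-ticks of 0.1 within half a second
ticks = [(k * 0.04, 100.0 - 0.1 * k) for k in range(13)]
print(find_fractures(ticks))   # -> [('down', 0, 12)]
```

The thresholds above simply restate the published criteria (10 ticks, 1.5 seconds, 0.8%); how the authors handled simultaneous quotes or multiple exchanges is not captured here.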

Apparently, these very brief and momentary downward crashes or upward spikes -- the authors refer to them as "fractures" or "Black Swan events" -- are about equally likely. And they become more likely as one goes to shorter time intervals:

... our data set shows a far greater tendency for these financial fractures to occur, within a given duration time-window, as we move to smaller timescales, e.g. 100-200ms has approximately ten times more than 900-1000ms.

But they also find something much more significant. They studied the distribution of these events by size, and considered if this distribution changes when looking at events taking place on different timescales. The data suggests that it does. For times above about 0.8 seconds or so, the distribution closely fits a power law, in agreement with countless other studies of market returns on times of one second or longer. For times shorter than about 0.8 seconds, the distribution begins to depart from the power law form. (It's NOT that it becomes more Gaussian, but it does become something else that is not a power law.) The conclusion is that something significant happens in the market when we reach times going below 1 second -- roughly the timescale of human action.

Ok. Now for the punchline -- an effort to understand how this transition might happen. In my last blog post I wrote about the Minority Game -- a simple model of a market in which adaptive agents attempt to profit by using a variety of different strategies. It reproduces the realistic statistics of real markets, despite its simplicity. Some people may wonder whether such a model can really be useful in exploring real markets; this new work by Johnson and colleagues offers a powerful example of the minority game's value in action.

Their hypothesis is that the observed transition in market dynamics below one second reflects "a new fundamental transition from a mixed phase of humans and machines, in which humans have time to assess information and act, to an ultrafast all-machine phase in which machines dictate price changes." They explore this in a model that...

...considers an ecology of N heterogeneous agents (machines and/or humans) who repeatedly compete to win in a competition for limited resources. Each agent possesses s > 1 strategies. An agent only participates if it has a strategy that has performed sufficiently well in the recent past. It uses its best strategy at a given timestep. The agents sit watching a common source of information, e.g. recent price movements encoded as a bit-string of length M, and act on potentially profitable patterns they observe.

This is just the minority game as I described it a few days ago. One of the truly significant lessons emerging from its study is that we should expect markets to have two fundamentally distinct phases of dynamics depending on the parameter α = P/N, where P is the number of different past histories the agents can perceive, and N is the number of agents in the game. [P = 2^M if the agents use bit strings of length M in forming their strategies]. If α is small, then there are lots of players relative to the number of different market histories they can perceive. If α is big, then there are many different possible histories relative to only a few people. These two extremes lead to very different market behaviour.

Johnson and colleagues suggest that the transition between these regimes is just what shows up in the statistics around the one-second threshold. They first argue that the regime for large α (many possible histories relative to the number of agents) should be associated with the trading regime above one second, where both people and machines take part. Why? As they suggest,

We associate this regime (see Fig. 3) with a market in which both humans and machines are dictating prices, and hence timescales above the transition (>1s), for these reasons: The presence of humans actively trading -- and hence their ‘free will’ together with the myriad ways in which they can manually override algorithms -- means that the effective number (i.e. α > 1). Moreover α > 1 implies m is large, hence there are more pieces of information available which suggests longer timescales... in this α > 1 regime, the average number of agents per strategy is less than 1, hence any crowding effects due to agents coincidentally using the same strategy will be small. This lack of crowding leads our model to predict that any large price movements arising for α > 1 will be rare and take place over a longer duration – exactly as observed in our data for timescales above 1000ms. Indeed, our model’s price output (e.g. Fig. 3, right-hand panel) reproduces the stylized facts associated with financial markets over longer timescales, including a power-law distribution.

What they're getting at here is that crowding in the space of strategies, by creating strong correlations among the strategies of different agents, should tend to make large market movements more likely. After all, if lots of agents come to use the very same strategy, they will all trade the same way at the same time. In this regime above one second, with both humans and machines present, they suggest there shouldn't be much crowding; the dynamics here do give a power-law distribution of movements, which is what is found in all markets in this regime.

In contrast, they suggest that the sub-one-second regime should be associated with the α < 1 phase of the minority game:

Our association of the α < 1 regime with an all-machine phase is consistent with the fact that trading algorithms in the sub-second regime need to be executable extremely quickly and hence be relatively simple, without calling on much memory concerning past information: Hence M will be small, so the total number of strategies will be small and therefore... α < 1. Our model also predicts that the size distribution for the black swans in this ultrafast regime (α < 1) should not have a power law since changes of all sizes do not appear – this is again consistent with the results in Fig. 2.

The authors go on to quantify this transition in a little more detail. In particular, they calculate in the simple minority game model the standard deviation of the price fluctuations. In the regime α < 1 this turns out to be roughly proportional to the number N of agents in the market. In contrast, it goes in proportion only to the square root of N in the α > 1 regime. Hence, the model predicts a sharp increase in the size of market fluctuations when entering the machine dominated phase below one second.
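Why crowding matters so much for the size of fluctuations can be seen with a toy calculation (my own illustration, not the paper's): if N agents flip coins independently, their net action fluctuates like the square root of N; if crowding makes them all act in lockstep, it fluctuates like N itself.

```python
import random

def aggregate_std(n_agents, n_steps, crowded, seed=0):
    """Standard deviation of the net action of n_agents coin-flip traders.

    If `crowded`, every agent copies one and the same decision (perfect
    strategy crowding); otherwise each agent decides independently.
    """
    rng = random.Random(seed)
    totals = []
    for _ in range(n_steps):
        if crowded:
            totals.append(n_agents * rng.choice((-1, 1)))
        else:
            totals.append(sum(rng.choice((-1, 1)) for _ in range(n_agents)))
    mean = sum(totals) / n_steps
    return (sum((t - mean) ** 2 for t in totals) / n_steps) ** 0.5

# independent agents: fluctuations of order sqrt(N) = 10
print(aggregate_std(100, 2000, crowded=False))
# perfectly crowded agents: fluctuations of order N = 100
print(aggregate_std(100, 2000, crowded=True))
```

The minority game interpolates between these extremes: in the α < 1 phase many agents share strategies, pushing the market toward the N-scaling and hence much larger swings.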

The paper as a whole takes a bit of time to get your head around, but it is, I think, a beautiful example of how a simple model that explores some of the rich dynamics of how strategies interact in a market can give rise to some deep insights. The analysis suggests, first, that the high frequency markets have moved past "the singularity," their dynamics having become fundamentally different -- uncoupled from the control, or at least strong influence, of human trading. It also suggests, second, that the change in dynamics derives directly from the crowding of strategies that operate on very short timescales, this crowding caused by the need for relative simplicity in these strategies.

This kind of analysis really should have some bearing on the consideration of potential new regulations on HFT. But that's another big topic. Quite aside from practical matters, the paper shows how valuable perspectives and toy models like the minority game might be.

Monday, February 6, 2012

I have an essay just published in Bloomberg. The essay is quite short and I wanted to give interested readers a few more details and links to further reading on the subject: how to build simple yet plausible models of financial markets. Hence, this post.

Traditional models of markets in economics work from the idea of equilibrium. What happens in the market is assumed to reflect the interplay of two things: 1. the decisions of countless market participants who act more or less rationally in their best interests, their actions collectively bringing the market toward a balance in which stocks, bonds, or other instruments take their realistic "fundamental" values (or at least something close to those values), and 2. external shocks that hit the market in the form of new pieces of information concerning businesses, political events, changes in regulations, technological discoveries and so on. Without shocks from outside, these models imply, the market would settle down (very rapidly it is assumed) into an unchanging state. Everything that happens in the market, this perspective asserts, can be traced to new information perturbing that balance.

I've written before (here, for example, and here) about the considerable evidence against this picture. This includes many large market movements that occur in the absence of any apparent "shock" to the market, and "excess volatility" -- a general tendency for market prices to move more than they should by any rational reckoning of their true value. Many studies have established strong mathematical regularities in market fluctuations that reflect this excess volatility, including the fat tailed statistics of market returns and the long memory of returns -- the way markets absorb information, empirically, doesn't seem to be fast at all, but involves a long slow process of digestion stretching over a decade and more.

So, how to build models that go beyond this equilibrium picture? If the simple notion of equilibrium is too simple to explain market reality, why not non-equilibrium models? That is, models of markets in which the individuals' actions, even in the absence of outside shocks, never lead the market to settle down into some equilibrium balance. Rather, people might instead perpetually change their ideas and strategies, interacting in a way that gives the market a rich internal dynamics, much like the weather. Studies of very simple two-player strategic games actually show that this kind of outcome should be expected in any case in which the decisions facing individuals are quite complex and not open to simple rational solution -- that is, in most cases (and obviously in the case of any real-world market).

Science often makes progress by posing toy models that capture the logical essence of some problem. By struggling with them, people learn new ways of thinking. In my Bloomberg column I touched on one of the toy models that has recently inspired non-equilibrium models of markets. This is the so-called El Farol Bar problem invented by economist Brian Arthur in 1994. The original paper is still well worth a read; it's one of those papers that puts forth an idea so sound and plausible that it seems obvious -- and yet its ideas were revolutionary, running against everything in mainstream economic habit. To this day, this remains true. But enough background.

My Bloomberg column gives a rough introduction to the idea of the bar problem -- it is an archetype of a situation in which it is not possible for people to make decisions through rational deliberation, but rather have to take a more practical approach, forming hypotheses and theories, and learning from experience. The result is a dis-equilibrium situation that continues to evolve in time, never settling into an equilibrium. There are plenty of articles giving more detailed synopses of the model and how it works.

But the essence of the bar model has since been explored in much greater detail through another model -- the so-called minority game. It's just the bar model stripped down to be as simple as possible while still preserving the essential element that no one in the game can "figure it out." There is no strictly rational way to play; intelligent play requires perpetual adaptation and learning. The minority game is probably the simplest mathematical model of a market in this sense. It is, of course, too simple to be anything more than a toy model. But it is, at least, a toy model that looks a lot more like real markets than does the EMH or anything else in economics. And the benefit of its relative simplicity is that it is possible to see how various fundamental factors influence how it works, without thousands of other potential complications obscuring matters.

So -- what is the minority game?

Arthur assumed that people would enjoy the bar if it was 60% or less attended, and would be miserable if more than 60% attended. The minority game simplifies the cutoff to 50%, giving symmetry to the problem -- now the people who do well in any week are those who act so as to be in the minority (going when most others stay home, and vice versa). The minority game also dispenses with the story of the bar. Just let each of N agents (with N odd) choose either 1 or -1 on each play, with those in the minority gaining some payoff. Now let them play many times. Each agent simply tries, each time, to be in the minority. How does one do that?

Of course, everything hinges on how we suppose the agents play. Following Arthur, the typical approach is to suppose that they use some kind of trial-and-error learning. The record of outcomes of the game is given by some sequence of 1s and -1s -- these giving the winning choices that would have been in the minority in each instance. For example, the last 5 outcomes might be 1,1,-1,-1,1. A "strategy" in the minority game associates every such recent history with a choice of what to do next -- a choice of 1 or -1. A strategy may say that IF (1,1,-1,-1,1) THEN choose 1, or instead IF (1,1,-1,-1,1) THEN choose -1. If we suppose that agents look back to the past M outcomes in trying to forecast the future (this is often referred to as their "brain size"), then there are P = 2^M different histories that the agents can resolve or discern. A strategy, mathematically speaking, is a map or association that assigns a choice 1 or -1 to each and every one of the possible P = 2^M different histories. A strategy is a plan for action, if you will, an IF...THEN statement of what to do for every possible sequence of happenings (at the level the agents can perceive).

Now, since each possible history can be associated with 1 or -1, there are actually 2^P or 2^(2^M) different possible strategies. Even with M as low as 5, this is already 2^32 -- more than four billion different possible strategies. The number grows extremely fast as M gets bigger.
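Since a strategy is just a lookup table assigning +1 or -1 to each of the P = 2^M histories, the counting is easy to check in code (a quick sketch of my own; the variable names are illustrative):

```python
import random

M = 5
P = 2 ** M                # number of distinguishable histories
n_strategies = 2 ** P     # an independent +1/-1 choice for each history

print(P)                  # -> 32
print(n_strategies)       # -> 4294967296, more than four billion

# one concrete strategy: a random map from history index to +1/-1
strategy = tuple(random.choice((-1, 1)) for _ in range(P))
history = 0b10011         # last M outcomes packed into an M-bit integer
action = strategy[history]  # what this strategy says to do next
print(action)
```

Packing the history into an integer index, as here, is the standard trick that makes minority-game simulations fast.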

The way the minority game is played is that, at the outset, each one of the players gets assigned a number S of these strategies at random. You can think of this as their intellectual inheritance. It's a collection of S different "ways to think" about how to predict the world. (The interesting properties of the model do not depend on this choice.) Agents use these strategies to play the game, and keep track of which strategies seem to work well, and which ones don't. This is generally done through some simple algorithm wherein each strategy in an agent's bag gains a point when it makes a good prediction, and gets docked a point if it predicts incorrectly. On each play, the agents look into their bag and play the strategy with the highest score. This is like a person mulling over the various theories in their head, and using the one that seems to have been most successful in the past.

So that's it. You have a collection of agents using simple learning dynamics to try to predict the future based on the past. They try at each moment to choose what most others will not choose. Clearly no one strategy can emerge as the winner, because if it did and everyone started using it then they would all be in the majority, and all be losers. Successful strategies, therefore, sow the seeds of their own demise. This is what makes the game interesting. But wait -- how is this linked to markets?

Qualitatively, the minority game is a little like a market. It's a situation in which each person tries to profit by choosing the right strategy, but where what is "right" is determined by the collective actions of everyone together. There is no "right" independent of what others do. The minority game, like any market, is a game involving an enormous and complex ecology of interacting strategies. Except there is no price. A minor addition can change that, however.

We get the simplest crude model of a market if we think of those choosing 1 as "buyers" and those choosing -1 as "sellers" of some stock. Naturally, more buyers than sellers drives a price up, and an imbalance the other way drives it down. Mathematically, you can take the change in the logarithm of the price (the return) at time t as being proportional to Number(buyers) - Number(sellers). The resulting price has a highly erratic behaviour that very roughly resembles the movements of financial markets. In particular, even without any external information hitting the market, no shocks from outside or perturbations of any kind, the price fluctuates up and down unpredictably merely through the perpetual evolution of the agents' trading strategies. One day, apparently caused by nothing, the price may suddenly drop by 10% -- all because many agents chose to sell just as a part of trying to predict what was about to happen next.
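Putting the pieces together, the whole market model fits in a few dozen lines. This is my own minimal sketch of the standard setup (parameter values are arbitrary): strategies are random lookup tables, each agent plays its currently best-scoring strategy, and the log-price moves in proportion to the buy/sell imbalance.

```python
import random

def minority_game(n_agents=101, M=3, S=2, n_steps=500, seed=0):
    """Bare-bones minority-game market producing a toy price series.

    Each agent holds S fixed random strategies; a strategy maps each of
    the P = 2**M possible recent histories to +1 (buy) or -1 (sell).
    Agents play their best-scoring strategy, the minority side wins,
    and the log-price moves with the buy/sell imbalance.
    """
    rng = random.Random(seed)
    P = 2 ** M
    strategies = [[tuple(rng.choice((-1, 1)) for _ in range(P))
                   for _ in range(S)] for _ in range(n_agents)]
    scores = [[0] * S for _ in range(n_agents)]
    history = rng.randrange(P)        # last M outcomes packed into M bits
    log_price, prices = 0.0, []
    for _ in range(n_steps):
        # each agent plays its best-scoring strategy at this history
        actions = [strategies[a][max(range(S), key=lambda s: scores[a][s])][history]
                   for a in range(n_agents)]
        net = sum(actions)            # buyers minus sellers (odd N: never 0)
        winner = -1 if net > 0 else 1 # the minority side
        for a in range(n_agents):     # reward strategies that chose the minority
            for s in range(S):
                scores[a][s] += 1 if strategies[a][s][history] == winner else -1
        log_price += net / n_agents   # return proportional to the imbalance
        prices.append(log_price)
        history = ((history << 1) | (1 if winner == 1 else 0)) % P
    return prices

prices = minority_game()
print(len(prices), min(prices), max(prices))
```

Even with no external news feeding in, the resulting series wanders up and down purely through the agents' shifting strategy choices -- exactly the point made above.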

This post has already gone on long enough. For lots of detail, I highly recommend this excellent review of the minority game by Tobias Galla, Giancarlo Mosetti and Yi-Cheng Zhang (Zhang is one of the co-inventors of the minority game). There is a great deal more to say, but two points are most important for now:

1. This simplest version of a minority-game market does NOT give rise to the so-called "stylized facts" of real markets -- the fat tailed statistics of return distributions, and the long memory and persistence of market volatility. In fact, the returns for this simplest minority game market are Gaussian, at least when the number of players is large. However, this model is only a beginning, and these stylized facts emerge quite naturally as soon as one includes a few more realistic features. For example, in this version of the game, every agent must either choose 1 (buy) or -1 (sell) at each moment. Hence, the volume of trading is always the same; it is forced to be fixed, which is of course completely unnatural.

An easy way to relax this is to give the agents the ability also to abstain -- they might just hold what they have, neither buying nor selling. This can be included with some measure of confidence, whereby an agent doesn't trade unless one of its strategies has been very successful in the recent past. This immediately leads to a market with fluctuations in volume, and this market does turn out to have stylized facts much like those of real markets, with fat-tailed returns and long memory. Another way to allow for a fluctuating volume is to let the agents accrue wealth over time, and let the volume of their trading grow in proportion to that wealth. Again, this is an obvious step toward reality, and again leads to market fluctuations with many of the rich statistical features seen in real markets, even in such simple models.
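A minimal sketch of the abstention idea (my own, loosely following the standard "grand canonical" variant of the game; the decay rate and threshold are assumptions): agents trade only when their best strategy has a positive, slowly decaying score, so volume now fluctuates from step to step.

```python
import random

def minority_game_with_abstention(n_agents=101, M=3, S=2,
                                  n_steps=500, seed=1):
    """Minority-game market in which agents may sit out.

    As in the basic game, but an agent trades only when its best
    strategy has a positive recent track record; scores slowly decay,
    so it is recent success that counts.  Volume fluctuates.
    """
    rng = random.Random(seed)
    P = 2 ** M
    strategies = [[tuple(rng.choice((-1, 1)) for _ in range(P))
                   for _ in range(S)] for _ in range(n_agents)]
    scores = [[0.0] * S for _ in range(n_agents)]
    history = rng.randrange(P)
    returns, volumes = [], []
    for _ in range(n_steps):
        actions = []
        for a in range(n_agents):
            best = max(range(S), key=lambda s: scores[a][s])
            # abstain unless the best strategy has been winning lately
            actions.append(strategies[a][best][history]
                           if scores[a][best] > 0 else 0)
        net = sum(actions)
        volumes.append(sum(1 for x in actions if x != 0))
        winner = -1 if net > 0 else 1   # ties arbitrarily go to +1
        for a in range(n_agents):
            for s in range(S):
                scores[a][s] += 1 if strategies[a][s][history] == winner else -1
                scores[a][s] *= 0.99    # fade out the distant past
        returns.append(net / n_agents)
        history = ((history << 1) | (1 if winner == 1 else 0)) % P
    return returns, volumes

returns, volumes = minority_game_with_abstention()
print(min(volumes), max(volumes))   # volume now varies from step to step
```

With participation free to vary, bursts of trading cluster together, which is the mechanism behind the fat tails and long memory mentioned above.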

2. The other important point is that the minority game is so simple that it can be solved with analytical techniques borrowed from physics. The details aren't so important, but the results show a fascinating behaviour in the market, with a kind of phase transition separating two distinct regimes of behaviour. The picture that emerges gives a beautiful coarse-grained understanding of markets as ecologies of interacting agents.

The basic story is that one key parameter in the minority game determines its overall character. The parameter is α = P/N, where P = 2^M is the number of different past histories the agents can discern and N is the number of agents in the game. If α is small, then there are lots of players relative to the number of different market histories they can perceive. If α is big, then there are many different possible histories relative to only a few people. These two extremes lead to very different market behaviour, and it's not hard to see why.

The figure below (from the Galla et al paper) shows what is effectively the market predictability as a function of this parameter α=P/N. In other words, this shows how much a past record of prices can be used to make accurate predictions of future prices. For large α, this predictability is positive, and it grows as α grows. Why? In principle, this kind of predictability ought to make for easy success in the game -- but only if the agents have enough flexibility in their behaviour to pounce on the pattern and exploit it. In this regime, it happens frequently that patterns emerge in the prices and yet no agent has the intellectual capacity to exploit them. Recall that each individual is given a random set of strategies to work with. With very few people in the market, that means very few strategies relative to the many possible market histories on which the agents make their decisions. In short, if the diversity of agents and their behavioural strategies is insufficient to let them jump on all possible patterns, the market retains a degree of predictability.

In the opposite limit, α small, things look very different. Now we have an oversupply of agents and strategies in play relative to the number of discernible histories. It is very likely that any predictable pattern will fall within some agent's bag of strategies; that agent will jump on it and profit, and this activity will tend to wipe the pattern out. This phase looks rather like the efficient market -- such a rich diversity of agents with different skills and mindsets that any predictable patterns are immediately washed away.
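One can measure this crossover directly in simulation. The sketch below estimates the conditional predictability H, the average over histories μ of the squared conditional mean attendance ⟨A|μ⟩², in each regime. The parameter values, run length, and normalization by N are all illustrative assumptions, and a single run with one seed is of course only suggestive.

```python
import numpy as np

def predictability(N, m, T=20000, S=2, seed=0):
    """Run a minority game and return H/N, where H averages <A|mu>^2
    over histories mu -- how predictable the attendance A is, given
    the recent history. Zero means an (information-)efficient market."""
    rng = np.random.default_rng(seed)
    P = 2 ** m
    strategies = rng.choice([-1, 1], size=(N, S, P))
    scores = np.zeros((N, S))
    history = int(rng.integers(P))
    sums, counts = np.zeros(P), np.zeros(P)
    for t in range(T):
        best = scores.argmax(axis=1)
        A = strategies[np.arange(N), best, history].sum()
        if t > T // 2:                      # discard the initial transient
            sums[history] += A
            counts[history] += 1
        minority = -np.sign(A)
        scores += strategies[:, :, history] * minority
        history = ((history << 1) | (1 if minority > 0 else 0)) % P
    seen = counts > 0
    return float(np.mean((sums[seen] / counts[seen]) ** 2) / N)

H_low = predictability(N=101, m=2)    # alpha = 4/101  ~ 0.04 (crowded regime)
H_high = predictability(N=101, m=7)   # alpha = 128/101 ~ 1.27 (sparse regime)
print(f"H/N at low alpha:  {H_low:.3f}")
print(f"H/N at high alpha: {H_high:.3f}")
```

With these (assumed) parameters one typically finds H near zero in the crowded, small-α phase and clearly positive in the sparse, large-α phase, matching the figure.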

One of the most interesting conjectures emerging from this picture is that real markets are probably something like "marginally efficient," at least much of the time. If a market is fully efficient and hence unpredictable (many agents in the market, low α), then it's very hard to make profits there. This should make the market less attractive, and some people should leave to do something else. But as the number of people falls, this pushes the market toward higher α and into the inefficient phase in which the market becomes predictable. This should in turn attract more people into the market to profit from easy patterns. In this simple picture, the market should tend to evolve toward a value of α where the predictability of prices is low but not zero, making it hard but not impossible to find patterns in the market. This is, judging in a loose non-scientific way, just about where markets often seem to be.

Works on the minority game now number in the thousands, and I only wanted to get across the most basic points. It's a beautiful, simple model that captures a great deal of the qualitative character of markets, and with further changes it can be taken step by step toward more sophisticated models. There's no reason to stick to the idea that payoffs go to those in the minority. The game has been generalized, for example, to majority-type games (where trend following pays off) and mixed majority-minority games (where market reversion and trend following both play a role). A few steps further and you can forget the game itself and simply model markets as systems in which people interact with various trading strategies, each buying and selling, and where prices move through their collective action. Lots of models study a simple mixture of fundamentalists, who try to trade on the basis of real information about stocks, and trend followers who ride the momentum. Computing power and imagination present the only limits.
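The fundamentalist-plus-trend-follower mixture mentioned above reduces, in its simplest form, to a two-term price update; every coefficient below is an assumed illustrative value, not taken from any particular paper.

```python
import numpy as np

rng = np.random.default_rng(2)

T = 2000
p = np.zeros(T)            # log-price
v = 0.0                    # fundamental log-value (held constant here)
a_f = 0.05                 # strength of fundamentalist demand
a_c = 0.04                 # strength of chartist (trend-following) demand
lam = 1.0                  # price impact of net demand

for t in range(1, T):
    fundamentalist = a_f * (v - p[t-1])                     # buy below value, sell above
    chartist = a_c * (p[t-1] - p[t-2]) if t > 1 else 0.0    # follow the recent trend
    noise = 0.01 * rng.standard_normal()                    # idiosyncratic orders
    demand = fundamentalist + chartist + noise
    p[t] = p[t-1] + lam * demand                            # price moves with net demand

returns = np.diff(p)
print("return std:", returns.std())
```

The fundamentalist term pulls the price back toward value while the chartist term amplifies recent moves; tilting the balance between a_f and a_c is the simplest way to move such a model between mean-reverting and bubble-prone behaviour.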

I'm going to explore some of these more sophisticated models in future, but the minority game remains extremely important precisely because of its simplicity. That simplicity allows full analysis leading to the picture of two phases shown above. But this is already considerably more complex than market models based on assumed fully rational behaviour. It's crude, very much so, but a small step in the right direction.

This blog explores the potential for the transformation of economics and finance through the inspiration of physics and the other natural sciences. If traditional economics has emphasized self-regulation and market equilibrium, the new perspective emphasizes the myriad positive feedbacks that often drive markets away from equilibrium and cause tumultuous crashes and other crises. Read more about the idea.

Who am I?

Physicist and science writer. I was formerly an editor with the international science journal Nature and also the magazine New Scientist. I am the author of three earlier books, and have written extensively for publications including Nature, Science, the New York Times, Wired and the Harvard Business Review. I currently write monthly columns for Nature Physics and for Bloomberg Views.