The Global Economic Observatory

In September of last year, Dave Lauer—a high-frequency trader turned whistleblower—stated before a Congressional subcommittee in Washington that due to the technological arms race being waged on Wall Street, the “U.S. equity markets are [now] in dire straits” and that “we are truly in a crisis.”

Taking his case before Congress, the former senior quant for one of the largest HFT firms explained how the “unpredictable and disturbing behavior” of algorithms interacting at speeds we can’t comprehend has led our markets to become “unwieldy, overly complex, and extremely fragile”—a situation that demands not just incremental change, but radical technological reform.

As he explains:

[The stock market] is subject to manipulation, whether nefarious or accidental, on a daily basis... Technological mayhem is more frequent and likely to increase...It is simply a matter of time before we have another catastrophe of the same magnitude or worse than the Flash Crash. The next time it happens, we may not be so fortunate with regard to the timing – it was only luck that the Flash Crash didn’t start in the morning, inciting markets around the world to crash, or at 3:45pm EST, with the market closing after the drop, but before it could recover. If this were to happen, there would be an overnight exodus from the market with disastrous consequences for the US economy.

Given the above, Lauer listed a number of changes, admittedly unpopular (with the HFT community, at least), to be implemented posthaste. I shall not list them all here, as you can read them for yourself (see link); however, one in particular seemed unthinkable only a short time ago.

Towards the end of his testimony, Lauer adds: “The SEC should also consider implementing a market-wide surveillance mechanism.” Either they took him seriously or were already developing this very thing, for shortly thereafter the SEC announced its version of such a system (read more about the SEC’s MIDAS here).

Later, in an email, he clarified what this should look like:

I talk about a market-wide surveillance system that would enable adaptive, dynamic kill switches. These would be able to adapt to both market conditions and be very selective, able to kill on a per-strategy basis [using]...artificial intelligence and machine learning techniques...

Recently, MIT professors Andrei Kirilenko and Andrew Lo wrote a paper titled “Moore’s Law vs. Murphy’s Law,” in which they argue that technology now clearly outpaces human intelligence, acting in ways beyond our comprehension and making it impossible for humans to regulate the markets effectively:

[A]utomation and increasingly higher transaction speeds make it nearly impossible for humans to provide effective layers of risk management and nuanced judgment in a live trading environment. To be perceived as effective, regulation must operate at the same speed as the activities it oversees...

The only thing that can operate as fast as the machines trading on Wall Street, of course, is another machine. As Lauer would say, a highly adaptive surveillance system using artificial intelligence and machine learning.
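To make the idea of an adaptive, per-strategy kill switch concrete, here is a minimal, purely illustrative sketch. Every name and threshold below is hypothetical—this is not Lauer's design or the SEC's MIDAS—and a real surveillance system would feed far richer market data into learned models rather than a simple statistical trigger:

```python
from collections import defaultdict, deque
from statistics import mean, stdev

class StrategyKillSwitch:
    """Hypothetical per-strategy kill switch: trips a single strategy
    when its message rate deviates sharply from its own recent history,
    leaving all other strategies free to keep trading."""

    def __init__(self, window=50, z_threshold=4.0):
        self.z_threshold = z_threshold  # z-score beyond which we halt the strategy
        # Keep a rolling window of recent samples for each strategy separately.
        self.history = defaultdict(lambda: deque(maxlen=window))
        self.halted = set()

    def observe(self, strategy_id, messages_per_second):
        """Record one activity sample; return True if the strategy may keep trading."""
        if strategy_id in self.halted:
            return False
        hist = self.history[strategy_id]
        if len(hist) >= 10:  # require a minimal baseline before judging
            mu, sigma = mean(hist), stdev(hist)
            if sigma > 0 and (messages_per_second - mu) / sigma > self.z_threshold:
                self.halted.add(strategy_id)  # kill this strategy only
                return False
        hist.append(messages_per_second)
        return True
```

The design choice worth noting is the one Lauer emphasizes: the trigger is selective (per strategy) and adaptive (judged against that strategy's own recent behavior), rather than a blunt market-wide circuit breaker. A strategy whose message rate suddenly spikes far beyond its baseline is halted while its neighbors continue untouched.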

The question is, would such a system complement or replace human intelligence in regulating our financial markets? On this point, Lo and Kirilenko comment:

[H]umans have been pushed to the periphery of a much faster, larger, and more complex trading environment...

[The] financial markets have become so complex that no individual or group of individuals is capable of conceptualizing all possible interactions that could occur...

This is no joke. As I reported last year, in the wake of the 2008 financial crisis, the financial system has become so interconnected that derivatives, for example, are now beginning to operate at a level of mathematical complexity associated with quantum physics—specifically, a field known as "Quantum Chromodynamics".

Given this new level of global interconnectivity, Lo and Kirilenko finally agree with Lauer by writing:

[T]he growing interconnectedness of financial markets and institutions has created a new form of accident: a systemic event, where the “system” now extends beyond any single organization or market... Therefore, Financial Regulation 2.0 necessarily involves system-wide supervision and regulation.

So, what can we conclude from the above?

To summarize, it is apparent to both experts in the field and many others that we have crossed a significant threshold in society where machine intelligence has clearly outpaced us in one of the most vital and sensitive networks we’ve ever created: the financial system.

Furthermore, given that it is “nearly impossible”, as Lo and Kirilenko say, for humans to regulate something faster, larger, and more complex than our minds can fathom, the only way to prevent machines or technology from getting out of control is to leverage the power of a much faster, larger, and more "intelligent" machine.

Since the financial system also now “extends beyond any single organization or market”, if this automated regulator is to succeed it must, of course, have immense reach and power. The question is, will our highly interconnected and technologically-dominated markets fall prey to Murphy’s or Moore’s Law first?

As technology forecaster Paul Saffo puts it:

A new Bretton Woods is in our future, a moment when chastened global leaders will commit to building a new institutional order. But it will take another, larger economic crisis before the collective will to do so is found. In the meantime, we must immediately undertake another equally important task: We need to create a global economic observatory, an entity capable of collecting and digesting the data needed to truly understand the global economy in all its shifting complexity.

Lauer, Lo, and Kirilenko might argue that we need artificial intelligence to regulate the markets—perhaps the entire global financial system—but Saffo, who also sits on the faculty of Singularity University, doesn’t think we should stop there. Given the inability of our limited minds to "understand the global economy in all its shifting complexity", why not apply artificial intelligence to the international monetary system and help govern the entire global economy? In this case, Skynet looks less like a hostile takeover and more like a merger.

By the way, if you think this has gone too far, I invite you to consider my last piece, Banking Big on the Singularity, where I pointed out that the prominent research publication Bank Credit Analyst—read by major institutions, central banks, and governments around the world—now deems the prospect of superintelligence in machines, or even in modified humans, worthy of investors’ attention.

It’s easy to dismiss the notion of ‘self-aware’ machines, superhuman AI, or the so-called “Singularity”, perhaps because we, along with those who predict such things, hold so many misconceptions about what’s taking place. However, whether we choose to consider them or not, the physical impact of replacing human intelligence with machine intelligence is all around us.

Consider, for instance, this recent warning on autonomous weapons:

[R]obotic weapons eventually will make kill decisions on the battlefield with no more than a veneer of human control. Full lethal autonomy is no mere next step in military strategy: It will be the crossing of a moral Rubicon. Ceding godlike powers to robots reduces human beings to things with no more intrinsic value than any object...

Recently the military verbiage has shifted from humans remaining "in the loop" regarding kill decisions, to "on the loop." The next technological steps will put soldiers "out of the loop," since the human mind cannot function rapidly enough to process the data streams that computers digest instantaneously to provide tactical recommendations and coordinate with related systems.

Once again, we arrive at a familiar theme: the human mind doesn't cut it anymore. Therefore, whether in finance, on the factory floor, or now in war, exactly where and how far technology will go is unclear.

In trying to see how this will play out, cosmologist and MIT professor Max Tegmark asks:

So what will actually happen? This is something we should be really worried about. The industrial revolution has brought us machines that are stronger than us. The information revolution has brought us machines that are smarter than us in certain limited ways, beating us in chess in 2006, in the quiz show "Jeopardy!" in 2011, and at driving in 2012, when a computer was licensed to drive cars in Nevada after being judged safer than a human. Will computers eventually beat us at all tasks, developing superhuman intelligence?

Most, I think, would say no. However, this is largely due to a narrow and antiquated definition of what a computer is. A computer was once thought of as a stand-alone device that sat under your desk. Today, a computer is part of a distributed network that may not consist of much hardware at all. In fact, each year the hardware becomes smaller and smaller, fitting into the palms of our hands or, now, onto a pair of glasses. Computation, we see, now occurs largely in the cloud, with the computer itself becoming an evolving wireless communications network.

We should therefore not think of machines or computers as becoming self-aware or superintelligent, but of society itself gaining awareness at a higher scale through technology, becoming part of a massive real-time computation. We are rapidly integrating our lives with technology that predicts our desires and future actions (think of Facebook and other sites collecting data and analyzing your movements in order to maximize advertising profits). Algorithms increasingly track what you, I, and the collective think, feel, and do—using that data to build complex models of the future. Instead of computing with bits on silicon, the global interconnected network spread throughout society computes on us, and is us.

To circle back to where we started, the most interesting thing about this process is where it is evolving most rapidly. The ability of computers to make buy and sell decisions at nearly the speed of light, modeling the complex interactions of assets around the globe, does not lead to superintelligent AI; it merely provides a real-time map of the market's collective intelligence. As the resolution of that map becomes clearer and this system continues to evolve, we'll find that, in the end, the Singularity is less an "intelligence explosion" and more a grand convergence of all the data and information we produce into a single point.

What will be there at the center of it? Saffo’s “global economic observatory” to govern the world as it looks back on itself.