
Friday, 31 October 2014

In this post, I will discuss the roots of the EURO crisis and some ideas for how to overcome it. For example, “Qualified Money” would introduce reputation-dependent conversion rates to reward investments in quality.

Short history of money and future of Bitcoin

To discuss the future of money, let us first look back a little. By providing a universally interchangeable good, the historical invention of money made the exchange of goods much easier. But while money was initially based on valuable materials such as gold, it was later increasingly replaced by symbolic values, such as paper bills or even entries in a digital account. Today, money is created in great amounts not only by central banks; ordinary banks create it as well. If, one day, we no longer trust that we will get valuable goods in exchange, the value of money will obviously be gone. This process is known as "hyperinflation." It has happened many times in human history.

Bitcoin is an attempt to prevent such a horrible scenario from happening again. It's a peer-to-peer payment system which does not require banks anymore. But it has other problems. Bitcoins are designed such that the overall amount of this digital currency grows slowly and will eventually saturate, thereby establishing something like a new "gold standard." However, history has shown that a gold standard is not flexible enough to enable a resilient financial system. If the volume of money grows more quickly than economic output, we will eventually have inflation. Then the value of money, i.e. its purchasing power, will drop. Conversely, if the volume of money does not grow as quickly as productivity, we run into another problem, called "deflation." Then money becomes more valuable over time, and people would rather hoard it than spend it, as they can buy goods more cheaply in the future. Under such conditions, business cannot thrive. So the volume of money should grow in proportion to productivity, at least on average. Let me add, though, that the above dependencies regarding inflation and deflation are expected to hold only in the long run. Central banks and other stakeholders can manipulate financial markets, which may create delayed adjustments, biases, and abnormal market behaviors. As a consequence, it becomes increasingly difficult to interpret market signals correctly and to respond to them properly. In the long run, I think, loss of control is almost inevitable.

A new kind of money - How the idea was born

For the above reasons, Bitcoin will not be the final solution, even though the technology behind it will be crucial for future currencies and other services requiring secure transactions. Ultimately, however, we need to fundamentally re-invent money, as it is not adaptive enough for our complex world. We have to ask ourselves why financial systems have kept crashing for thousands of years, and what is fundamentally wrong with the way we have set them up.

Thinking about it for a couple of
years, I came to the conclusion that, even though money is a great invention, it's
outdated. Therefore, it's time to create a better one. The argument goes as
follows. Currently, money is a scalar, i.e. the simplest mathematical quantity
one can think of. It is neither multi-dimensional nor does it have a memory. But
mathematics offers a much richer spectrum of concepts to define exchange
processes, such as vectors, i.e. multi-dimensional quantities, and network graphs.
In fact, money comes from somewhere and goes somewhere else. Who transfers
money to whom defines a network of money flows. Therefore, money should be
represented by network quantities. And money should be multi-dimensional to
allow other things to happen apart from the eternal ups and downs.

This made me think about "Qualified Money" – multi-dimensional money with a memory. Since Roman times, people have said: "Money doesn't stink!" In other words: it does not matter where it comes from or how it is earned. However, what if we could give it a scent, like a perfume? And what if this would co-determine the value of money? In a discussion during a visit to Zurich, my colleagues Tobias Preis, Dave Rand, and Ole Peters were fascinated by this idea. Later on, I combined it with a reputation system and called the new concept "Qualified Money." Such money could earn reputation and, with this, additional value! This approach has commonalities with local currencies, but is more general and relates to the way modern stock markets work. However, Qualified Money opens up entirely new possibilities.

What's wrong with our financial architecture?

One of the problems of today's financial system is the possibility of cascade effects. What started as a local problem in the Californian real estate market became a world financial and economic crisis, eventually causing social and political unrest. But how could things go that far? For this, see the figure below. The world financial system lacks engineered breaking points to stop cascades. At home, every one of us has electrical fuses to make sure that a local electrical overload does not cause a larger problem, e.g. the house burning down. In the financial system, in contrast, the strategy is just the opposite: to ease the load on troubled banks, some of their problems have been taken on by the states, which are now in trouble as well, and so on. Rather than isolating infected "patients" and curing them with a Marshall-plan-like intensive care program, we infected many other countries that were healthy before. In this way, the overall damage became much larger than it needed to be, and it is not clear how we will ever recover from the debt levels. If we don't get these problems solved any time soon, cultural values such as tolerance and solidarity, or even peace, might be in danger. It is now the very fabric of our society that is at stake. In societies with mass unemployment, it can take two generations or more until the good relationship between citizens and their state and a healthy social structure recover.

What worries me even more is the fact that we don't currently have a backup financial system. For most other systems that may fail, we have contingency plans – a "plan B" or "plan C." In fact, one might argue that one reason why our current financial system performs badly is the absence of competing financial systems. Given that we believe in competition, why don't we take this seriously and build alternative systems, which could also serve as backups, as plans B or C? It's not enough to complain about having to bail out banks and about lacking alternatives. I also doubt that tougher regulations will fix the problems. As large banks can handle additional regulations best, while small and medium-sized banks struggle with them, these regulations may cause big banks to grow even bigger. Therefore, we should rather promote alternatives. In fact, with Bitcoin and peer-to-peer lending systems, some alternatives are finally emerging, but we need more and better ones.

At the moment, I would say, we cannot take it for granted that the current financial system will still work 10 or 20 years from now. Most industrial states have debts of the order of 100 to 200 percent of gross domestic product (GDP), sometimes even a multiple of this. Controlled inflation has been considered a recipe to reduce these debts. The trick can work if applied by a single country or just a few. However, if the USA, Europe, Japan, and further countries all try to reduce their debts in this way at the same time, it may trigger an inflationary spiral that gets out of hand.

Besides, the attempts of central banks to control the level of inflation haven't worked well so far. To save banks, to encourage investments in the real economy, and to raise the level of inflation, central banks have pumped massive, almost unimaginable amounts of money into the financial markets. However, as it turns out, more than five years after the financial crisis started, companies still have difficulties borrowing money for investments, many banks are still in a shaky position, and there are even worries about deflationary tendencies.

Why is this? Banks don't trust that companies will pay their loans back and, besides, they need more capital themselves. Most money created by the central banks does not reach the companies in need. Instead, given the low interest rates, money is mainly invested in financial markets. This drives up stock prices even though real economic growth is negligible. Rising stock prices create further incentives for virtual investments in the stock markets rather than real investments in companies. Consequently, central banks have created a gigantic bubble in the financial markets. In some sense, a virtual inflation has happened there – stocks have become more expensive even though most companies haven't grown. When this stock market bubble bursts, a large fraction of the money will flee into real values. This will suddenly drive enormous price inflation, as there are not enough material values that these huge amounts of money can buy. Therefore, inflation might easily get out of control. So far, this has not happened, as misleading incentives have caused a temporary allocation of money in the stock markets. In the meantime, low interest rates are undermining the prospects of life insurers and pension funds.

An unfeasible control problem

I have argued above that the central banks haven't been able to achieve the effects they wanted. But this is not because they are incompetent. It's because the control problem they are expected to solve is ill-defined – it's literally unsolvable. The reason is that they don't have enough instruments or, to put it differently, not enough control variables. The central banks can increase the volume of money and they can change the interest rate. That's basically it. They may also buy and sell bonds, but many think they shouldn't. The classical instruments of central banks are apparently not sufficient to do the job. In other words, the weapons of central banks are blunt. New possibilities are urgently needed, and this basically means additional ways of adaptation.

Why is this so? Let's take an example from the world of taxes. Apart from raising money for public services and investments, taxes are often used to incentivize or discourage certain kinds of behavior. For example, many countries have taxes on cigarettes, alcohol, and fuel to reduce their consumption. They may also offer tax reductions for investments in environmentally friendly heating, better home insulation, or solar panels, to promote the production of renewable energy. It is clear that each of these goals can be achieved by suitable taxation-based incentives. But what happens if one tries to reach many goals with one single "control variable," the overall amount of money to be paid in taxes? One may end up investing in solar energy production while smoking more cigarettes, which altogether would not change the individual tax level. So, on average, people may not be very responsive to a multitude of rewards and sanctions. In other words, we are unlikely to reach many goals with a single control variable such as one-dimensional money.
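As a toy illustration of this argument, consider a sketch in which the only observable control signal is a household's total tax bill. The tax rates, the rebate, and the behaviors below are invented for this sketch, not real policy values:

```python
# Toy model: with only the total tax bill as a control variable,
# opposite behavior changes can cancel out and become invisible.
TAX = {"cigarettes": 4.0, "fuel": 1.0}   # hypothetical tax per unit consumed
REBATE = {"solar_panels": 200.0}          # hypothetical rebate per unit installed

def total_tax(behavior):
    """The single one-dimensional signal the tax system observes."""
    taxes = sum(TAX[k] * behavior.get(k, 0) for k in TAX)
    rebates = sum(REBATE[k] * behavior.get(k, 0) for k in REBATE)
    return taxes - rebates

before = {"cigarettes": 100, "fuel": 500}
after = {"cigarettes": 150, "fuel": 500, "solar_panels": 1}

# Installing solar panels while smoking more leaves the tax bill
# unchanged, so the one-dimensional signal cannot tell them apart:
print(total_tax(before))
print(total_tax(after))
```

Both behaviors produce the same tax bill, which is exactly why a single control variable cannot steer many goals at once.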

More control variables needed

This problem is actually well known from control theory. For example, complex chemical production processes cannot be steered by a single control variable such as the temperature or the concentration of a certain chemical ingredient. In a complicated production process, one must be able to control many different variables, such as the pressure and the concentrations of all ingredients. It is also instructive to compare this with ecosystems. The plant and animal life in a place is not determined by a single control variable such as the amount of water alone, but also by the temperature, humidity, and various kinds of nutrients such as oxygen, nitrogen, phosphorus, etc. Our bodies, too, require many kinds of vitamins and nutrients to stay healthy. So, why should our economic system be different? Why shouldn't a healthy financial system need several kinds of money?

If we had different kinds of money, we could probably influence how much of the money handed out by central banks is finally used by companies for real investments. This would require at least one additional kind of money. So, let us assume that, besides cash and goods, we would have two kinds of electronic money: "real" electronic money ("REMO") and "virtual" electronic money ("VEM"). For example, besides real electronic EUROs, we could introduce "AEROs" as virtual electronic money. By law, cash and real electronic money could be spent on goods and real investments, but not on financial products. Virtual electronic money, in contrast, could be invested in financial products, but not spent on goods. The important point is now that the central bank could hand out REMO and VEM at different interest rates. If REMO were handed out at a lower interest rate than VEM, this would incentivize real investments.

Two kinds of electronic money

Of course, cash, REMO and VEM could be converted into each other. However, by means of conversion fees, one could also create incentives for one kind of money as compared to the other(s). This would create new "degrees of freedom," as a physicist would say, which would enable a better adaptation of the financial system to the actual needs. For example, if REMO earns some interest but cash does not, or if cash loses value due to inflation, this speaks against saving large amounts of cash. It would be better to spend cash on consumption, or to turn it into REMO or VEM. If lending REMO is cheaper than lending VEM, this incentivizes real investments over virtual investments in financial products. If VEM can be converted into REMO for free, but converting REMO into VEM is costly, this again incentivizes real investments.
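How such differential interest rates and asymmetric conversion fees could steer incentives can be sketched in a few lines. All rates and fees below are invented for illustration, not proposed policy values:

```python
# Illustrative parameters only -- invented for this sketch.
LENDING_RATE = {"REMO": 0.01, "VEM": 0.04}  # central bank lends REMO more cheaply
CONVERSION_FEE = {                           # fee charged on the amount converted
    ("VEM", "REMO"): 0.00,   # free: encourages moving into real investment
    ("REMO", "VEM"): 0.05,   # costly: discourages moving into financial products
    ("CASH", "REMO"): 0.00,
    ("CASH", "VEM"): 0.02,
}

def borrowing_cost(kind: str, amount: float, years: int) -> float:
    """Total interest paid when borrowing `amount` of a given money kind."""
    rate = LENDING_RATE[kind]
    return amount * ((1 + rate) ** years - 1)

def convert(amount: float, source: str, target: str) -> float:
    """Amount received after the (asymmetric) conversion fee."""
    fee = CONVERSION_FEE[(source, target)]
    return amount * (1 - fee)

# Borrowing 1,000 units for 5 years is cheaper in REMO than in VEM,
# which favors real over virtual investments:
print(borrowing_cost("REMO", 1000, 5))  # ≈ 51
print(borrowing_cost("VEM", 1000, 5))   # ≈ 217
# Converting REMO into VEM costs 5%; the reverse direction is free:
print(convert(1000, "REMO", "VEM"))
print(convert(1000, "VEM", "REMO"))
```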

So, this little extension of our financial system would allow the central banks to stimulate real investments in companies' production capacities more effectively. Central banks would no longer have to produce a bubble of cheap money, which sooner or later overheats the financial and real estate markets. As we have seen in the past, this can cause dangerously large bubbles, which sooner or later produce large-scale global damage when they burst.

Europe's "little" mistake

But we should dare to think one step further. While the economies of the USA and the United Kingdom seem to be recovering from the 2008 financial crash and the subsequent economic crisis, most of Europe is still not doing well after several years of struggle – in fact, some indicators are worse than after the Great Depression of the 1930s. In January 2014, Nobel prize winner Joe Stiglitz (*1943) summarized the situation in Basel, Switzerland, as follows: before the crisis, Europe was doing very well. It had some of the strongest economies in the world, some of the best public infrastructures, and some of the best education, health, and social systems. However, Europe made a "little" mistake: without creating a sufficiently sophisticated institutional framework, it introduced a new currency, the EURO, which replaced more than a dozen other currencies. Altogether, this created more problems than benefits, he judged.

We are not talking here about the widespread complaints of citizens that the EURO made life more expensive – be they justified or not. Instead, we must talk about the fact that, if we compare all countries on a one-dimensional scale such as gross domestic product (GDP) per capita, there will always be winners and losers. In this case, Germany happens to be a winner and Greece a loser, but it could have been otherwise. We must recognize that, given the different productivity of the countries, it was just a matter of time until economic forces were unleashed that required adjustments. In the past, this adjustment happened naturally through adaptation of the currency exchange rates. Now, in more than 15 European countries, this is no longer possible. As above, this problem can again be solved by adding new "degrees of freedom," i.e. new control variables. But how can we introduce these variables without giving up the EURO, which many consider an important peace-building project in Europe?

Vitamins for the financial system

In the following, I will suggest introducing "Qualified Money". Qualified Money has a number of different qualifiers, which turn money into a multi-dimensional means of exchange. The value of Qualified Money is given not only by its amount, but also by a conversion factor that depends on various qualities. For example, if one decided that geographic origin should be a qualifier, one would enable country-specific EUROs, allowing the value of money to adjust to the respective economic strength. The same approach can be used to define regional or local currencies, if desired. So, one could save the prestige project of the "EURO" by making the currency more flexible. The regional variants of EUROs would be converted into each other much as we currently convert different currencies, such as EUROs, DOLLARs, or YENs, on the exchange markets.

However, Qualified Money would not have
to be connected to local origin. The concept has potential for extension. For
example, the unemployment rate, the Millennium development goals, or any socio-economic-environmental
factors considered relevant for human well-being could be used to define
qualifiers. In our lives, it's not just money that matters. People care about
many things, and this opens up entirely new possibilities!

We could all be doing well

It is important to recognize that both the self-organization and the management of complex dynamical systems require sufficiently many control variables, not just one. Establishing different kinds of money would serve this purpose. In contrast to the currency system we have today, these different kinds of money would not be easily convertible. There would be an adjustable conversion tax or fee, to discourage conversion and to encourage earning the different kinds of money instead. This would naturally extend the approach we discussed above in connection with VEM and REMO, and it would create a multi-dimensional incentive system, rewarding us for different kinds of efforts, including social and environmental ones.

Of course, such a conversion tax or
fee would create something like “friction” in the multi-dimensional money
system. However, we know from physics that friction can enable important
functionality. How would it be to have such a multi-dimensional money and
exchange system? Depending on how many dimensions we allow for, everyone could
be doing well, each one on the dimensions fitting his or her personal
strengths, skills, or expertise.

Today we have many ranking systems to compensate for the lack of such a multi-dimensional money system. Besides lists of the world's richest persons, we rank tennis players and soccer players. Others collect medals, or scores in computer games. Scientists count the citations earned by their publications, etc. Even though some of these ranking scales don't imply any material value, they can motivate people to make an effort. Hence, we can turn this into a mechanism to create a multi-dimensional reward system, which we need to enable self-organizing socio-economic systems.

One might even consider the possibility of allowing everyone to establish a certain number of currencies of their own. In a sense, this would be the logical next step after allowing banks and Bitcoin to create money (rather than just central banks). The value of these personalized currencies would then depend on how much others trust them and are willing to engage in a related value exchange. I assume that, after some time, there would be just a reasonably small number of successful currencies in wide use. However, they might have some interesting new properties compared to the currencies we have today. Therefore, opening up money creation to innovation might be really worthwhile.

Money with a memory

Let us now assume that electronic money were traceable. In this case, we could give electronic money a "memory," and we could make its value dependent on its transaction history. To put it in simple terms, money that went through the hands of Albert Einstein or John F. Kennedy could have more value than money that was earned with "blood diamonds." Possible qualifiers could thus be how the money was earned, its origin or destination, the reputation of the products bought, or the reputation of the producer or seller. Hence, we can further differentiate electronic money by means of additional qualifiers. This might be imagined as treating money units like stocks or like individual currencies. In other words, a (reputation-dependent) conversion factor would apply when financial transactions are made.
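A reputation-dependent conversion factor could work roughly as in the following toy model. The reputation scale and the simple averaging rule are my assumptions for illustration, not part of the original proposal:

```python
from dataclasses import dataclass, field

@dataclass
class QualifiedCoin:
    """A traceable money unit that remembers reputation scores from
    its past transactions (toy model with an assumed scoring scale,
    where 1.0 is neutral)."""
    amount: float
    history: list = field(default_factory=list)

    def record_transaction(self, reputation_score: float) -> None:
        """Append the reputation score of the latest transaction."""
        self.history.append(reputation_score)

    def conversion_factor(self) -> float:
        """Money without history converts at face value; otherwise at
        the average reputation of its past transactions."""
        if not self.history:
            return 1.0
        return sum(self.history) / len(self.history)

    def effective_value(self) -> float:
        return self.amount * self.conversion_factor()

coin = QualifiedCoin(amount=100.0)
coin.record_transaction(1.2)  # earned from a highly reputed producer
coin.record_transaction(1.0)  # a neutral transaction
print(coin.effective_value())  # reputation raises the effective value
```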

Benefits of money with reputation

I recognize that some people might feel uneasy about money becoming dependent on reputation. However, in some sense, this is already happening when we go shopping on the Internet. Depending on the country we live in, the type of computer we are using, and perhaps further personal qualifiers such as income, we might get different product offers than others, at different prices. This is part of the logic of personalized recommender systems. One might find it upsetting to pay a higher price than others, but it could also be a lower one. When we book a plane ticket or a hotel room, we receive different offers, too, depending on when and where we book, and whether we are regular customers or not.

In any case, there are quite a few benefits of reputation-based Qualified Money. For example, it becomes easier for producers and stores to sell high-quality products at a higher price. Furthermore, to get an idea of what future shopping might look like, assume that there is a database in which information about products is stored, such as the price, the ingredients, the durability, the level of environmentally friendly production, the level of socially friendly production, and much more. Moreover, assume our smartphone knows our preferences, for example, that we give the price a weight of 50%, environmental friendliness a weight of 30%, and fair production a weight of 20%, and that we want to avoid products with particular ingredients we are allergic to. Then, by scanning product codes and retrieving the related product information, our smartphone will recommend the best-fitting products to us. Furthermore, if customers were willing to share their preference settings, producers and sellers could better tailor their assortment of products to customers' wishes. Therefore, customers would benefit as well: they would get more of the products they really want.
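The preference-weighted recommendation just described can be sketched as follows. The weights are the ones from the text; the product data and the allergen are invented examples, not a real database:

```python
# Preference weights from the text; all attribute scores are assumed
# to be normalized to [0, 1], where for price a higher score means cheaper.
PREFERENCES = {"price": 0.5, "environment": 0.3, "fairness": 0.2}
ALLERGENS = {"peanuts"}  # hypothetical ingredient the user must avoid

products = [
    {"name": "A", "price": 0.9, "environment": 0.4, "fairness": 0.5, "ingredients": {"wheat"}},
    {"name": "B", "price": 0.6, "environment": 0.9, "fairness": 0.8, "ingredients": {"soy"}},
    {"name": "C", "price": 1.0, "environment": 1.0, "fairness": 1.0, "ingredients": {"peanuts"}},
]

def score(product):
    """Weighted sum of the product's attribute scores."""
    return sum(weight * product[attribute] for attribute, weight in PREFERENCES.items())

def recommend(products):
    """Filter out products containing any allergen, then pick the
    highest-scoring remaining product."""
    safe = [p for p in products if not (p["ingredients"] & ALLERGENS)]
    return max(safe, key=score)

# Product C scores best but contains the allergen, so B is recommended:
print(recommend(products)["name"])  # "B"
```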

Balancing transparency and anonymity

If properly set up, Qualified Money can create a good balance between transparency and anonymity, such that we can have the benefits of both. Transparency can promote more responsible and desirable behavior. It allows ethical values and higher quality to survive in a framework of free economic competition. In fact, a considerable fraction of people cares about ethics and fair products. Even financial investors are getting interested in ethical investments, as they tend to be more sustainable. At the moment, we often find ourselves in a situation where the competition between companies is so harsh that they have to reduce production costs. Sooner or later, this decreases salary levels, production standards, product quality, and/or sustainability, and both lower salaries and lower-quality products will eventually hurt producers as well. In contrast, reputation mechanisms could stop this undesirable downward spiral by rewarding higher-quality products and fairer production.

The question is whether the transparency needed for such reputation systems will ever be reached. In fact, there is currently a trend towards more transparency of money flows. We have recently seen the Swiss banking secrecy melt away. Several times, whistleblowers have sold confidential information about private accounts to public authorities. "Offshore Leaks" has made international money flows more transparent as well. Furthermore, there seems to be a "follow the money" program that tracks individual money transactions. And presently, many countries are setting up agreements for an automatic information exchange, allowing public authorities to monitor money flows and to check tax declarations.

Anonymous money exchange is under attack for similar reasons as anonymous information exchange: in many cases, it has promoted crime and misery. Nevertheless, anonymity still has important roles to play. Most of us don't want anybody to know what toys someone buys in a sex shop. For such reasons and many better ones, we still need sufficient amounts of cash besides traceable electronic money, even though cash should lose its value quickly enough to make traceable transactions sufficiently attractive.

It should also be remembered that anonymity is one of the most important elements of democracy. The principle of the anonymous vote is needed for independent decision-making, which is a precondition for the "wisdom of crowds" to work. Academic peer review, too, is based on anonymity, to support open criticism without fear of revenge. Organized crime and corruption would also be difficult to fight without protecting the anonymity of witnesses. So, neither full transparency nor full anonymity can work. We need a system that makes it possible to combine and balance both principles. Introducing Qualified Money besides cash is the solution!

Thank you for your interest in this discussion paper, which is intended to stimulate debate.

What you are seeing here is work in progress. My plan was to elaborate and polish this material further before sharing it with anybody else. However, I often feel that it is more important to share my thoughts with the public now than to try to perfect the content first, keeping my analysis and insights to myself in times that require new ideas.

So, please excuse me if this does not look 100% ready. Updates will follow. Your critical thoughts and constructive feedback are very welcome. You can reach me via dhelbing (AT) ethz.ch or @dirkhelbing on Twitter.

I hope these
materials can serve as a stepping stone towards mastering the challenges ahead
of us.

I believe that our
society is heading towards a tipping point, and that this creates the
opportunity for a better future.

Friday, 24 October 2014

This is the fourth in a series of blog posts that form chapters of my forthcoming book Digital Society. Last week's chapter was titled "CRYSTAL BALL AND MAGIC WAND: The Dangerous Promise of Big Data."

In an increasingly complex and interdependent world, we are faced with situations that are barely predictable and quickly changing. And even if we had all the information and means at our disposal, we couldn't hope to compute, let alone engineer, the most efficient or best state of the system: the computational requirements are just too massive. That's why the complexity of such systems undermines the effectiveness of centralized planning and traditional optimization strategies. Such efforts might not only be ineffective but can make things even worse. At best we end up "fighting fires" – struggling to defend ourselves against the most disastrous outcomes.

If we're to have any hope of managing complex systems and keeping them from collapse or crisis, we need a new approach. Whether or not the quote is apocryphal, what Einstein allegedly said holds true here: "We cannot solve our problems with the same kind of thinking that created them." What other options do we have? The answer is perhaps surprising: we have to step back from centralized top-down control, which often ends in failed brute-force attempts to impose a certain behavior. However, as I will now explain, we can find new ways of letting the system work for us.

This means that we should implement principles such as distributed bottom-up control and guided self-organization. What are these principles about and how do they work? Self-organization means that the interactions between the components of the system spontaneously lead to a collective, organized and orderly mode of behavior. That does not, however, guarantee that the state of the system is one we might find desirable, and that's why self-organization may need some "guidance."

Distributed control means that, if we wish to guide the system towards a certain desirable mode of behavior, we must do so by applying the guiding influences in many "local" parts of the system, rather than trying to impose a single global behavior on all the individual components at once. The way to do this is to help the system adapt locally to the desired state wherever it shows signs of deviating. This adaptation involves a careful and judicious choice of local interactions. Guided self-organization thus entails modifying the interactions between the system components where necessary while intervening as little and as gently as possible, relying on the system's capacity for self-organization to attain a desired state.

When and why are these approaches superior to conventional ones? For the sake of illustration, I will start with the example of the brain and then turn to systems such as traffic and supply chains. I will show that one can actually reduce traffic jams based on distributed real-time control, but it takes the right approach. In the remaining chapters of the book, we will explore whether and how these success principles might be extended from technological to economic and even social systems.

The miracle of self-organization

Our bodies represent perfect examples of the virtues of self-organization in generating emergent functions from the interactions of many components. The human brain, in particular, is made up of almost a hundred billion information-processing units, the neurons. Each of these is, on average, connected to about a thousand other neurons, and the resulting network exhibits properties that cannot be understood by looking at single neurons: in our case, not just coordination, impulses and instincts, but also the mysterious phenomenon of consciousness. And yet, even though a brain is much more powerful than today's computers (which are designed in a top-down way rather than self-organized), it consumes less energy than a typical light bulb! This shows how efficient the principle of self-organization can be.

In the previous chapter, we saw that the dynamical behavior of complex systems – how they evolve and change over time – is often dominated by the interactions between the system components. That’s why it is hard to predict the behaviors that will emerge – and why it is so hard to control complex systems. But this property of interaction-based self-organization is also one of the great advantages of complex systems, if we just learn to understand and manage them.

System behavior that emerges by self-organization of a complex system's components isn’t random, nor is it totally unpredictable. It tends to give rise to particular, stable kinds of states, called "attractors," because the system seems to be drawn towards them. For example, the figure below shows six typical traffic states, where each of the depicted congestion patterns is an attractor. In many cases, including freeway traffic, we can understand and predict these attractor states using simplified computer models of the interactions between the components (here: the cars). If the system is slightly perturbed, it will usually tend to return to the same attractor state. To some extent this makes the system resilient to perturbations. Large perturbations, however, will drive the system towards a different attractor: another kind of self-organized, collective behavior, for example, a congested traffic state rather than free traffic flow.

The Physics of Traffic

Contrary to what one might expect, traffic jams are not just vehicle queues that form behind bottlenecks. Traffic scientists were amazed when, beginning in the 1990s, they discovered the large variety and complexity of empirical congestion patterns. The crucial question is whether such patterns are understandable and predictable phenomena, such that we can find new ways to avoid congestion. In fact, there is now a theory that allows one to explain all traffic patterns as composites of elementary congestion patterns (see figure above). This theory can even predict the size of congestion patterns and the delay times they cause – it is now widely regarded as one of the great successes of complexity theory.

I started to develop this theory when I was working at the University of Stuttgart, Germany, together with Martin Treiber and others. We studied a model of freeway traffic in which each vehicle was represented by a computer “agent” – a driver-vehicle unit moving along the road in a particular direction with a preferred speed, which would however slow down whenever this was necessary to avoid a collision. Thus the model attempted to “build” a picture of traffic flow from the bottom up, based on simple interaction rules between the individual agents, the driver-vehicle units. Based on this model, we could run computer simulations to deduce the emergent outcomes in different kinds of traffic situations.
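To give a flavor of how such an agent-based simulation works, here is a minimal sketch of a car-following model in the spirit of the one described above (the acceleration rule is an Intelligent-Driver-Model-style formula; the parameter values and the simple circular-road setup are illustrative assumptions, not our calibrated model):

```python
import math

# Illustrative driver-vehicle parameters (assumed values, not calibrated ones)
V0 = 30.0  # preferred speed (m/s)
T = 1.5    # preferred time gap to the leader (s)
A = 1.0    # maximum acceleration (m/s^2)
B = 2.0    # comfortable deceleration (m/s^2)
S0 = 2.0   # minimum bumper-to-bumper gap (m)
L = 5.0    # vehicle length (m)

def acceleration(v, v_lead, gap):
    """Agent rule: accelerate towards the preferred speed, but slow down
    whenever necessary to avoid a collision with the leading vehicle."""
    s_star = S0 + v * T + v * (v - v_lead) / (2 * math.sqrt(A * B))
    return A * (1 - (v / V0) ** 4 - (s_star / max(gap, 0.1)) ** 2)

def simulate(n_cars=20, road=1000.0, steps=2000, dt=0.1):
    """Cars on a circular road; car 0 starts slower (a small perturbation)."""
    x = [i * road / n_cars for i in range(n_cars)]
    v = [15.0] * n_cars
    v[0] = 10.0
    for _ in range(steps):
        acc = []
        for i in range(n_cars):
            lead = (i + 1) % n_cars
            gap = (x[lead] - x[i]) % road - L
            acc.append(acceleration(v[i], v[lead], gap))
        for i in range(n_cars):
            v[i] = max(0.0, v[i] + acc[i] * dt)
            x[i] = (x[i] + v[i] * dt) % road
    return v

if __name__ == "__main__":
    speeds = simulate()
    print("speed range after the run: %.1f to %.1f m/s" % (min(speeds), max(speeds)))
```

Depending on the vehicle density, the initial perturbation either fades away or grows into a stop-and-go wave, reproducing the metastability described in the text.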

For example, we simulated a multi-lane freeway with a bottleneck created by an on-ramp, where additional vehicles entered the freeway. At low vehicle densities, traffic flow recovered even from large perturbations in the flow such as a massive vehicle platoon. In sharp contrast, at medium densities even the slightest variation in the speed of a vehicle triggered a breakdown of the flow – the "phantom traffic jams" we discussed before. In between, however, there was a range of densities (called "bistable" or "metastable"), where small perturbations faded away, while perturbations larger than a certain size (the "critical amplitude") caused a traffic jam.

Interestingly, when varying the traffic flows on the freeway and the on-ramp in the presence of a small perturbation, we found all the empirical congestion patterns shown above. In essence, most traffic jams were caused by a combination of three elements: a bottleneck, high traffic flows, and a perturbation in the flow. Moreover, the different traffic states could be arranged in a so-called "phase diagram" (see below). The diagram schematically presents the flow conditions under which each type of pattern remains stable, and the boundaries that separate these regimes. Empirical observations nicely support this theoretical classification of possible traffic patterns.

A capacity drop, when traffic flows best!

Can we use this understanding to improve traffic flows? To overcome congestion, we must first recognize that the behavior of traffic flows can be counter-intuitive, as the "faster-is-slower effect" shows (see Information Box 1). Imagine a stretch of freeway joined by an on-ramp, on which the traffic density is relatively high but the flow is smooth and free of jams, and not prone to jam formation triggered by small disturbances. Suppose now we reduce the density of vehicles entering the considered freeway stretch for a short time. You might expect that traffic will flow even better. But it doesn’t. Instead, vehicles accelerate into the area of smaller density – and this behavior can trigger a traffic jam! Just when the entire road capacity is urgently needed, we find a breakdown of capacity, which can last for hours and can increase travel times by a factor of two, five, or ten. A breakdown may even be triggered by the perturbation created by a simple overtaking maneuver of trucks.

It is ironic that the traffic flow becomes unstable when the maximum throughput of vehicles is reached – that is, exactly in the most efficient state of operation from an “economic” point of view. To avoid traffic jams, therefore, we would have to stay sufficiently far away from this “maximally efficient” traffic state. But doesn’t this mean we must restrict ourselves to using the roads at considerably less efficiency than they are theoretically capable of? No, it doesn’t, provided we build on guided self-organization.

Avoiding traffic jams

Traffic engineers have sought ways to improve traffic flows at least since the early days of computers. The classical "telematics" approach to reduce congestion is based on the concept of a traffic control center that collects information from a lot of traffic sensors, then centrally determines the best strategy and implements it in a top-down way – for instance, by introducing variable speed limits on motorways or using traffic lights at junctions. Recently, however, researchers and engineers have started to explore a different approach: decentralized and distributed concepts, relying on bottom-up self-organization. This can be enabled, for example, by car-to-car communication.

In fact, I have been involved in the development of a new traffic assistance system that can reduce congestion. From the faster-is-slower effect, we can learn that, in order to avoid or delay the breakdown of traffic flows and to use the full freeway capacity, it is important to smooth out perturbations of the vehicle flow. With this in mind, we have developed a special kind of adaptive cruise control (ACC) system, in which control is distributed over a certain percentage (e.g. 30%) of ACC-equipped cars, with no need for a traffic control center. The ACC system accelerates and decelerates a car automatically based on real-time data from a radar sensor, which measures the distance to the car in front and the relative velocity. Radar-based ACC systems existed before, but in contrast to conventional ACC systems, ours does not just aim to increase the driver’s comfort by eliminating sudden changes in speed. It also increases the stability and capacity of the traffic flow by taking into account what other nearby vehicles are doing, thereby supporting a favorable self-organization of the entire traffic flow. This is why we call it a traffic assistant system rather than a driver assistant system.

The distributed control approach of the underlying ACC system is inspired by fluid flows, which do not suffer from congestion: when we narrow a garden hose, the water simply flows faster through the bottleneck. To sustain the traffic flow, one can either increase the density or the speed of vehicles, or both. The ACC system we developed with the Volkswagen company imitates the natural interactions and acceleration of driver-vehicle units, but in order to increase the vehicle flow where needed, it slightly reduces the time gap between successive vehicles. Additionally, our special ACC system increases the acceleration of vehicles out of a traffic jam to stabilize the flow.
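Schematically, the driving strategy can be thought of as a small table of parameter adjustments per detected traffic situation. The following sketch is a hypothetical illustration of that idea only; the state names and multipliers are my assumptions here, not the actual implementation:

```python
# Hypothetical sketch of the "traffic assistant" strategy: keep comfortable
# parameters in free traffic, shrink the time gap where extra capacity is
# needed, and accelerate more strongly when pulling out of a jam.

BASE = {"time_gap": 1.5, "max_accel": 1.0}  # comfortable defaults (s, m/s^2)

# Adjustment factors per detected traffic state (illustrative assumptions)
STRATEGY = {
    "free":       {"time_gap": 1.00, "max_accel": 1.0},  # drive normally
    "bottleneck": {"time_gap": 0.85, "max_accel": 1.0},  # pack tighter: more flow
    "congested":  {"time_gap": 1.00, "max_accel": 1.0},  # no point in pushing
    "jam_exit":   {"time_gap": 0.85, "max_accel": 1.5},  # leave the jam faster
}

def driving_parameters(state):
    """Return the effective ACC parameters for the detected traffic state."""
    factors = STRATEGY[state]
    return {
        "time_gap": BASE["time_gap"] * factors["time_gap"],
        "max_accel": BASE["max_accel"] * factors["max_accel"],
    }
```

The point of the design is that each car only needs a local estimate of the traffic state; no central controller assigns these parameters.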

In essence, we modify the driving parameters determining the acceleration and interactions of cars such that the traffic flow is increased and stabilized. The real-time measurement of distances and relative velocities by radar sensors allows the cars to adjust their speeds in a way that is superior to human drivers. This traffic assistant system, which I developed together with Martin Treiber, Arne Kesting, Martin Schönhof, Florian Kranke, and others, was also successfully tested under real traffic conditions.

Cars with collective intelligence

A key issue for the operation of the adaptive cruise control is to identify where it needs to kick in and alter the way a vehicle is being driven. These locations can be figured out by connecting the cars into a communication network. Many new cars contain a lot of sensors that can be used to give them “collective intelligence.” They can perceive their driving state and features of their local environment (i.e. what nearby cars are doing), communicate with neighboring cars (through wireless inter-vehicle communication), make sense of the situation they are in (e.g. assess the surrounding traffic state), take autonomous decisions (e.g. adjust driving parameters such as the speed), and give advice to drivers (e.g. warn of a traffic jam behind the next curve). In a sense, such vehicles also acquire "social" abilities: they can coordinate their movements with those of others.

According to our computer simulations, even if only a small proportion of cars is equipped with such ACC systems, this can have a significant positive effect on the overall traffic situation. In contrast, most driver assistant systems today are still operating in a "selfish" way rather than creating better flow conditions for everyone. Our special, "social" solution approach, seeking to reach systemic benefits through collective effects of local interactions, is a central feature of what I call Socio-Inspired Technologies.

A simulation movie we have created illustrates how effective this approach can be (see http://www.youtube.com/watch?v=xjodYadYlvc). While the ACC system is turned off, the traffic develops the familiar and annoying stop-and-go waves of congestion. When seen from a bird’s-eye view, it becomes evident that the congestion originates from small perturbations triggered by vehicles attempting to enter the freeway via an on-ramp. But once the ACC system is turned on, these stop-and-go waves vanish and traffic flows freely. In other words, modifying the interactions of vehicles based on real-time measurements allows us to produce coordinated and efficient flows in a self-organized way. Why? Because we have changed the interaction rules between cars based on real-time adaptive feedback, handing over responsibility to the autonomously driving system. With the impending advent of “driverless cars” such as those being introduced by Google, it’s clearer than ever that this sort of intervention is no fantasy at all.

Guided self-organization

So we see that self-organization may have favorable results (such as free traffic flows) or undesirable ones (such as congestion), depending on the nature of the interactions between the components of the system. Only a slight modification of these interactions can turn bad outcomes into good ones. Therefore, in complex dynamical systems, "interaction design" – also known as "mechanism design" – is the secret of success.

Self-organization based on modifications of interactions or institutional settings – so-called "guided self-organization" – utilizes the hidden forces acting in complex dynamical systems rather than opposing them. In a sense, the superiority of this approach is based on similar principles to those of Asian Martial Arts, where the forces created by the opponent are turned to one’s own advantage. Let’s have a look at another example: how best to coordinate traffic lights.

Self-organizing traffic lights

Compared to freeway traffic, urban traffic poses additional challenges. Here the roads are connected into complex networks with many junctions, and the problem is mainly how to coordinate the traffic at all these intersections. When I began to study this difficult problem, my goal was to find an approach that would work not only when conditions are ideal but also when they are impaired or complicated, for example because of irregular road networks, accidents or building work. Given the large variability of urban traffic flows over the course of days and seasons, the best approach turned out to be one that adapts flexibly to the prevailing local travel demands, rather than one that is planned or optimized for "typical" (average) traffic flows. Instead of imposing a certain control scheme for switching traffic lights in a top-down way, as traffic control centers do today, I concluded that it is better if the lights respond adaptively to the actual local traffic conditions. In this self-organizing traffic-light control, the actual traffic flows determine, in a bottom-up way and in real time, how the lights switch.

The local control approach was inspired by my previous experience with modeling pedestrian flows. These tend to show oscillating flow directions at bottlenecks, which look as if they were caused by “pedestrian traffic lights”, even though they are not. The oscillations are in fact created by changes in the crowd pressure on both sides of the bottleneck – first the crowd surges through the constriction in one direction, then in the other. This, it turns out, is a relatively efficient way of getting the people through the bottleneck. Could road intersections perhaps be understood as a similar kind of bottleneck, but with more flow directions? And could flows that respond similarly to the local traffic “pressure” perhaps generate efficient self-organized oscillations, which could in turn control the switching sequences of the traffic lights? Just at that time, a student named Stefan Lämmer knocked at my door and asked to write a PhD thesis in my team about this challenging problem. So we started to investigate this.

How to outsmart centralized control

How does self-organizing traffic light control work, and how successful is it? Let’s first look at how it is currently done. Many urban traffic authorities today use a top-down approach coordinated by some control center. Supercomputers try to identify the optimal solution, which is then implemented as if the traffic center were a "benevolent dictator." A typical solution creates "green waves" of synchronized lights. However, in large cities even supercomputers are unable to calculate the optimal solution in real time – it's too hard a computational problem, with just too many variables to track and calculate.

So the traffic-light control schemes, which are applied for certain time periods of the day and week, are usually optimized "offline." This optimization assumes representative (average) traffic flows at a certain day and time, or during events such as soccer matches. In the ideal case, these schemes are then additionally adapted to the actual traffic situation, for example by extending or shortening the green phases. However, at a given intersection the periodicity of the switching scheme (in what order the road sections get a green light) is usually kept the same. Within a particular control scheme, it’s mainly the length of the green times that is altered, while the order of switching just changes from one applied scheme to another.

Unfortunately, the efficiency of even the most sophisticated of these top-down optimization schemes is limited by the fact that the variability of traffic flows is so large that average traffic flows at a particular time and place are not representative of the traffic situation on any particular occasion at that time and place. The variation in the number of cars behind a red light, and in the fraction of vehicles turning right or going straight, is more or less as big as the corresponding average values. This implies that a pre-planned traffic light control scheme isn't optimal at any time.

So let us compare this classical top-down approach carried out by a traffic control center with two alternative ways of controlling traffic lights based on the concept of self-organization (see illustration below). The first, called selfish self-organization, assumes that each intersection separately organizes its switching sequence so as to strictly minimize the travel times of the cars on the road sections approaching it. The second, called other-regarding self-organization, also tries to minimize the travel times of these cars, but aims above all to clear vehicle queues that exceed some critical length. Hence, this strategy also takes into account the implications for neighboring intersections.

How successful are the two self-organizing schemes compared to the centralized one? We’ll assume that at each intersection there are detectors that measure the outflows from its road sections and also the inflows into these road sections coming from the neighboring intersections (see illustration below). The information exchange between neighboring intersections allows short-term predictions of the arrival times of vehicles. The locally self-organizing traffic lights adapt to these predictions in a way that tries to keep vehicles moving and to minimize waiting times.

When the traffic flow is sufficiently far below the intersection capacity, both self-organization schemes produce well-coordinated traffic flows that are much more efficient than top-down control: the resulting queue lengths behind red traffic lights are much shorter (in the figure below, compare the violet dotted and blue solid line with the red dashed line). However, for selfish self-organization, the process of local optimization only generates good results below a certain traffic volume. Long before the maximum capacity utilization of an intersection is reached, the average queue length tends to get out of control, as some road sections with small traffic flows are not served frequently enough. This creates spillover effects – congestion at one junction leaks to its neighbors – and obstructs upstream traffic flows, so that congestion quickly spreads over large parts of the city in a cascade-like manner. The resulting state may be viewed as a congestion-related "tragedy of the commons," as the available intersection capacities are no longer used efficiently. Due to this coordination failure between neighboring intersections, when the traffic volumes are high, today’s centralized traffic control can produce better flows than selfish self-organization, and that's actually the reason why we run traffic centers.

Yet by changing the way in which intersections respond to information about arriving vehicle flows, it becomes possible to outperform top-down optimization attempts over the whole range of traffic volumes that an intersection can handle (see the solid blue line). To achieve this, the rule of waiting time minimization must be combined with a second rule, which specifies that a vehicle queue must be cleared immediately whenever it reaches a critical length (that is, a certain percentage of the road section). This second rule avoids spillover effects that would obstruct neighboring intersections and thereby establishes an "other-regarding” form of self-organization. Notice that at high traffic volumes, both the local travel time minimization (dotted violet line above) and the clearing of long queues (black dash-dotted line) perform badly in isolation, but when combined, they produce a superior way of coordination. One would not expect that two bad strategies in combination might produce the best results!
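The combination of the two rules can be sketched as follows. This is a toy illustration only: the critical queue length and the look-ahead horizon are assumed values, and a real implementation works with predicted waiting times rather than this simple score:

```python
# Toy sketch of "other-regarding" self-organizing traffic lights:
# (1) locally minimize waiting times, but (2) immediately clear any queue
# that exceeds a critical length, so congestion cannot spill over to
# neighboring intersections. Queue lengths are in vehicles (assumed units).

CRITICAL_QUEUE = 10  # critical queue length, an assumed threshold

def next_green(queues, arrival_rates):
    """Pick which approach road gets the next green phase.

    queues: current queue length per approach
    arrival_rates: predicted vehicle arrivals per second per approach,
                   based on flow data from neighboring intersections
    """
    # Rule 2 (other-regarding): clear any over-critical queue first,
    # longest such queue first, so it cannot block upstream junctions.
    over = [i for i, q in enumerate(queues) if q >= CRITICAL_QUEUE]
    if over:
        return max(over, key=lambda i: queues[i])
    # Rule 1 (local optimization): otherwise serve the approach where
    # waiting vehicles (current plus soon-arriving) accumulate fastest.
    horizon = 10.0  # seconds of look-ahead, an assumed value
    return max(range(len(queues)),
               key=lambda i: queues[i] + arrival_rates[i] * horizon)
```

Rule 1 alone is the "selfish" scheme; adding rule 2 is what turns it into the other-regarding scheme that also performs well at high traffic volumes.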

One advantageous feature of the self-organization approach is that it can use gaps that occur in the traffic as opportunities to serve other traffic flows. In that way, the coordination arising between neighboring traffic lights can spread over many intersections in a self-organized way. That’s how other-regarding self-organization can outsmart top-down control trying to optimize the system: it responds more flexibly to actual local needs, thanks to a coordinated real-time response.

So what will the role of traffic control centers be in the future? Will they become obsolete? Probably not. They will still be needed to keep an overview of all urban traffic flows, to ensure information flows between distant parts of the city, and to implement political goals such as limiting the overall flows from the periphery into the city center.

A pilot study

After this promising study, Stefan Lämmer approached the public transport authority in Dresden to collaborate with them on traffic light control. The traffic center was using an adaptive, state-of-the-art control scheme based on "green waves." But although it was the best available on the market, they weren’t happy with it. In particular, they were struggling to manage the traffic around a busy railway station in the city center. There, the problem was that many public transport lines cut through a highly irregular road network, and the overall goal was to prioritize public transport rather than road traffic. However, if trams and buses were to be given a green light whenever they approached an intersection, this would destroy the green waves in the vehicle flows, and the resulting congestion would quickly spread, causing massive disruption over a huge area of the city.

When we applied our other-regarding self-organization scheme of traffic lights to the same kind of empirical inflow data that had been used to calibrate the current control scheme, we found a remarkable result. The waiting times were reduced for all modes of transport: considerably so for public transport and pedestrians, and somewhat also for vehicles. The roads were less congested, trams and buses were prioritized, and travel times became more predictable. In other words, everybody would benefit from the new approach (see figure below) – including the environment. It is just logical that the other-regarding self-organization approach is now being implemented at some traffic intersections in Dresden.

Lessons learned

From this example of traffic light control, we can draw a number of important conclusions. First, in complex systems with strongly variable and largely unpredictable dynamics, bottom-up self-organization can outperform top-down optimization by a central controller – even if that controller is kept informed by comprehensive and reliable data. Second, strictly local optimization may create a high-performing system under some conditions, but it tends to fail when interactions between the system components are strong and the optimization at each location is selfish. Third, an "other-regarding" approach that takes into account the situation of the interaction partners can achieve good coordination between neighbors and superior system performance.

In conclusion, a central controller will fail to manage a complex system because the computational demands needed to find the best solutions are overwhelming. Selfish local optimization, in contrast, ultimately fails because of a breakdown of coordination, when the system is used too much. However, an other-regarding self-organization approach based on local interactions can overcome both problems, producing resource-efficient solutions that are robust against unforeseen disturbances.

In many cities, there has recently been a trend towards replacing signal-controlled intersections with roundabouts, and towards changing urban spaces controlled by many traffic signs and rules in favor of designs that support voluntary, considerate interactions of road users and pedestrians. In other words, the self-organization approach is spreading.

As we will see in the chapters on Human Nature and the Economy 4.0, many of the conclusions we have drawn from traffic flows are relevant for socio-economic systems as well. These are also systems in which agents often have incompatible interests that cannot be satisfied at the same time. Production processes are one example of this.

Self-organizing production

Problems of coordinating flows also appear in man-made systems other than traffic and transportation. About ten years ago, together with Thomas Seidel and others, I began to study how production plants could be operated more efficiently and designed better. In the paper and packaging production plant we studied, we observed bottlenecks that occurred from time to time. When this happened, a jam of products waiting to be processed propagated upstream, while the shortfall in the number of finished products grew downstream (see illustration below). We noticed quite a few analogies with traffic systems. For example, road sections are analogous to storage buffers where partly finished products can accumulate. Product-processing units are like road junctions, different product flows have different origins and destinations (like vehicles), production schedules function like traffic lights, cycle times are analogous to travel and delay times, full “buffer” sections suffer from congestion, and machine breakdowns are like accidents. However, modeling production is even more complicated than modeling traffic, as there are many different kinds of material flows.

Drawing on our experience with traffic models, we devised an agent-based model for these production flows. We focused again on how local interactions can govern and potentially assist the flow. We imagined equipping all machines and all products with a small "RFID" computer chip having memory and wireless short-range communication ability – a technology already widely implemented in other contexts, such as the tagging of consumer goods. This would enable a product to communicate with other products and machines in the neighborhood (see figure below). For example, a product could signal that it was delayed and needed prioritized processing, requiring a kind of overtaking maneuver. Products could also select between alternative routes, and tell the machines what had to be done with them. They could cluster together with similar products to ensure efficient processing.
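The idea of products that carry their own instructions and ask for priority can be sketched like this (the class and field names, and the simple urgency rule, are illustrative assumptions, not our actual implementation):

```python
# Sketch of the RFID-tagged "smart product" idea: each product carries its
# own processing instructions and state, and a machine decides locally which
# waiting product to process next.

from dataclasses import dataclass

@dataclass
class Product:
    product_id: str
    remaining_steps: list  # processing steps still to be done
    due_in: float          # time until the delivery deadline (s)

    def is_delayed(self, time_per_step=60.0):
        # The product itself signals whether it needs priority treatment
        # (assumed rule: remaining work no longer fits before the deadline).
        return len(self.remaining_steps) * time_per_step > self.due_in

def pick_next(buffer):
    """Machine-side rule: serve delayed products first (an 'overtaking'
    maneuver in the buffer), then the most urgent of the rest."""
    delayed = [p for p in buffer if p.is_delayed()]
    candidates = delayed if delayed else buffer
    return min(candidates, key=lambda p: p.due_in)
```

All the coordination logic sits in the local interaction between product and machine; no central production schedule is consulted.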

In the past, designing a good factory layout in a top-down way has been a complicated, time-consuming and expensive procedure. Bottom-up self-organization is again a superior approach. The agent-based approach described above, building on local interactions, has a phenomenal advantage: it makes it easy to test different factory layouts without having to specify all the details of the fabrication plant. One just has to put the different elements of a factory together (such as machines and transportation units). The possible interactions are then specified automatically. The machines know immediately what to do with the products, because those products already carry the necessary instructions with them. Here too, the local exchange of information between agents creates a collective, social intelligence. Under these favorable circumstances, it becomes easy to create and test many different factory layouts and to find out which are more efficient and more resilient to perturbations.

In the future, one may even go a step further. If we consider that recessions are like traffic jams in the world economy, where capital or product flows are obstructed or delayed, couldn't real-time information about the world's supply networks be used to reduce economic disruptions? I actually think so. If I had access to the data of the world-wide supply chains, I would be delighted to build an assistant system for global supplies that reduces both overproduction and situations where resources are lacking.

Making the Invisible Hand work

We, therefore, see that vehicles and products can successfully self-organize if a number of conditions are fulfilled. First, the interacting system components are provided with real-time information. Second, there is prompt feedback – that is to say, appropriate rules of interaction – which ensures that this information elicits a suitable, adaptive response. (In later chapters, I will discuss in detail how such information can be gathered and how such interaction rules are determined.)

So, would a self-organizing society be possible? In fact, for hundreds of years, people have been inspired by the self-organization and social order in colonies of social insects such as ants, bees, or termites. For example, Bernard Mandeville’s The Fable of the Bees (1714) argues that actions driven by private, even selfish motivations can create public benefits. A bee hive is an astonishingly differentiated, complex and well-coordinated social system, even though there is no hierarchical chain of command. No bee orchestrates the actions of the other bees. The queen bee simply lays eggs, and all other bees perform their respective roles without being told so. Adam Smith's "Invisible Hand" expresses a similar idea, namely that the actions of people, even if driven by the 'selfish' impulse of personal gain, would be invisibly coordinated in a way that automatically improves the state of the economy and the society. One might say that, behind this, there is often a belief in something like a divine order.

However, the recent global financial and economic crisis has cast doubt on the idea that complex systems always produce the best possible outcomes by themselves. Phenomena such as traffic jams and crowd disasters suggest as well that a laissez-faire approach that naively trusts in the "Invisible Hand" often fails. The same applies to failures of cooperation, which may result in the over-utilization of resources, as discussed in the next chapter.

Nevertheless, whether the self-organization of a complex dynamical system ends in success or failure mainly depends on the interaction rules and institutional settings. I therefore claim that, three hundred years after the principle of the Invisible Hand was postulated, we can finally make it work – based on real-time information and adaptive feedbacks that ensure the desired functionality. While the Internet of Things can provide us with the necessary real-time data, complexity science can tell us how to choose the interaction rules and institutional settings such that the system self-organizes towards a desirable outcome.

Information technologies to assist social systems

Above, I have shown that self-organizing traffic lights can outperform the optimization attempts of a traffic control center. Furthermore, "mechanism design," which modifies local vehicle interactions by suitable driver assistant systems, can turn self-organization into a principle that helps to reduce rather than produce congestion. But these are technological systems.

Could we also design an assistance system for social behavior? In fact, we can! Sometimes, social mechanism design can be pretty challenging, but sometimes it's easy. Just imagine the task of sharing a cake in a fair way. If social norms allow the person who cuts the cake to take the first piece, this piece will often be bigger than the others. If he or she takes the last piece instead, the cake will probably be distributed much more fairly. Alternative sets of rules intended to serve the same goal (such as cake cutting) may therefore result in completely different outcomes.
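The incentive effect of the cake-cutting rule can be made concrete with a toy calculation (the candidate cuts are arbitrary illustrative numbers):

```python
# Toy illustration of the cake-cutting mechanism: a self-interested cutter
# chooses among a few candidate cuts, knowing the picking order. The point
# is only that the rules of the game change the incentives.

def cutter_share(cutter_picks_last):
    """Return the share a self-interested cutter ends up with."""
    candidate_cuts = [0.9, 0.7, 0.5]  # size of the larger piece
    best = 0.0
    for big in candidate_cuts:
        small = 1.0 - big
        # If the cutter picks last, the other person takes the big piece.
        share = small if cutter_picks_last else big
        best = max(best, share)
    return best

# Cutter picks first: cut 0.9/0.1 and take 0.9.
# Cutter picks last: the best the cutter can do is cut evenly and get 0.5.
```

The same selfish motivation produces an unfair or a fair outcome, depending only on the picking order – mechanism design in miniature.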

As Information Box 2 illustrates, it is not always easy to be fair. But details of the "institutional setting" – the specific "rules of the game" – can matter a lot. With the right set of interaction rules, we can, in fact, create a better world. The next chapter discusses how the respective social mechanisms, which are part of our culture, can make a difference, and how one can build an assistant system to support cooperation in situations where it would otherwise be unlikely. Information and communication technologies are now offering entirely new opportunities!

INFORMATION BOX 1: Faster-is-slower effect

Let me illustrate with an example how counter-intuitive the behavior of traffic flows can be (see picture above). When the traffic flow is sufficiently high, but still stable, a temporary reduction in the vehicle density (which locally allows drivers to move at a faster speed) can, surprisingly, cause a traffic jam. How does this "slower-is-faster effect" come about? First, the temporary perturbation of the vehicle density changes its shape while traveling along the freeway. It eventually turns into a forward-moving vehicle platoon, which grows over time. The perturbation thus propagates downstream and eventually passes the location of the on-ramp. As the vehicle platoon is still moving forward, one would expect the perturbation to eventually leave the freeway stretch under consideration. But at a certain point in time, the platoon has grown so big that it suddenly changes its propagation direction, i.e. it starts to travel upstream rather than downstream. This is called the "boomerang effect." It occurs because vehicles in the cluster are temporarily stopped once the platoon has reached a certain size: at the front of the cluster, vehicles move out of the traffic jam, while new vehicles join it at its end. Altogether, this makes the traffic jam travel backwards, such that it eventually reaches the location of the on-ramp. When this happens, the inflow of cars via the on-ramp is perturbed so much that the upstream traffic flow breaks down. The result is a long vehicle queue, which continues to grow upstream. Therefore, even when the road could theoretically handle the overall traffic volume, a perturbation in the traffic flow can cause a drop in the freeway capacity, which results from the interactions between cars.
The effective capacity of the freeway is then given by the outflow from the traffic jam, which is about 30 percent below the maximum traffic flow on the freeway!
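The emergence of a jam out of a small local perturbation can be reproduced even with a minimal traffic model. The sketch below is not the macroscopic on-ramp scenario described above, but a simple illustration using the well-known Nagel-Schreckenberg cellular automaton on a circular road; all parameter values (road length, density, slowdown probability) are illustrative assumptions:

```python
import random

ROAD = 200   # cells on a circular single-lane road
VMAX = 5     # maximum speed (cells per time step)

def step(cars, p_slow, rng):
    """One parallel update of the Nagel-Schreckenberg model.
    `cars` maps cell index -> current speed on a ring of ROAD cells."""
    cells = sorted(cars)
    updated = {}
    for i, x in enumerate(cells):
        gap = (cells[(i + 1) % len(cells)] - x - 1) % ROAD  # free cells ahead
        v = min(cars[x] + 1, VMAX)   # 1. accelerate towards the speed limit
        v = min(v, gap)              # 2. brake to keep a safe distance
        if v > 0 and rng.random() < p_slow:
            v -= 1                   # 3. random slowdown (driver imperfection)
        updated[(x + v) % ROAD] = v  # 4. move
    return updated

def jam_size(cars):
    """Number of vehicles standing completely still."""
    return sum(1 for v in cars.values() if v == 0)

rng = random.Random(42)
cars = {x: VMAX for x in range(0, ROAD, 5)}  # 40 evenly spaced cars at full speed
cars[0] = 0                                  # one briefly stopped car: the perturbation
for _ in range(200):
    cars = step(cars, p_slow=0.3, rng=rng)
print(jam_size(cars), "of", len(cars), "cars are standing still")
```

At this density, the initial disturbance typically does not heal but grows into a stop-and-go wave that drifts upstream relative to the driving direction, which is the discrete analogue of the boomerang effect described above.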

INFORMATION BOX 2: Fair supply in times of crises

In case of a shortage of resources that are required to satisfy our basic needs (such as food, water, and energy), it might be particularly important to share them in a fair way. Otherwise, violent conflicts over scarce resources might break out. But being fair is not always easy, and it requires suitable preparations. Together with Rui Carvalho, Lubos Buzna, and others, I have investigated this for cases such as gas supply through pipelines. There, one may visualize the percentages of pipeline use for different destinations as a pie chart. It turns out that we must then cut several cakes (or pies) at the same time: given the multiple constraints imposed by pipeline capacities, it is usually impossible to meet all goals and constraints simultaneously. Therefore, one will often have to make compromises. Paradoxically, if less gas is transported overall, due to non-deliveries from one source region, fair sharing requires a re-routing of gas from other source regions. This will often lead to pipeline congestion, since the pipeline network was built for different origin-destination relationships. Nevertheless, an algorithm inspired by the Internet routing protocol can maximize fairness.
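The fairness algorithm itself is not spelled out in the box. A standard construction with the same goal – max-min fairness, as used for bandwidth sharing on the Internet – is "progressive filling": raise all flow rates in lockstep, and whenever a link saturates, freeze every flow crossing it. The sketch below is a minimal illustration with made-up link and flow names, not the actual pipeline algorithm developed with Carvalho and Buzna:

```python
def max_min_fair(flows, capacity, eps=1e-9):
    """Max-min fair rate allocation by progressive filling.

    flows:    {flow_id: set of links the flow uses}
    capacity: {link: capacity}
    Returns  {flow_id: allocated rate}.
    """
    rate = {f: 0.0 for f in flows}
    cap = dict(capacity)
    active = set(flows)
    while active:
        # how many active flows load each link
        load = {l: sum(1 for f in active if l in flows[f]) for l in cap}
        used = [l for l in cap if load[l] > 0]
        if not used:                 # no constraining link left
            break
        # largest equal increment before the tightest link saturates
        inc = min(cap[l] / load[l] for l in used)
        for f in active:
            rate[f] += inc
        for l in used:
            cap[l] -= inc * load[l]
        saturated = {l for l in used if cap[l] <= eps}
        # freeze flows that cross a saturated (bottleneck) link
        active = {f for f in active if not saturated & flows[f]}
    return rate

# Illustrative network: link L1 carries flows a and b, link L2 carries b and c.
rates = max_min_fair(
    flows={"a": {"L1"}, "b": {"L1", "L2"}, "c": {"L2"}},
    capacity={"L1": 1.0, "L2": 4.0},
)
print(rates)  # {'a': 0.5, 'b': 0.5, 'c': 3.5}
```

Flows a and b share the bottleneck L1 equally, while c receives the remaining capacity of L2; no flow can be increased without decreasing another flow with an equal or smaller rate, which is exactly the max-min fairness criterion.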

Thank you for your interest in this chapter, which is intended to stimulate debate.

What you are seeing here is work in progress, a chapter of a book on the emerging Digital Society that I am currently writing. My plan was to elaborate and polish this material further before sharing it with anybody else. However, I often feel that it is more important to share my thoughts with the public now than to perfect the book first, keeping my analysis and insights to myself in times that require new ideas.

So, please forgive me if this does not look 100% ready yet. Updates will follow. Your critical thoughts and constructive feedback are very welcome. You can reach me via dhelbing (AT) ethz.ch or @dirkhelbing on Twitter.

I hope these materials can serve as a stepping stone towards mastering the challenges ahead of us and towards developing an open and participatory information infrastructure for the Digital Society of the 21st century, one that enables everyone to make better-informed decisions and take more effective actions.

I believe that our society is heading towards a tipping point, and that this creates the opportunity for a better future.


The activities leading to these results have received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 284709 – project 'FuturICT', a Coordination and Support Action in the Information and Communication Technologies activity area.