
Tuesday, 25 August 2015

by Dirk Helbing

Moore's law, describing the exponential growth of processing power and data production, is currently driving a fundamental transformation of our economy and society. While processing power doubles every 18 months, data volumes double every 12 months, which means that we literally produce as much data in one year as in the entire history of humankind before (i.e. in all previous years combined). However, this is not the end of the digital revolution. More and more "things" are now equipped with communicating sensors - fridges, coffee machines, toothbrushes, smartphones and smart devices. Within ten years, this will connect 150 billion "things" with each other - and with 10 billion people. This creates the "Internet of Everything" and data volumes that double every 12 hours rather than every 12 months. How will this impact our society?

First of all, we will have an
abundance of data about our world. Data will be cheap, and Big Data analytics can
reach entirely new levels.[1] Can we soon know
everything? Can we build a Crystal Ball depicting and perhaps even predicting
the course of events?[2] Can we build
superintelligent systems to run the world in a better way, based on cybernetic control
principles?[3]
Would humans be steered by information?[4] It seems that such technologies
may now be built. For example, Baidu has started to work on a China Brain project, which will learn to predict people's behavior based on their Internet searches.[5] China has further initiated a project that rates the behavior of its citizens.[6] This will make loans and jobs dependent on personal scores, which also depend on the links clicked on the Web - and on political opinions. Is Orwell's Big Brother coming? Or is this the technology we need? Can the state act like a "wise king"? Or is a state that determines how its citizens should be happy a despot, as Immanuel Kant concluded?[7]

In fact, there is no scientific
method to determine the 'goal function of society' that ought to be maximized: should
it be GDP per capita, sustainability, average life span, peace, or happiness? This
is not clear and, furthermore, people are not like ants. The concept of omni-benevolence can't work, because people pursue different goals and have different conceptions of the good life. On the one hand, their pluralism results from social
specialization, economic differentiation and cultural development. On the other
hand, such pluralism hedges the risks to society and increases its ability to
master unexpected disruptions. Consequently, as the complexity of a society
increases, pluralism needs to increase as well.

The concepts of top-down
optimization and control are limited by a number of factors: (1) Data volume grows faster than processing power, so a growing share of data will never be processed. This creates a "flashlight effect": we may see anything we want, but we need to know what to pay attention to. However, some systems are irreducibly complex, so every little detail can matter.[8] (2) Due to limited
communication bandwidth, an even smaller fraction of data can be processed
centrally, such that a lot of local information, which is needed to produce
good solutions, is ignored by a centralized optimization attempt. (3) Systemic
complexity can prevent real-time optimization, such that decentralized control
approaches may perform better. This has been shown for self-organized traffic
lights, which are flexibly and efficiently controlled by local traffic flows,
while traffic control centers often fail to control traffic flows well.[9] (4) Further problems may
be caused by overfitting, spurious correlations, meaningless patterns, noise
and related classification errors - problems which are quite common in Big Data
analytics. Another concern is that powerful information systems are attractive
to organized criminals, terrorists and extremists, so they would sooner or
later be corrupted or hacked.
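To put point (1) in numbers: if data volumes double every 12 months while processing power doubles only every 18 months, the fraction of data that can be processed halves every three years. A minimal Python sketch of this back-of-the-envelope arithmetic (only the two doubling times from the text go in; the rest is simple arithmetic):

```python
# Point (1) in numbers: data double every 12 months, processing power
# every 18 months, so the processable fraction halves every 3 years.
for years in range(0, 13, 3):
    data = 2 ** years               # relative data volume
    compute = 2 ** (years / 1.5)    # relative processing power
    print(f"{years:2d} years: processable fraction = {compute / data:.3f}")
```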

To unleash the value of Big Data, it
often takes theoretical models to look at the data in a useful way, as is done in experiments at CERN's particle accelerator (which keeps just 0.1 percent of all measurement data - the data that are actually needed to test a particular theoretical prediction). A similar finding is made when trying to predict epidemic spread: a model-based analysis with little data is more powerful than brute-force Big Data analytics such as Google Flu Trends.[10] Therefore, Michael Macy
recently concluded: "Big Data is the beginning of theory, not the
end", and most experts agree. This is in sharp contrast to Chris Anderson's
earlier claim that "The data deluge makes the scientific method
obsolete."[11]

Some might say that Singapore, which
considers itself a "social laboratory",[12] is a good example of a country that has greatly benefited from data-driven decision-making. Western democracies envy the country for its quick development and economic growth rate, but we must also consider that Singapore has been a tax haven, and it largely profits from imported innovations originating in predominantly Western democracies. Moreover, the political party in power has steadily lost votes over the past years in spite of all its successes. This is puzzling, and we should therefore listen to Geoffrey West, the former president of the Santa Fe Institute, who has studied cities extensively. He points out that Singapore is run like a company. However, 40-50 percent of the Top 500 companies disappear within just 10 years, while cities persist for hundreds of years, thanks to their usually more inclusive governance. The reason is that even powerful decision-makers make mistakes, and in a centrally run system, when this happens, the mistakes tend to be big.

Where do we stand today? Big Data
analytics is far from being able to understand the complexity of human
behavior, but it is advanced enough to manipulate our decisions by
individualized information such as personalized ads or nudging. Such approaches use the few thousand metadata points that have been collected about each one of us. However, manipulating our decisions doesn't seem to be a good idea, because it undermines the "wisdom of crowds" - an effect on which the functionality of democracies and financial markets is based.[13] Moreover, manipulating our decisions is likely to narrow down the variance of our choices, i.e. socio-economic diversity. On the one hand, this can foster political and societal polarization (or fragmentation).[14] On the other hand, diversity is key for innovation, economic development, societal resilience, and collective intelligence.[15] Losing socio-economic diversity is as bad as losing biodiversity: it can cause systemic malfunction or collapse.[16],[17]

Moreover, given that about 50
percent of today's jobs in the industrial and service sectors will be lost in
the next 10-20 years, our societies are under pressure to come up with many new
jobs in the emerging digital sector (or at least with sufficient income and meaningful activities to give our lives meaning).[18]

All of this calls for a
fundamentally different strategy and an entirely new approach, particularly as
we are faced with an increasing number of existential problems: an economic and
public spending crisis, financial and political instability, increasing dangers
of large-scale international conflicts or cyber wars, climate change with a
mass extinction of species, and growing antibiotic resistance, to mention just
a few of our global threats. We need to have more innovation capacity, and this
means we need to unleash the creativity of people. Diversity can help trigger
innovation, while information platforms and digital assistants can support
coordination in a diverse and culturally rich world. A participatory approach,
which allows everyone to contribute with his/her skills, ideas, and resources
(as in citizen science, for example) can mobilize the full socio-economic potential
and capacity of society. If many people are unemployed, have to do jobs that don't fit their skills, or are excluded from socio-economic engagement, the competitiveness and well-being of a country are significantly reduced.

To unleash the good side of the
digital revolution and new opportunities for everyone, we must provide useful
and trustworthy information to everyone. In the same way as we have built
public roads to promote the industrial age and public schools to fuel the
service society, we need powerful public information systems and digital
literacy to promote the digital era to come. Therefore, I propose to build a
Planetary Nervous System that creates possibilities for pluralistic data use
and opportunities for everyone to contribute to society and pursue flourishing
lives.[19] The Planetary Nervous
System would use the sensor networks behind the Internet of Things and potentially
also the sensors in our smartphones (currently about 15) to measure the world
around us and build a data commons together. The critical question is how this can be done in a way that respects our privacy and keeps misuse small compared to the benefits the system would create. It is time to learn how to do this.

The Nervousnet project[20] has started to work on
this. It aims to create an open and participatory information platform such as
Wikipedia or OpenStreetMap, but for real-time data. For the sake of security, scalability and fault tolerance, Nervousnet is based on distributed data and control. It will be run as a Citizen Web, i.e. built and managed by the users. This gives us maximum control over the data traces we produce. Each sensor can be turned on or off separately. External sensors (e.g. for smart home applications) can be
added. Users can also decide what data to share and how frequently to record
them. The shared data are anonymized, and they are deleted after a short period
of time.
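To illustrate what such per-sensor controls could look like, here is a purely hypothetical sketch in Python; the class, field names and defaults are my own assumptions, not Nervousnet's actual API:

```python
from dataclasses import dataclass

@dataclass
class SensorPolicy:
    # Hypothetical user-facing controls mirroring the text above; the real
    # Nervousnet interface may be structured entirely differently.
    sensor: str              # e.g. "accelerometer", or an external smart-home sensor
    enabled: bool = False    # each sensor can be turned on or off separately
    shared: bool = False     # users decide what data to share...
    interval_s: int = 60     # ...and how frequently to record them
    anonymize: bool = True   # shared data are anonymized...
    retention_h: int = 24    # ...and deleted after a short period of time

policies = [
    SensorPolicy("accelerometer", enabled=True, shared=True, interval_s=10),
    SensorPolicy("microphone"),  # off by default: nothing is recorded or shared
]
```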

Nervousnet invites everyone to
contribute to the creation of this powerful, but distributed and trustworthy
information platform for the age of the "Internet of Everything".[21] It is an open platform that will allow developers to add their own measurement procedures and apps on top. These can be scientific applications, games, or business applications. This will allow everyone to provide data-driven services or products and establish their own companies. In other words, Nervousnet could one day become a global catalyst for an information, innovation and production ecosystem that produces new jobs and societal benefits. There is still a lot to be done, though. We are currently working on end-to-end data encryption. We need to add multi-dimensional reputation, incentive and payment systems. We also plan to add a personal data store, as proposed by Sandy Pentland and others.[22]

Looking ahead, Nervousnet will allow everyone to make better-informed decisions. It will offer five main
functionalities. First, it will configure the sensor network to answer specific
questions based on real-time measurements. For example, it will allow us to
quantify the externalities of the interactions around us, which will make it
possible to improve economic systems. Second, these measurements will be able
to reveal the hidden forces underlying socio-economic change and other
important intangible factors such as reputation and trust. This will fuel a
better understanding of our complex, interdependent world, as it is now studied
by Global Systems Science.[23] Third, the Planetary
Nervous System will create awareness about the problems and opportunities
around us. Fourth, it will enable self-organizing systems through real-time feedback, such as self-organized traffic light controls, Industry 4.0-style production systems, or new solutions to socio-economic problems based on locally applied interaction mechanisms. So, 300 years after the invention of the invisible hand, we can finally make it work for us, by combining real-time measurements with suitable feedback, as advised by complexity science and enabled by multi-dimensional incentive and exchange systems. Finally, Nervousnet will allow us to build digital assistants supporting collective intelligence. This
is needed to master the combinatorial complexity of our increasingly interdependent
world. So, an entirely new age with amazing new possibilities is ahead of us,
fueled by information.

It
is now within reach to build an information system that finally brings
everything together: science, politics, business, and society. We can create
self-organizing and self-improving systems with massively increased efficiency.
The approach I propose is based on participation and compatible with democratic
principles. It respects the autonomy of decision-making and supports free
entrepreneurship, while considering externalities. Therefore, I also expect benefits
for our environment and society. In particular, the information age may allow
us to reduce the level of conflict, because information is an unlimited
resource that offers endless creative possibilities. The digital economy is anything but a zero-sum game. Information can be reproduced as often as we like. To get more of it, we don't have to take it away from others. Furthermore,
considering that money is just a coordination mechanism to organize the
distribution of scarce resources, we can now build a better, multi-dimensional
money and incentive system that rewards digital co-creation. So, what are we
waiting for? Let's build the digital society together![24]

[4] A.D.I. Kramer, J.E. Guillory, and J.T. Hancock (2014) Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences USA (PNAS) 111(24), 8788-8790. This experiment was highly controversial; see http://www.wsj.com/articles/furor-erupts-over-facebook-experiment-on-users-1404085840

[13] J. Lorenz, H. Rauhut, F. Schweitzer, and D. Helbing (2011) How social influence can undermine the wisdom of crowd effect. Proceedings of the National Academy of Sciences USA (PNAS) 108(28), 9020-9025

[14] C. Andris et al. (2015) The Rise of Partisanship and Super-Cooperators in the U.S. House of Representatives. PLoS ONE 10(4): e0123507, see http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0123507

[15] S.E. Page (2008) The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies (Princeton University Press)

Thursday, 13 August 2015

When I started the publication project with Herbert Gintis on the "homo socialis" (Gintis and Helbing 2014), the most important motive for me was to trigger a scientific debate. So, from my perspective, our joint paper on the "homo socialis" is not to be seen as an end point or eternal truth, but as the starting point of a new theory for socio-economic systems. In this comment, I will expand on my paper with Herbert Gintis, and I will use the opportunity to present some further thoughts and materials.

Evolution of the "homo socialis"

Since my PhD days, I have wondered how it was possible that psychology, sociology and economics were all claiming to model the decision-making of people, while at the same time using (at least partly) different sets of models and assumptions. So, overcoming the divide between economics and the other social sciences seemed necessary (Eckel and Sell 2014), and this has been part of my research agenda ever since. My collaboration project with Herbert Gintis was born out of a project with Thomas Grund and Christian Waloszek that was part of this agenda, where we put the "homo economicus" to the test. Thomas, Christian and I simulated the evolutionary dynamics that is sometimes claimed to be the reason for the existence of the "homo economicus." Our computer simulation model distinguished utilities from payoffs and made four assumptions, none of which directly implied other-regarding behavior (Grund, Waloszek and Helbing 2013; Helbing 2013a):

Agents decide according to a best-response rule that strictly maximizes their utility function, given the behaviors of their interaction partners (their neighbors).

The utility function considers not only the agent's own payoff, but also gives a certain weight to the payoff of the interaction partner(s). This weight is called "friendliness" and is set to zero for everyone at the beginning of the simulation.

Friendliness is a trait that is passed on (either genetically or through education) to offspring. The likelihood of having offspring increases exclusively with the agent's own payoff, not with its utility. The payoff is assumed to be zero when a friendly agent is exploited by all neighbors (i.e. if they all defect). Therefore, such agents will have no offspring.

The inherited friendliness value tends to be that of the parent. There is also a certain mutation rate, but it does not promote significant levels of friendliness.
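To make the setup concrete, here is a minimal, illustrative re-implementation of these four assumptions in Python. The grid size, prisoner's dilemma payoffs, mutation strength, and the local death-and-reproduction update are my own simplifying assumptions, not the exact specification of Grund et al. (2013):

```python
import numpy as np

L = 20                                # agents on an L x L grid (assumed size)
R, S, T, P = 1.0, -0.5, 1.5, 0.0      # prisoner's dilemma payoffs (assumed values)
rng = np.random.default_rng(0)

friendliness = np.zeros((L, L))       # assumption 2: weight on others' payoff, initially zero
action = rng.integers(0, 2, (L, L))   # 1 = cooperate, 0 = defect

def neighbors(i, j):
    return [((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L)]

def payoff(a, b):
    # row player's payoff when playing a against b
    return {(1, 1): R, (1, 0): S, (0, 1): T, (0, 0): P}[(a, b)]

def best_response(i, j):
    # assumption 1: strictly maximize utility, given the neighbors' behaviors
    rho = friendliness[i, j]
    utility = {a: sum((1 - rho) * payoff(a, action[k, l]) + rho * payoff(action[k, l], a)
                      for (k, l) in neighbors(i, j))
               for a in (0, 1)}
    return max(utility, key=utility.get)

for generation in range(2000):
    for _ in range(20):               # decision dynamics: myopic best responses
        i, j = rng.integers(0, L, 2)
        action[i, j] = best_response(i, j)
    # assumption 3: reproduction depends on the PAYOFF, not the utility;
    # a random agent dies and a neighbor reproduces into the empty site,
    # so offspring grow up next to their parents
    i, j = rng.integers(0, L, 2)
    nbrs = neighbors(i, j)
    fitness = np.array([max(sum(payoff(action[k, l], action[m, n])
                                for (m, n) in neighbors(k, l)), 0.0) + 1e-9
                        for (k, l) in nbrs])
    k, l = nbrs[rng.choice(len(nbrs), p=fitness / fitness.sum())]
    # assumption 4: offspring inherit the parent's friendliness, with small mutations
    friendliness[i, j] = np.clip(friendliness[k, l] + rng.normal(0.0, 0.05), 0.0, 1.0)
    action[i, j] = action[k, l]

print("mean friendliness after evolution:", round(float(friendliness.mean()), 3))
```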

What did our computer simulations of the biological evolution of utility-maximizing agents tell us? For many parameter combinations, the outcome was indeed a "homo economicus," as most economists would expect. Surprisingly, however, there was also an area of the parameter space where a "homo socialis" with other-regarding preferences emerged, namely when offspring grew up next to their parents (see Figure 1). Given that most humans actually do raise their children at home, this is quite intriguing. It is also interesting that, while other-regarding preferences are disadvantageous at the beginning of our agent-based computer simulations, they achieve higher payoffs after several dozen generations.

Figure 1: Outcome of an evolutionary simulation of human preferences (from Grund et al. 2013). When offspring are raised close to their parents, we find not only other-regarding behavior (cooperation), but also the emergence of a "homo socialis" with other-regarding preferences. This provides a theory explaining experimental findings on fairness preferences, conditionally cooperative behavior, and individual utility functions (Fischbacher, Gächter, and Fehr 2001). The results of the computer simulation further prove that the consideration of "externalities" (i.e., of external effects of decisions and actions) can yield a better system performance and benefit everyone, which hints towards superior organization principles for economies, as they now become possible through the Internet of Things with emergent sensor networks that will make it possible to measure externalities of all kinds (see http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2583391 and http://futurict.blogspot.ch/2014/09/creating-making-planetary-nervous.html).

Remarkably, with the "homo socialis," there exists a second reference point besides the "homo economicus" that an analytical economic theory could be built around. So, is it possible that economic theory was developed around the wrong reference point? Indeed, many human interactions are indicative of the "homo socialis" rather than the "homo economicus." Moreover, could it be that the difference between the behavior of the "homo economicus" and the "homo socialis" is so big that it can no longer be treated as a small deviation from the "homo economicus" – an approximation error that averages out over sufficiently many decisions? If so, the average behavior would effectively deviate from mainstream economic theory, and we would need a new economic theory, and new economic institutions as well (see, for example, Helbing 2013a – in particular the discussion of the social preference literature relating to economic laboratory experiments).

In fact, in an ever more networked world, where consumers interact and buy products through social media, the concept of separate decision-making is decreasingly plausible. If decisions become more interdependent than they used to be, theory must increasingly account for the implications of "networked minds," and representative agent theory will have to be replaced by theories of complex dynamical systems (Helbing and Kirman 2013). As Gallegati (2014) points out: "sociality implies interaction, which produces externalities." And as Lewis (2014) underlines, Hayek noticed early on that economic systems should be studied as complex systems. This would have to include explanations of emergent collective phenomena and novel system properties resulting from individual-level interactions, as mentioned by Lewis (2014) and Nowak et al. (2014). The question is: are these just gradual improvements, or will the implications of complex social interdependencies be as exciting as the discovery of quantum mechanics or the theory of relativity? Is economic theory perhaps at a turning point?

One might, of course, argue that rational choice theory was adapted long ago to account for individual preferences. This is reflected by individual utility functions. However, it is too simple to say that economics is the study of choice under constraints with given preferences, and to leave it to sociologists to explain the individual preferences.
As some of the comments on our paper have rightly pointed out (Hechter 2014; Hodgson 2014; Isaac 2014), without a theory of how to determine these individual preferences, ideally in advance, rational choice theory is pretty incomplete and of limited use. But I believe that a theory describing how individual preferences and utility functions come about can actually be formulated. In fact, in the study of Grund et al. (2013), individual utility functions are an outcome of an evolutionary process, and they are a result of interactions in the past.

Having said this, let me respond to some of the comments on the paper by Herbert Gintis and myself, as I lay out further ideas on the evolution of human decision-making and its – as I believe – rather interesting implications. The replies to our paper contain many thoughtful comments, and I agree with many of them. They have highlighted different aspects that certainly deserve attention in the further debate about a core analytical theory for the social sciences. The great majority of these points actually played important roles in my email exchange with Herbert Gintis while we worked on our common paper. Not all of these points made it into our paper, but this gives me an excellent opportunity to present them here.

Limitations of equilibrium theory

The question to what extent economic systems can be assumed to be in equilibrium has been at the center of scientific debates (Ormerod 2014; Witt 2014). I have been questioning equilibrium approaches myself (Helbing and Balietti 2010). In fact, they may not always be suitable to describe (decisions and learning in) quickly changing environments. Therefore, my paper with Herbert Gintis certainly does not want to imply that equilibrium can always be assumed. It merely says that the analysis of stationary points can be insightful, and that the classical equilibrium concept can be extended in ways that consider social aspects.

Generally, a system of equations of the kind Fk(x1,...,xi,...,xn) = 0 with a solution (y1,...,yi,...,yn) may just reflect the stationary state of a dynamical set of equations dxk/dt = Fk(x1,...,xi,...,xn). In such a case, it makes sense to determine the eigenvalues of the Jacobian matrix with the elements ∂Fk/∂xi at the stationary point (y1,...,yi,...,yn). If all of these eigenvalues are negative (or have negative real parts), deviations from the stationary point, as they may be caused by perturbations of the system, would tend to decrease over time. Consequently, the system would be driven towards the stationary point – at least if there is just one stationary solution, or if the perturbation is sufficiently small. In this case, the system will usually be well described by its equilibrium (y1,...,yi,...,yn). However, if at least one of the eigenvalues is positive (or has a positive real part), the system will eventually be driven away from the stationary solution, and it might end up in a different stationary solution. In systems of non-linear dynamical equations, non-stationary behaviors such as oscillatory or chaotic solutions are possible as well, which is well known from complexity theory (Haken 2012; Nowak, Andersen, and Borkowski 2014). Moreover, even if all eigenvalues have negative real parts (i.e. all variables tend to follow a damped dynamics), such that the system is expected to behave stably, it might happen that new perturbations occur before previous ones have disappeared. A nice example of this effect, which is sometimes called "convective instability," is the "bullwhip effect" that is sometimes observed in supply chains (Helbing and Lämmer 2005). A similar effect might be relevant for financial markets (where it may create bubbles or crashes) and for other socio-economic systems experiencing high innovation rates.

In fact, non-equilibrium behaviors of socio-economic systems are common. A socio-economic system in equilibrium cannot produce the innovations needed to adapt well to a changing world. It is the nature of many innovations that they destabilize a previously established equilibrium and promote a new structure, process or system behavior. Innovations tend to increase diversity, and diversity tends to accelerate innovation (Helbing, Treiber, and Saam 2005). Moreover, a diverse economy is associated with a high gross national product (Hidalgo et al. 2007; Page 2008). Innovations are, therefore, desirable. The related process of differentiation is an important non-equilibrium feature of successful economies, and heterogeneity should therefore be a key ingredient of economic models (Gallegati 2014). So far, however, it is still a theoretical challenge to understand the conditions that create particular kinds of inventions, and also the conditions supporting their spreading.
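To make the linear stability recipe above concrete, here is a small Python sketch; the two-variable system F is an arbitrary illustrative choice, not a model from the paper:

```python
import numpy as np

def F(x):
    # arbitrary example system dx/dt = F(x) with a stationary point at x = (0, 0)
    return np.array([-x[0] + 0.5 * x[1],
                     0.2 * x[0] - x[1] + x[0] * x[1]])

def jacobian(F, y, eps=1e-6):
    # numerical matrix of the partial derivatives dF_k/dx_i at the point y
    n = len(y)
    J = np.zeros((n, n))
    for i in range(n):
        e = np.zeros(n); e[i] = eps
        J[:, i] = (F(y + e) - F(y - e)) / (2 * eps)
    return J

y = np.zeros(2)                       # stationary point: F(y) = 0
eig = np.linalg.eigvals(jacobian(F, y))
print(eig, "-> stable" if np.all(eig.real < 0) else "-> unstable")
```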
A model that allows one to grasp innovation as a system-immanent process, considering effects of randomness, would be highly desirable. However, this is difficult because innovations may be disruptive in the sense that they do not just improve the performance of a previously existing technology or procedure, but also create entirely new quality dimensions or functionalities. As one of the comments put it, strategy spaces cannot be specified ahead of time (Wolpert 2014). Innovation is open-ended (Lewis 2014). It can transcend the existing socio-economic system and may not be captured by a closed system of equations. Therefore, certain aspects related to novelty-generation and emergence, such as "radical" (Ormerod 2014; Lewis 2014) or "fundamental" uncertainty (Helbing 2013b) – where the probabilities and/or utilities of certain events cannot be enumerated anymore – are difficult to account for.

Nevertheless, evolutionary models considering mutations (Helbing 1992; Young 1993; Weibull 1997; Helbing, Treiber, and Saam 2005; Gintis 2009) try to capture at least some of the process of novelty-generation. But certain outcomes can only be understood as co-evolutionary processes, for which correlations are essential. Then, the common factorization assumption used to derive the mean-value equations underlying many representative agent models cannot be applied (applying it would eliminate the relevant emergent phenomena). The entire concept of "correlated equilibria" (or "resonant correlations," as Vernon Smith, 2014, likes to call them) would obviously not work if correlations were not relevant.

Role of randomness

I fully agree that randomness may have significant effects (Smith 2014). For example, it may lead to the emergence of cooperation between strangers (Grund et al. 2013). The emergence of the "homo socialis" that I mentioned earlier would not occur without "errors" or "noise" (Smith 2014). In fact, the transition from the "homo economicus" to the "homo socialis" needs a coincidence of random mutations of several behaviors in a certain neighborhood. Initially, such mutations are dysfunctional and do not pay off, i.e. they turn out to be "mistakes." But beyond a certain critical group size, friendliness pays off.

Another example of the relevance of noise has been given in a recent experimental paper (Mäs and Helbing 2014). There, we have shown that a deterministic micro-level theory – the myopic best response rule – describes 96 percent of all individual decisions correctly, but it surprisingly fails to reproduce the outcome of the collective dynamics. This can happen when small deviations matter, i.e. when the stationary (or "equilibrium") solution is unstable. Then, tiny perturbations can sometimes trigger dramatic amplifications through cascade effects, which may even have system-wide impacts (Helbing 2013b). Note that heterogeneity in a system may have similar implications as well (Gallegati and Kirman 1999). In such cases, local interaction effects and correlations can be so relevant that they sometimes produce very different outcomes from what a representative agent model predicts (Gallegati 2014). Interestingly, adding noise to decision models can increase their predictive power. For example, in contrast to the above-mentioned deterministic best response model, a stochastic version corresponding to the multinomial logit model (McFadden 1973) reproduces the distribution of macro-level experimental outcomes much better (Mäs and Helbing 2014).

An alternative foundation of decision theory

In many cases, it is possible to model the role of noise by stochastic games (Wolpert 2014) or by following a master equation approach (Weidlich 2000). The latter can also be used as an alternative starting point of choice theory (Helbing 1995). This line of thought to substantiate utility theory can be summarized as follows (Helbing 2004): Let us assume choice options x1, x2, ..., xi, ..., xn, and choice probabilities p(xi) (which may change over time). Then we can define a transformation via p(xi) = N*exp(ß*vi), where N is a normalization factor and ß is a noise parameter. This transformation with the exponential function may be justified by the logarithmic law of psychophysics underlying our senses, or by the geometric averaging that people tend to perform. There is also a relationship to the grand-canonical distribution in physics (Helbing 1995). If the parameter ß were infinite, this would correspond to a deterministic choice of the option with the highest utility, but in realistic settings, ß is finite. The values vi, which I will call utilities, can be ordered to define a preference scale, which reflects different choice probabilities.

An interesting implication is the following: Let us assume a lottery choosing x1 with probability q and x2 with probability (1-q). The expected utility of this new choice option x3 would be v3 = q*v1 + (1-q)*v2. The choice probability would then be proportional to exp(ß*v3) = exp{ß*[q*v1 + (1-q)*v2]} = exp(ß*q*v1) * exp[ß*(1-q)*v2] = [exp(ß*v1)]^q * [exp(ß*v2)]^(1-q), which is proportional to p(x1)^q * p(x2)^(1-q) with p(x1) = N*exp(ß*v1) and p(x2) = N*exp(ß*v2). This is the well-established and widely used Cobb-Douglas function.

In many cases, one needs, of course, to consider joint probabilities p(xi,xj) = p(xi|xj)*p(xj). Then, the Bayesian formula follows directly from probability theory. We can also transform the conditional probabilities p(xi|xj) of choosing xi given xj – without loss of generality we may write p(xi|xj) = N*exp(ß*uij). Then, uij can be split up into a symmetric and an antisymmetric part: uij = sij + aij with sij = (uij+uji)/2 = sji and aij = (uij-uji)/2 = -aji. One possible specification of the antisymmetric part would be aij = vi - vj, where vi can again be called a utility. sij may be interpreted as the similarity between two options xi and xj, and dij = exp(-ß*sij) = dji can be used to define distances. Conditional probabilities are necessary to understand not only conditional choice (which is, for example, relevant to understand social norms), but also sequences of actions, which are part of many social roles, and they are relevant for correlated equilibria as well. Turn-taking and its evolution is a nice example of this (Helbing et al. 2005).

The above foundation of utility-based decision theory has the appeal that it does not require one to assume a computation or even a maximization of utility. It just assumes choice probabilities. When conditional probabilities are considered, one can also model dependencies on irrelevant alternatives and intransitive preference scales (Isaac 2014; Ormerod 2014), such as different restaurant choices (Hodgson 2014). In fact, conditional preferences are important to understand the variability of preferences over time. For example, when we have eaten, we are not hungry anymore, and other things become more preferable. I will come back to this saturation-type time dependence of individual preferences below. Another nice example is a competitive game on a circle, where one gets the highest payoff if one is a step ahead of the others (Frey and Goldstone 2013).
This produces a constant forward movement. For example, in business, one always likes to be a step ahead of the competition. This causes constant change. But after a few steps, one might end up again where one started. In fact, "fashion cycles" are a well-known phenomenon (Helbing 1995).
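As a small numerical check of the lottery argument above (the utilities v1, v2, the weight q, and the noise parameter ß are made-up numbers):

```python
import numpy as np

beta = 2.0                    # noise parameter ß (assumed value)
v1, v2 = 0.3, 1.0             # utilities of options x1, x2 (assumed values)
q = 0.4                       # lottery: x1 with probability q, x2 with probability 1-q

v3 = q * v1 + (1 - q) * v2    # expected utility of the lottery option x3
lhs = np.exp(beta * v3)
rhs = np.exp(beta * v1) ** q * np.exp(beta * v2) ** (1 - q)   # Cobb-Douglas form
print(np.isclose(lhs, rhs))   # True: exp(ß*v3) = [exp(ß*v1)]^q * [exp(ß*v2)]^(1-q)

# with normalization N, the choice probabilities p(x_i) = N * exp(ß * v_i)
p = np.exp(beta * np.array([v1, v2, v3]))
print(p / p.sum())
```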

Beyond rational choice

In agreement with some of the comments (Goldstone 2014; Nowak, Andersen, and Borkowski 2014), I am convinced that the above decision theory needs further extensions. There is a lot of evidence that evolution has equipped humans with different incentive and reward systems, for example, sexual pleasure (to ensure reproduction), possession-related satisfaction (to survive in times of crises), appreciation of novelty (to explore opportunities and risks), or empathy-related satisfaction – sympathetic fellow-feeling, as Vernon Smith (2014) calls it. These establish different motivational factors, which – I claim – should not be aggregated into a single utility function, but would be better represented by different dimensions of utility. These different utilities cannot be perfectly traded against each other, and their relative importance may change quickly, thereby also changing our preference scales. In other words, human behavior results from different drivers, which dominate for some time and then give way to another. At each point in time, depending on the respective situational context, we prioritize a certain objective – here, the concept of "self-regulation" comes in (Lindenberg 2014). The switching between diverse objectives might be imagined to work similarly to the self-controlled traffic lights we have developed to serve vehicle queues at intersections (Lämmer and Helbing 2008). This self-control approach is based on the service of the most pressing local needs. Interestingly, when the externalities on neighboring intersections are taken into account, this distributed bottom-up control even outperforms classical attempts of top-down optimization (Helbing 2013a).

Let us discuss next how the apparently incompatible decision theories based on rational choice models and on the concept of decision heuristics (Gigerenzer, Todd et al. 2000; Gilovich, Griffin, and Kahneman 2002) may be related to each other. It is plausible to me that people try to increase their different rewards (see the "hedonic goal" mentioned in Lindenberg 2014), and that they learn various heuristics for this, to improve their outcomes. It also makes sense to assume that a heuristic is selected depending on the situational context of a decision, such that framing matters (Lindenberg 2014). In contrast to the utility-maximizing approach of rational choice theory, heuristics do not necessarily result in optimal choices. However, they are time- and energy-efficient, and on average they work well, given sufficient opportunities to learn. Therefore, after a long enough learning time, the application of good heuristics would come pretty close to the maximization of a utility function. In other words, on an aggregate level, rational choice theory would be a good approximation of heuristic-based decision-making (but multiple utility dimensions for different, non-aligned reward systems and the switching between them would still have to be taken into account). In such a framework, rational decision-making may be seen as an emergent, approximate outcome, depending on the decision context (Gallegati 2014). In fact, I am convinced that we can understand the diverse reward systems as results of (co-)evolutionary processes, and that the decision heuristics and their application can be explained as a result of reinforcement learning, given certain cognitive abilities.
The ERC MOMENTUM project I am currently leading is trying to elaborate such an approach, based on agent-based computer simulations of cognitive agents with a virtual brain. These simulations distinguish processes on three different time scales: (i) decision-making, (ii) learning, and (iii) biological evolution. They involve genetic inheritance under mutations and reinforcement learning in an environment where individuals compete for different kinds of rewards, and individual success influences reproduction rates and the likelihood to be imitated. The ultimate ambition of the MOMENTUM project is to explain the emergence of reward systems, individual and collective intelligence, social behavior, and culture from first principles.

Furthermore, to understand collective intelligence, it is important to consider the social nature of individuals. "Networked minds" (Grund et al. 2013) allow for parallel information processing, knowledge sharing, etc. Then, not everyone has to evaluate all pieces of information relating to a certain problem (such as identifying the best insurance contract). It is enough if everyone evaluates some information and people then compare their conclusions with each other. (In fact, we don't read the details of all insurance contracts before we choose one; we ask some colleagues and friends we trust, and follow up some of their recommendations with further in-depth analysis. This is something not well represented by a theory of independent decision-making.) Putting it differently, collective intelligence allows individuals to process information in a distributed way, and to jointly find solutions that are better than each individual one. An important precondition for this is diversity, i.e. the fact that individuals often do not decide and behave in a representative way (Page 2008).

Gene-culture co-evolution

This brings us to the subject of gene-culture co-evolution, the understanding of which requires concepts such as cultural and multi-level selection (Lewis 2014). Determining to what extent individual preferences result from genetic inheritance as compared to cultural transmission by learning will certainly require further scientific studies. Universal facial expressions (Ekman and Friesen 1971) probably support the genetic inheritance of certain cultural abilities, but many other aspects, such as religious values and beliefs, may be transmitted purely culturally. Imitation (Helbing 1992, 1995), teaching, and learning play a similarly important role in inheriting culture as genetic inheritance plays in spreading physiological capabilities. There must be a reason why most human offspring stay with their parents for almost two decades; this actually suggests a high relevance of cultural transmission.

However, when trying to understand human behavior, the role of biology can certainly not be ignored. Evolution determines our physiological capabilities. Our brain determines our cognitive ones. Cognitive abilities influence our behavior, our social institutions, and our reproduction – hence, evolution as well. In other words, we probably have a co-evolution of physiological, cognitive and social abilities. In fact, in certain cases it is not so clear whether a behavior is genetically inherited or culturally spread. For example, is a preference for fairness and cooperation genetically or culturally inherited, or both? The capacity to speak, evolving together with language use, is an interesting example of a co-evolution between physiological and cultural abilities. I also expect that the cognitive capacity for empathy (being able to put oneself into the shoes of others, see Lindenberg 2014) is genetically transmitted, while education determines how we use it.

Importance and origin of morality

The above considerations are also relevant for another important subject, which has been highlighted by one of the comments, namely learning to self-restrain (Smith 2014). It has been rightly pointed out that "formal legal rules are insufficient to generate emergent coordinated actions; informal moral rules of promise-keeping and truth-telling are needed as well" (Lewis 2014; see also Hodgson 2014). Yes, moral judgments are not simply expressions of an individual's interests, preferences, sentiments or beliefs, but a matter of doing the right things, even if one doesn't like them. And I agree that the ability to consider moral rules is part of what makes us human. In particular, I concur with the statements that "norms are a glue of societies," as Michael Hechter (2014) points out, and that the "moral legitimacy of the legal system in the eyes of citizens is crucial" (Hodgson 2014; see also Lindenberg 2014).

The theory of correlated equilibria offers a partial understanding of some of these issues. For example, it allows one to explain the emergence of social conventions (Helbing 1992; Young 1993), social norms (Helbing and Johansson 2010), or turn-taking without a "choreographer" (Helbing et al. 2005; Goldstone 2014). Generally, social conventions and social norms help to improve coordination and to reduce transaction failures (Winter, Rauhut and Helbing 2012). They may change the conditional choice probabilities or even the choice set (when norms are "internalized"). However, the concept of correlated equilibria certainly does not give a full picture. On the one hand, the contents of moral values are hard to capture by means of theories. On the other hand, norms are often stabilized ("cemented") by institutions such as police and jurisdiction, religion and culture.

But can we at least understand the origin of morality by quantitative models? This is in fact the case. A partial answer is given by one of our agent-based computer simulations (Helbing et al. 2010). It studies a social dilemma situation, in which people can choose between four different strategies: (1) cooperate and punish defectors, (2) just cooperate, while avoiding costly punishment, (3) defect, or (4) defect while punishing other defectors. One may call type (1) "moral" and type (4) "immoral" or "hypocritical" behavior. The simulation outcomes for this setting, when assuming the imitation of better-performing behaviors of interaction partners, are quite interesting: When everyone interacts with everybody else or with randomly chosen interaction partners, corresponding to a representative agent model, a "tragedy of the commons" results, where most individuals defect, while defection is not punished. In contrast, in a spatial setting where everyone interacts with their direct neighbors, moral behavior can emerge, i.e. widespread cooperation with a punishment of defectors. (Therefore, both the first- and second-order free-rider dilemmas are solved.) This is due to homophily: "birds of a feather," i.e. similar strategies, cluster together. As a consequence, moralists don't have to compete with cooperators, but interact with defectors, such that costly punishment can succeed and spread. Local interactions and the co-evolution of punishment and cooperation are key to success. But the evolution of morality has, of course, further facets: it also involves deliberation, which requires higher-level intellectual abilities; and it also concerns the evolution of particular cultures and social institutions.
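A minimal sketch of this four-strategy imitation dynamics on a grid is given below; the payoff values, grid size and update rule are illustrative assumptions, not the published parameters of Helbing et al. (2010):

```python
import numpy as np

L = 30                        # L x L grid with 4 neighbors (assumed size)
COOP = [1, 1, 0, 0]           # strategies: (1) moralist, (2) cooperator, (3) defector, (4) hypocrite
PUN  = [1, 0, 0, 1]           # who punishes defectors
b, c = 1.0, 0.6               # benefit of received cooperation, cost of cooperating (assumed)
fine, cost = 0.8, 0.3         # punishment fine and cost of punishing (assumed)

rng = np.random.default_rng(1)
S = rng.integers(0, 4, (L, L))

def neighbors(i, j):
    return [((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L)]

def payoff(i, j):
    p = 0.0
    for (k, l) in neighbors(i, j):
        p += b * COOP[S[k, l]] - c * COOP[S[i, j]]
        if not COOP[S[i, j]] and PUN[S[k, l]]:   # I defect and my neighbor punishes me
            p -= fine
        if not COOP[S[k, l]] and PUN[S[i, j]]:   # my neighbor defects and I pay to punish
            p -= cost
    return p

for step in range(100_000):
    # imitation of better-performing behaviors of interaction partners
    i, j = rng.integers(0, L, 2)
    k, l = neighbors(i, j)[rng.integers(0, 4)]
    if payoff(k, l) > payoff(i, j):
        S[i, j] = S[k, l]

names = ["moralists", "cooperators", "defectors", "hypocrites"]
print(dict(zip(names, np.bincount(S.ravel(), minlength=4).tolist())))
```

In a well-mixed variant (imitating randomly chosen partners instead of neighbors), defectors typically take over, which is the "tragedy of the commons" contrast described in the text above.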

Role of data and experiments

Finally, I agree with the comment that we need better data even more than better theories (Macy 2014). Therefore, the role of computational social science (Lazer et al. 2009) deserves to be stressed a lot more. I believe quick scientific progress of socio-economic theories will crucially depend on the establishment of a circular feedback between theory and empirical or experimental evidence (Eckel and Sell 2014): data allow one to validate and calibrate or even empirically derive socio-economic models, but theories can also help one to identify interesting decision experiments (Helbing and Yu 2010) and to set up better measurement processes. Besides lab and web experiments, Big Data about human activities will play a much bigger role in future socio-economic research (Conte et al. 2011). This ranges from the behavior of financial markets (Preis et al. 2012) over mobility patterns (Song et al. 2010) or daily activities (Golder and Macy 2011) to the spreading of culture (Schich et al. 2014). I also agree that we need to pay more attention to socio-economic interaction networks (Schweitzer et al. 2009; Hechter 2014; Macy 2014; Nowak et al. 2014), as they can have a dramatic influence on the system behavior (Helbing et al. 2010).

The activities of my research team are, in fact, trying to bring these aspects together. For this, we have developed the Open Data Search Engine "Living Archive" (http://livingarchive.inn.ac, https://github.com/bitmorse/livingarchive), the NodeGame platform for Web experiments (http://www.nodegame.org/preview/, https://github.com/nodeGame), and the Virtual Journal platform to identify relevant scientific literature across disciplinary boundaries (http://vijo.inn.ac, https://github.com/bitmorese/vijo). These activities subscribe to an open source spirit enabling a community-based effort. The aim of the FuturICT initiative (http://www.futurict.eu) is to develop this on a global scale. We can do this together and thereby create a collective knowledge base that cuts across disciplinary boundaries. It would be great if we could even establish a collective (problem-solving) intelligence, which goes beyond an additive approach.



FET Flagship Initiative

The activities leading to these results have received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 284709 - project 'FuturICT', a Coordination and Support Action in the Information and Communication Technologies activity area.