Biological competition is widely believed to result in the evolution of selfish preferences. The related concept of the ‘homo economicus’ is at the core of mainstream economics. However, there is also experimental and empirical evidence for other-regarding preferences. Here we present a theory that explains both self-regarding and other-regarding preferences. Assuming conditions promoting non-cooperative behaviour, we demonstrate that intergenerational migration determines whether evolutionary competition results in a ‘homo economicus’ (showing self-regarding preferences) or a ‘homo socialis’ (having other-regarding preferences). Our model assumes spatially interacting agents playing prisoner's dilemmas, who inherit a trait determining ‘friendliness’, but mutations tend to undermine it. Reproduction is ruled by fitness-based selection without a cultural modification of reproduction rates. Our model calls for a complementary economic theory for ‘networked minds’ (the ‘homo socialis’) and lays the foundations for an evolutionarily grounded theory of other-regarding agents, explaining individually different utility functions as well as conditional cooperation.

How Natural Selection Can Create Both Self- and Other-Regarding Preferences, and Networked Minds

Innovation is to organizations what evolution is to organisms: it is how organizations adapt to environmental change and improve. Yet despite advances in our understanding of evolution, what drives innovation remains elusive. On the one hand, organizations invest heavily in systematic strategies to accelerate innovation. On the other, historical analysis and individual experience suggest that serendipity plays a significant role. To unify these perspectives, we analysed the mathematics of innovation as a search for designs across a universe of component building blocks. We tested our insights using data from language, gastronomy and technology. By measuring the number of makeable designs as we acquire components, we observed that the relative usefulness of different components can cross over time. When these crossovers are unanticipated, they appear to be the result of serendipity. But when we can predict crossovers in advance, they offer opportunities to strategically increase the growth of the product space.
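The design-search framing above can be made concrete with a small combinatorial toy model. All component names, recipes, and the acquisition order below are invented for illustration; the point is only that the number of makeable designs, and hence a component's relative usefulness, can cross over as the component set grows.

```python
# Toy "universe": each design (recipe) is the set of components it requires.
# Component names and recipes are invented for illustration.
designs = [
    {"a"}, {"a", "b"}, {"a", "c"},            # "a" is useful early
    {"d", "e"}, {"d", "f"}, {"d", "e", "f"},  # "d" pays off only later
    {"b", "d"}, {"c", "d", "e"},
]

def makeable(designs, owned):
    """Count designs buildable from the owned component set."""
    return sum(required <= owned for required in designs)

# Acquire components one at a time and track the size of the product space.
order = ["a", "b", "c", "d", "e", "f"]
owned = set()
growth = []
for component in order:
    owned.add(component)
    growth.append(makeable(designs, owned))

print(growth)  # [1, 2, 3, 4, 6, 8]
```

In this toy universe, acquiring "d" unlocks nothing immediately but eventually enables more designs than "a" once its complements arrive, which is the kind of crossover the abstract describes.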

• China’s regional economic complexity is quantified by modeling 25 years of public firm data.
• A high positive correlation between economic complexity and macroeconomic indicators is shown.
• Economic complexity has explanatory power for economic development and income inequality.
• Multivariate regressions suggest the robustness of these results when controlling for socioeconomic factors.

Coordination games provide ubiquitous interaction paradigms to frame human behavioral features, such as information transmission, conventions and languages as well as socio-economic processes and institutions. By using a dynamical approach, such as Evolutionary Game Theory (EGT), one is able to follow, in detail, the self-organization process by which a population of individuals coordinates into a given behavior. Real socio-economic scenarios, however, often involve the interaction between multiple co-evolving sectors, with specific options of their own, that call for generalized and more sophisticated mathematical frameworks. In this paper, we explore a general EGT approach to deal with coordination dynamics in which individuals from multiple sectors interact. Starting from a two-sector, consumer/producer scenario, we investigate the effects of including a third co-evolving sector that we call public. We explore the changes in the self-organization process of all sectors, given the feedback that this new sector imparts on the other two.
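For readers unfamiliar with the EGT machinery, here is a minimal sketch of the standard two-strategy replicator dynamics for a single-sector coordination game. The payoff matrix is an arbitrary illustrative choice, not taken from the paper, whose multi-sector framework generalizes this baseline.

```python
import numpy as np

# Illustrative payoff matrix for a two-strategy coordination game
# (both players prefer to match; strategy 0 is payoff-dominant).
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])

def replicator_step(x, A, dt=0.01):
    """One Euler step of the replicator dynamics x_i' = x_i (f_i - phi)."""
    f = A @ x          # fitness of each strategy against the population
    phi = x @ f        # population-average fitness
    return x + dt * x * (f - phi)

# Starting above the unstable mixed point (x* = 1/3 for this matrix),
# the population self-organizes onto the payoff-dominant convention.
x = np.array([0.4, 0.6])
for _ in range(5000):
    x = replicator_step(x, A)

print(x)  # approaches [1, 0]
```

The self-organization the abstract refers to is visible here as the flow toward one of the two pure conventions, with the basin boundary set by the mixed equilibrium.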

As usual with the Conferences on Complex Systems, apart from the main tracks of the conference, there will be two full days of satellites (Wednesday, September 20th and Thursday, September 21st). We therefore call for satellite proposals for half-day or full-day events. Satellite organizers are responsible for promoting, organizing, reviewing, and scheduling their sessions.

Scientifically sound proposals should be no longer than 1000 words and should include the scope of the satellite, its goals, a tentative program (format, half-day or full-day, invited speakers), the estimated attendance, and the organizers.

Proposals should be submitted in PDF format to satellites@ccs17.unam.mx

Statistical Physics, which was born as an attempt to explain the thermodynamic properties of systems from their atomic and molecular components, has evolved into a solid body of knowledge that allows for the understanding of macroscopic collective phenomena. The tools developed by Statistical Physics, together with the Theory of Dynamical Systems, are of key importance for understanding Complex Systems, which are characterized by the emergent and collective phenomena of many interacting units. While the basic body of knowledge of Statistical Physics and Dynamical Systems is well described in textbooks at the undergraduate or master level, applications to open problems in the context of Complex Systems are well beyond the scope of those textbooks. Aiming to bridge this gap, the Topical Group on Statistical and Non Linear Physics (GEFENOL) of the Royal Spanish Physical Society promotes the Summer School on Statistical Physics of Complex Systems series, open to PhD students and young postdocs worldwide. Following the spirit and concept of the previous successful editions (Palma de Mallorca 2011, 2013, 2014; Benasque 2012; Barcelona 2015; and Pamplona 2016), the 7th edition will take place from June 19 to 30, 2017. During these two weeks there will be a total of six courses.

Collective intelligence is the ability of a group to perform more effectively than any individual alone. Diversity among group members is a key condition for the emergence of collective intelligence, but maintaining diversity is challenging in the face of social pressure to imitate one's peers. We investigate the role incentives play in maintaining useful diversity through an evolutionary game-theoretic model of collective prediction. We show that market-based incentive systems produce herding effects, reduce information available to the group and suppress collective intelligence. In response, we propose a new incentive scheme that rewards accurate minority predictions, and show that this produces optimal diversity and collective predictive accuracy. We conclude that real-world systems should reward those who have demonstrated accuracy when majority opinion has been in error.
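A hedged sketch of the two incentive schemes contrasted in the abstract, with binary predictions and invented numbers. The paper's actual model is evolutionary and game-theoretic; this only illustrates who gets paid under each rule.

```python
import numpy as np

def market_reward(preds, truth):
    """Market-style scheme: every accurate predictor is rewarded."""
    return (preds == truth).astype(float)

def minority_reward(preds, truth):
    """Reward accuracy only when it contradicts the majority opinion."""
    majority = int(preds.mean() >= 0.5)
    return ((preds == truth) & (preds != majority)).astype(float)

# Invented example: four agents herd on 0, two dissent with 1.
preds = np.array([0, 0, 0, 0, 1, 1])

# If the majority happens to be right, the market scheme rewards herding,
# while the minority scheme pays nothing.
print(market_reward(preds, 0).sum(), minority_reward(preds, 0).sum())  # 4.0 0.0

# If the majority is wrong, only the accurate dissenters hold information,
# and the minority scheme singles them out.
print(market_reward(preds, 1).sum(), minority_reward(preds, 1).sum())  # 2.0 2.0
```

The design choice matches the abstract's conclusion: paying for accuracy only when majority opinion is in error removes the incentive to imitate peers.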

Transfer entropy has been used to quantify the directed flow of information between source and target variables in many complex systems. Originally formulated in discrete time, we provide a framework for considering transfer entropy in continuous time systems. By appealing to a measure theoretic formulation we generalise transfer entropy, describing it in terms of Radon-Nikodym derivatives between measures of complete path realisations. The resulting formalism introduces and emphasises the idea that transfer entropy is an expectation of an individually fluctuating quantity along a path, in the same way we consider the expectation of physical quantities such as work and heat. We recognise that transfer entropy is a quantity accumulated over a finite time interval, whilst permitting an associated instantaneous transfer entropy rate. We use this approach to produce an explicit form for the transfer entropy for pure jump processes, and highlight the simplified form in the specific case of point processes (frequently used in neuroscience to model neural spike trains). We contrast our approach with previous attempts to formulate information flow between continuous time point processes within a discrete time framework, which incur issues that our continuous time approach naturally avoids. Finally, we present two synthetic spiking neuron model examples to exhibit the pertinent features of our formalism, namely that the information flow for point processes consists of discontinuous jump contributions (at spikes in the target) interrupting a continuously varying contribution (relating to waiting times between target spikes).
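The abstract's starting point, transfer entropy in discrete time, can be illustrated with a simple plug-in estimator over symbol sequences (history length 1, binary alphabet). This is the classical discrete-time quantity, not the continuous-time formalism the paper develops.

```python
import numpy as np
from collections import Counter

def transfer_entropy(source, target):
    """Plug-in estimate of discrete-time transfer entropy (history length 1):
    T = sum over (t1, t0, s0) of p(t1, t0, s0) * log2[ p(t1|t0,s0) / p(t1|t0) ]."""
    s0, t0, t1 = source[:-1], target[:-1], target[1:]
    n = len(t1)
    joint = Counter(zip(t1, t0, s0))   # counts of (target-next, target, source)
    tt = Counter(zip(t1, t0))
    ts = Counter(zip(t0, s0))
    t_only = Counter(t0)
    te = 0.0
    for (a, b, c), k in joint.items():
        p_full = k / ts[(b, c)]              # p(t1 | t0, s0)
        p_reduced = tt[(a, b)] / t_only[b]   # p(t1 | t0)
        te += (k / n) * np.log2(p_full / p_reduced)
    return te

# y copies x with a one-step delay, so information flows x -> y but not back.
rng = np.random.default_rng(1)
x = rng.integers(0, 2, 10_000)
y = np.roll(x, 1)
y[0] = 0

te_xy = transfer_entropy(x, y)
te_yx = transfer_entropy(y, x)
print(te_xy, te_yx)  # te_xy close to 1 bit, te_yx close to 0
```

The asymmetry of the estimate on this directed example is exactly the "directed flow of information" the measure is designed to capture.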

Transfer entropy in continuous time, with applications to jump and neural spiking processes

Most scientists will characterize complexity as the result of one or more factors out of three: (i) high dimensionality, (ii) interaction networks, and (iii) nonlinearity. High dimensionality alone need not give rise to complexity. The best known cases come from linear algebra: To determine the eigenvalues and eigenvectors of a large square matrix, for example, is complicated but not complex. Every mathematician, physicist or economist, and most scholars from other disciplines, can write down an algorithm that would work provided unlimited computer time and storage space are given. (...)
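The eigenvalue example can be made literal: for a symmetric matrix of any size, one library call solves the problem deterministically up to floating point, which is what "complicated but not complex" means in practice.

```python
import numpy as np

# High-dimensional but merely "complicated": the eigendecomposition of a
# 500 x 500 symmetric random matrix is one deterministic library call.
rng = np.random.default_rng(42)
n = 500
A = rng.standard_normal((n, n))
A = (A + A.T) / 2  # symmetrize so the spectrum is real

eigenvalues, eigenvectors = np.linalg.eigh(A)

# Verify the defining relation A v = lambda v for the largest eigenvalue.
v = eigenvectors[:, -1]
residual = np.linalg.norm(A @ v - eigenvalues[-1] * v)
print(residual)  # tiny: the problem is solved exactly up to floating point
```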

How complexity originates: Examples from history reveal additional roots to complexity
Peter Schuster
Complexity
DOI: 10.1002/cplx.21841

Here we sketch a new derivation of Zipf's law for word frequencies based on optimal coding. The structure of the derivation is reminiscent of Mandelbrot's random typing model but it has multiple advantages over random typing: (1) it starts from realistic cognitive pressures, (2) it does not require fine tuning of parameters, and (3) it sheds light on the origins of other statistical laws of language and thus can lead to a compact theory of linguistic laws. Our findings suggest that the recurrence of Zipf's law in human languages could originate from pressure for easy and fast communication.
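Zipf's rank-frequency law itself is easy to demonstrate numerically. The sketch below samples a synthetic corpus from a rank distribution with exponent 1 and recovers the exponent from the empirical rank-frequency curve; this only illustrates the statistical law, not the paper's optimal-coding derivation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "corpus": word ranks drawn from p(r) proportional to 1/r.
V = 1000
ranks = np.arange(1, V + 1)
p = 1.0 / ranks
p /= p.sum()
corpus = rng.choice(ranks, size=200_000, p=p)

# Empirical rank-frequency curve: word counts sorted in decreasing order.
freqs = np.sort(np.bincount(corpus)[1:])[::-1]
freqs = freqs[freqs > 0]
r = np.arange(1, len(freqs) + 1)

# Estimate the Zipf exponent by least squares in log-log coordinates,
# restricted to the first 100 ranks where sampling noise is small.
slope, _ = np.polyfit(np.log(r[:100]), np.log(freqs[:100]), 1)
print(-slope)  # close to the true exponent of 1
```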

Compression and the origins of Zipf's law for word frequencies
Ramon Ferrer-i-Cancho

Deep artificial neural networks (DNNs) are typically trained via gradient-based learning algorithms, namely backpropagation. Evolution strategies (ES) can rival backprop-based algorithms such as Q-learning and policy gradients on challenging deep reinforcement learning (RL) problems. However, ES can be considered a gradient-based algorithm because it performs stochastic gradient descent via an operation similar to a finite-difference approximation of the gradient. That raises the question of whether non-gradient-based evolutionary algorithms can work at DNN scales. Here we demonstrate they can: we evolve the weights of a DNN with a simple, gradient-free, population-based genetic algorithm (GA) and it performs well on hard deep RL problems, including Atari and humanoid locomotion. The Deep GA successfully evolves networks with over four million free parameters, the largest neural networks ever evolved with a traditional evolutionary algorithm. These results (1) expand our sense of the scale at which GAs can operate, (2) suggest intriguingly that in some cases following the gradient is not the best choice for optimizing performance, and (3) make immediately available the multitude of techniques that have been developed in the neuroevolution community to improve performance on RL problems. To demonstrate the latter, we show that combining DNNs with novelty search, which was designed to encourage exploration on tasks with deceptive or sparse reward functions, can solve a high-dimensional problem on which reward-maximizing algorithms (e.g. DQN, A3C, ES, and the GA) fail. Additionally, the Deep GA parallelizes better than ES, A3C, and DQN, and enables a state-of-the-art compact encoding technique that can represent million-parameter DNNs in thousands of bytes.
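A toy, hedged version of the gradient-free approach: a mutation-only GA with truncation selection and elitism evolving the flattened weights of a tiny MLP on a synthetic regression task. The architecture, population size, and mutation scale are invented for illustration and bear no relation to the paper's hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-8-1 MLP flattened into a 33-dimensional weight vector
# (a toy stand-in for the paper's million-parameter networks).
def forward(w, X):
    W1, b1 = w[:16].reshape(2, 8), w[16:24]
    W2, b2 = w[24:32].reshape(8, 1), w[32]
    return np.tanh(X @ W1 + b1) @ W2 + b2

X = rng.uniform(-1, 1, (64, 2))
y = (X[:, 0] * X[:, 1])[:, None]  # target: a simple nonlinear function

def fitness(w):
    return -np.mean((forward(w, X) - y) ** 2)

# Gradient-free GA: truncation selection plus Gaussian mutation, with elitism.
pop = rng.standard_normal((50, 33)) * 0.5
init_mse = -max(fitness(w) for w in pop)
for generation in range(300):
    scores = np.array([fitness(w) for w in pop])
    elite = pop[np.argsort(scores)[-10:]]   # keep the 10 fittest
    parents = elite[rng.integers(0, 10, 50)]
    pop = parents + rng.standard_normal((50, 33)) * 0.05
    pop[0] = elite[-1]                      # elitism: best survives unmutated

final_mse = -fitness(max(pop, key=fitness))
print(init_mse, final_mse)  # evolution drives the error well below random init
```

Nothing here computes or approximates a gradient; selection plus mutation alone does the optimization, which is the distinction the abstract draws between a GA and ES.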

For someone working or trying to work in data science, statistics is probably the biggest and most intimidating area of knowledge you need to develop. The goal of this post is to reduce what you need to know to a finite number of concrete ideas, techniques, and equations.

Of course, that’s an ambitious goal — if you plan to be in data science for the long term, I’d still expect you to continue learning statistical concepts and techniques throughout your career. But what I’m aiming for is to provide you with a baseline to get you through your interviews and into practicing data science with as short and painless a process as possible. I’ll end each section with key terms and resources for further reading. Let’s dive in.

The mesoscopic level of brain organization, describing the organization and dynamics of small circuits of a few tens to a few thousand neurons, has recently received considerable experimental attention. It is useful for describing small neural systems of invertebrates, and in mammalian neural systems it is often seen as a middle ground that is fundamental for linking single-neuron activity to complex functions and behavior. However, and somewhat counter-intuitively, the behavior of neural networks of small and intermediate size can be much more difficult to study mathematically than that of large networks, and appropriate mathematical methods to study the dynamics of such networks have not yet been developed. Here we consider a model of a network of firing-rate neurons of arbitrary finite size, and we study its local bifurcations using an analytical approach. This analysis, complemented by numerical studies of both the local and global bifurcations, shows the emergence of strong and previously unexplored finite-size effects that are particularly hard to detect in large networks. This study advances the tools available for the comprehension of finite-size neural circuits, going beyond the insights provided by the mean-field approximation and current techniques for the quantification of finite-size effects.
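As a minimal numerical companion, one can simulate a small random firing-rate network to a fixed point and check its local stability through the Jacobian spectrum. The specific rate equation and coupling scale below are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Small firing-rate network (illustrative model, not the paper's):
#   dx_i/dt = -x_i + tanh(sum_j W_ij x_j)
N = 6
W = rng.standard_normal((N, N)) * 0.2  # weak coupling: a stable regime

def simulate(x0, steps=20_000, dt=0.01):
    x = x0.copy()
    for _ in range(steps):
        x = x + dt * (-x + np.tanh(W @ x))
    return x

# The network settles onto a fixed point; the rate-equation residual confirms it.
x_star = simulate(rng.standard_normal(N))
residual = np.linalg.norm(-x_star + np.tanh(W @ x_star))

# Local stability: Jacobian J_ij = -delta_ij + (1 - tanh^2((W x*)_i)) W_ij.
J = -np.eye(N) + (1 - np.tanh(W @ x_star) ** 2)[:, None] * W
print(residual, np.max(np.linalg.eigvals(J).real))
```

Local bifurcation analysis of the kind the abstract describes asks when eigenvalues of exactly this finite-size Jacobian cross the imaginary axis as parameters vary.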

The flagship conference of the Complex Systems Society will go to Latin America for the first time in 2017. The Mexican complex systems community is enthusiastic to welcome colleagues to one of our richest destinations: Cancun.

This book considers a relatively new measure in complex systems, transfer entropy, derived from a series of measurements, usually a time series. After a qualitative introduction and a chapter that explains the key ideas from statistics required to understand the text, the authors then present information theory and transfer entropy in depth. A key feature of the approach is the authors' work to show the relationship between information flow and complexity. The later chapters demonstrate information transfer in canonical systems, and applications, for example in neuroscience and in finance.

The book will be of value to advanced undergraduate and graduate students and researchers in the areas of computer science, neuroscience, physics, and engineering.

We present an exact mathematical framework able to describe site-percolation transitions in real multiplex networks. Specifically, we consider the average percolation diagram valid over an infinite number of random configurations where nodes are present in the system with given probability. The approach relies on the locally treelike ansatz, so that it is expected to accurately reproduce the true percolation diagram of sparse multiplex networks with negligible number of short loops. The performance of our theory is tested in social, biological, and transportation multiplex graphs. When compared against previously introduced methods, we observe improvements in the prediction of the percolation diagrams in all networks analyzed. Results from our method confirm previous claims about the robustness of real multiplex networks, in the sense that the average connectedness of the system does not exhibit any significant abrupt change as its individual components are randomly destroyed.
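As a hedged numerical baseline, site percolation can be measured by Monte Carlo on a single sparse random layer. The paper's contribution is an exact theory for multiplex networks; this sketch only shows the basic quantity involved, the average giant-component fraction versus occupation probability.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(7)

# One sparse random layer (Erdos-Renyi-like) with mean degree 4.
N, mean_degree = 2000, 4.0
m = int(N * mean_degree / 2)
edges = set()
while len(edges) < m:
    i, j = rng.integers(0, N, 2)
    if i != j:
        edges.add((min(i, j), max(i, j)))
edges = list(edges)

def giant_fraction(occupied):
    """Fraction of all nodes in the largest cluster of occupied nodes."""
    parent = list(range(N))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a
    for i, j in edges:
        if occupied[i] and occupied[j]:
            parent[find(i)] = find(j)
    sizes = Counter(find(i) for i in range(N) if occupied[i])
    return max(sizes.values(), default=0) / N

# Average over random site configurations at each occupation probability,
# mirroring the "average percolation diagram" the abstract refers to.
results = {p: np.mean([giant_fraction(rng.random(N) < p) for _ in range(10)])
           for p in (0.1, 0.3, 0.6, 0.9)}
for p, g in sorted(results.items()):
    print(p, round(float(g), 3))
```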

When encountering novel objects, humans are able to infer a wide range of physical properties, such as mass, friction and deformability, by interacting with them in a goal-driven way. This process of active interaction is in the same spirit as a scientist performing experiments to discover hidden facts. Recent advances in artificial intelligence have yielded machines that can achieve superhuman performance in Go, Atari, natural language processing, and complex control problems, but it is not clear that these systems can rival the scientific intuition of even a young child. In this work we introduce a basic set of tasks that require agents to estimate hidden properties such as mass and cohesion of objects in an interactive simulated environment where they can manipulate the objects and observe the consequences. We found that state-of-the-art deep reinforcement learning methods can learn to perform the experiments necessary to discover such hidden properties. By systematically manipulating the problem difficulty and the cost incurred by the agent for performing experiments, we found that agents learn different strategies that balance the cost of gathering information against the cost of making mistakes in different situations.

Complexity science concepts of emergence, self-organization, and feedback suggest that descriptions of systems and events are subjective, incomplete, and impermanent, similar to what we observe in quantum phenomena. Complexity science evinces an increasingly compelling alternative to reductionism for describing physical phenomena, now that shared aspects of complexity science and quantum phenomena are being scientifically substantiated. The establishment of a clear connection between chaotic complexity and quantum entanglement in small quantum systems indicates the presence of common processes involved in thermalization in both large- and small-scale systems. Recent findings in the fields of quantum physics, quantum biology, and quantum cognition demonstrate evidence of the complexity-science characteristics of sensitivity to initial conditions and the emergence of self-organizing systems. Efficiencies in quantum superposition suggest a new paradigm in which our very notion of complexity depends on which information theory we choose to employ.

• A multimodel agent-based simulation environment (PULSE) is presented.
• Model integration techniques are suggested: common space and commonly controlled agents.
• Crowd pressure metrics for simulating crushing and asphyxia in crowds are proposed.
• Simulations of evacuation from a cinema building to the city streets are carried out.
