The emerging field of computational social science is attracting mathematically inclined scientists in ever-increasing numbers. This, in turn, is spurring the creation of academic departments and prompting companies such as the social-network giant Facebook (...) to establish research teams to understand the structure of their networks and how information spreads across them.

Here, it is proposed that thinking on a different level is required to understand what really triggers or removes this barrier. Just counting “dimensions” or “variables” is insufficient. The true intrinsic curse we are facing is the curse of instability. In fact, we argue below that instabilities (a) cause an increase in dimensionality, (b) substantially raise the analytical difficulty, and (c) are a strong indicator of multiscale dynamical complexity. Of course, it turns out that (a)–(c) are intimately related. Although we shall primarily illustrate the concepts with examples arising in mathematics and closely related disciplines, it will be shown that the abstract concept occurs, independently, across disciplines. In fact, we shall see that the curse of instability has already implicitly triggered the emergence of entirely new scientific disciplines. Furthermore, it may lead to the formulation of more concrete guiding principles to address the complexity challenges of the 21st century.

Data from social media are providing unprecedented opportunities to investigate the processes that rule the dynamics of collective social phenomena. Here, we consider an information-theoretic approach to define and measure the temporal and structural signatures typical of collective social events as they arise and gain prominence. We use the symbolic transfer entropy analysis of micro-blogging time series to extract directed networks of influence among geolocalized sub-units in social systems. This methodology captures the emergence of system-level dynamics close to the onset of socially relevant collective phenomena. The framework is validated against a detailed empirical analysis of five case studies. In particular, we identify a change in the characteristic time-scale of the information transfer that flags the onset of information-driven collective phenomena. Furthermore, our approach identifies an order-disorder transition in the directed network of influence between social sub-units. In the absence of a clear exogenous driving, social collective phenomena can be represented as endogenously-driven structural transitions of the information transfer network. This study provides results that can help define models and predictive algorithms for the analysis of societal events based on open source data.
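The symbolic transfer entropy mentioned above can be sketched compactly. The following is an illustrative estimator, not the authors' code: it symbolizes each time series with ordinal (permutation) patterns of embedding dimension `m`, then estimates the transfer entropy from the source's symbols to the target's symbols via co-occurrence counts. The function names, the choice of ordinal patterns, and the parameter `m=3` are all assumptions for illustration; the paper's actual symbolization scheme and lags may differ.

```python
from collections import Counter
from itertools import permutations
from math import log2

def symbolize(series, m=3):
    """Map each length-m window to an index for its ordinal (permutation) pattern."""
    patterns = {p: i for i, p in enumerate(permutations(range(m)))}
    return [patterns[tuple(sorted(range(m), key=lambda k: series[t + k]))]
            for t in range(len(series) - m + 1)]

def symbolic_transfer_entropy(source, target, m=3):
    """Estimate TE(source -> target) in bits from symbol co-occurrence counts."""
    x, y = symbolize(target, m), symbolize(source, m)
    n = min(len(x), len(y)) - 1
    triples = Counter((x[t + 1], x[t], y[t]) for t in range(n))
    pairs_xx = Counter((x[t + 1], x[t]) for t in range(n))
    pairs_xy = Counter((x[t], y[t]) for t in range(n))
    singles = Counter(x[t] for t in range(n))
    te = 0.0
    for (x1, x0, y0), c in triples.items():
        p_joint = c / n                              # p(x_{t+1}, x_t, y_t)
        p_cond_full = c / pairs_xy[(x0, y0)]         # p(x_{t+1} | x_t, y_t)
        p_cond_self = pairs_xx[(x1, x0)] / singles[x0]  # p(x_{t+1} | x_t)
        te += p_joint * log2(p_cond_full / p_cond_self)
    return te
```

Comparing the estimate in both directions between two sub-units gives the direction of influence; a larger TE(A → B) than TE(B → A) suggests A drives B.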

Ecological interactions are highly diverse even when considering a single species: the species might feed on a first, disperse the seeds of a second, and pollinate a third. Here we extend the group model, a method for identifying broad patterns of interaction across a food web, to networks which contain multiple types of interactions. Using this new method, we ask whether the traditional approach of building a network for each type of interaction (food webs for consumption, pollination webs, seed-dispersal webs, host-parasite webs) can be improved by merging all interaction types in a single network. In particular, we test whether combining different interaction types leads to a better definition of the roles species play in ecological communities. We find that, although having more information necessarily leads to better results, the improvement is only incremental if the linked species remain unchanged. However, including a new interaction type that attaches new species to the network substantially improves performance. This method provides insight into possible implications of merging different types of interactions and allows for the study of coarse-grained structure in any signed network, including ecological interaction webs, gene regulation networks, and social networks.

Coevolutionary interactions are thought to have spurred the evolution of key innovations and driven the diversification of much of life on Earth. However, the genetic and evolutionary basis of the innovations that facilitate such interactions remains poorly understood. We examined the coevolutionary interactions between plants (Brassicales) and butterflies (Pieridae), and uncovered evidence for an escalating evolutionary arms race. Although gradual changes in trait complexity appear to have been facilitated by allelic turnover, key innovations are associated with gene and genome duplications. Furthermore, we show that the origins of both chemical defenses and of molecular counter-adaptations were associated with shifts in diversification rates during the arms race. These findings provide an important connection between the origins of biodiversity, coevolution, and the role of gene and genome duplications as a substrate for novel traits.

After growing up together, and mostly growing apart in the second half of the 20th century, the fields of artificial intelligence (AI), cognitive science, and neuroscience are reconverging on a shared view of the computational foundations of intelligence that promotes valuable cross-disciplinary exchanges on questions, methods, and results. We chart advances over the past several decades that address challenges of perception and action under uncertainty through the lens of computation. Advances include the development of representations and inferential procedures for large-scale probabilistic inference and machinery for enabling reflection and decisions about tradeoffs in effort, precision, and timeliness of computations. These tools are deployed toward the goal of computational rationality: identifying decisions with highest expected utility, while taking into consideration the costs of computation in complex real-world problems in which most relevant calculations can only be approximated. We highlight key concepts with examples that show the potential for interchange between computer science, cognitive science, and neuroscience.

Many models proposed to study the evolution of collective action rely on a formalism that represents social interactions as n-player games between individuals adopting discrete actions such as cooperate and defect. Despite the importance of spatial structure in biological collective action, the analysis of n-player games in spatially structured populations has so far proved elusive. We address this problem by considering mixed strategies and by integrating discrete-action n-player games into the direct fitness approach of social evolution theory.

This study characterized double-gene deletion mutants of E. coli with the aim of investigating the sub-optimal physiology of the mutants and the possible roles of latent reactions. It considered, in particular, the effect of the order of the gene deletions on the growth rates and substrate uptake rates of the double-gene deletion mutants. The results indicate that the order in which genes are deleted determines the phenotype of the mutants during the sub-optimal growth phase. The mechanism behind the difference between the observed phenotypes was elucidated using transcriptomic analysis and constraint-based modeling of the mutants.

The high population density in cities confers many advantages, including improved social interaction and information exchange. However, it is often argued that urban living comes at the expense of happiness. The goal of this research is to shed light on the relationship between urban communication and urban happiness. We analyze geo-located social media posts (tweets) within a major urban center (Milan) to produce a detailed spatial map of urban sentiments. We combine this data with high-resolution mobile communication intensity data among different urban areas. Our results reveal that happy (respectively unhappy) areas preferentially communicate with other areas of their type. This observation constitutes evidence of homophilous communities at the scale of an entire city (Milan), and has implications for interventions that aim to improve urban well-being.
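One standard way to quantify the homophily described above is Newman's attribute assortativity coefficient, which is +1 when all communication stays within same-sentiment areas and negative when it preferentially crosses between them. The sketch below is a generic illustration under assumed inputs, not the study's method: `edges` are hypothetical weighted communication links between areas, and `label` maps each area to a sentiment category.

```python
def label_assortativity(edges, label):
    """Newman's attribute assortativity for a weighted, undirected edge list.
    edges: iterable of (u, v, weight); label: dict mapping node -> category."""
    cats = sorted({c for c in label.values()})
    idx = {c: i for i, c in enumerate(cats)}
    k = len(cats)
    e = [[0.0] * k for _ in range(k)]  # mixing matrix of edge weight by category pair
    total = 0.0
    for u, v, w in edges:
        i, j = idx[label[u]], idx[label[v]]
        e[i][j] += w  # count both directions so the matrix is symmetric
        e[j][i] += w
        total += 2 * w
    for i in range(k):
        for j in range(k):
            e[i][j] /= total
    trace = sum(e[i][i] for i in range(k))                      # same-category weight
    a2 = sum(sum(e[i]) * sum(row[i] for row in e) for i in range(k))  # expected at random
    return (trace - a2) / (1 - a2)
```

A value near +1 for a sentiment-labelled communication network would correspond to the paper's finding that happy and unhappy areas each communicate preferentially with their own type.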

The detection and characterization of self-organized criticality (SOC), in both real and simulated data, has undergone many significant revisions over the past 25 years. The explosive advances in the many numerical methods available for detecting, discriminating, and ultimately testing SOC have played a critical role in developing our understanding of how systems experience and exhibit SOC. In this article, methods of detecting SOC are reviewed, from correlations to complexity to critical quantities. A description of the basic autocorrelation method leads into a detailed analysis of application-oriented methods developed in the last 25 years. In the second half of this manuscript, space-based, time-based, and spatio-temporal methods are reviewed and the prevalence of power laws in nature is described, with an emphasis on event detection and characterization. The search for numerical methods to clearly and unambiguously detect SOC in data often leads us outside the comfort zone of our own disciplines - the answers to these questions are often obtained by studying the advances made in other fields of study. In addition, numerical detection methods often provide the optimum link between simulations and experiments in scientific research. We seek to explore this boundary where the rubber meets the road, to review this expanding field of research on the numerical detection of SOC systems over the past 25 years, and to iterate forwards so as to provide some foresight and guidance into developing breakthroughs in this subject over the next quarter of a century.

Conversational modeling is an important task in natural language understanding and machine intelligence. Although previous approaches exist, they are often restricted to specific domains (e.g., booking an airline ticket) and require hand-crafted rules. In this paper, we present a simple approach for this task which uses the recently proposed sequence-to-sequence framework. Our model converses by predicting the next sentence given the previous sentence or sentences in a conversation. The strength of our model is that it can be trained end-to-end and thus requires far fewer hand-crafted rules. We find that this straightforward model can generate simple conversations given a large conversational training dataset. Our preliminary results suggest that, despite optimizing the wrong objective function, the model is able to extract knowledge from both a domain-specific dataset, and from a large, noisy, and general domain dataset of movie subtitles. On a domain-specific IT helpdesk dataset, the model can find a solution to a technical problem via conversations. On a noisy open-domain movie transcript dataset, the model can perform simple forms of common sense reasoning. As expected, we also find that the lack of consistency is a common failure mode of our model.

The oft-repeated claim that Earth’s biota is entering a sixth “mass extinction” depends on clearly demonstrating that current extinction rates are far above the “background” rates prevailing between the five previous mass extinctions. Earlier estimates of extinction rates have been criticized for using assumptions that might overestimate the severity of the extinction crisis. We assess, using extremely conservative assumptions, whether human activities are causing a mass extinction. First, we use a recent estimate of a background rate of 2 mammal extinctions per 10,000 species per 100 years (that is, 2 E/MSY), which is twice as high as widely used previous estimates. We then compare this rate with the current rate of mammal and vertebrate extinctions. The latter is conservatively low because listing a species as extinct requires meeting stringent criteria. Even under our assumptions, which would tend to minimize evidence of an incipient mass extinction, the average rate of vertebrate species loss over the last century is up to 100 times higher than the background rate. Under the 2 E/MSY background rate, the number of species that have gone extinct in the last century would have taken, depending on the vertebrate taxon, between 800 and 10,000 years to disappear. These estimates reveal an exceptionally rapid loss of biodiversity over the last few centuries, indicating that a sixth mass extinction is already under way. Averting a dramatic decay of biodiversity and the subsequent loss of ecosystem services is still possible through intensified conservation efforts, but that window of opportunity is rapidly closing.
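The unit conversion behind the comparison above is easy to make explicit: a background rate of 2 E/MSY means 2 extinctions per million species-years, i.e., 2 extinctions per 10,000 species per century. The sketch below works through that arithmetic; the species count and extinction tally used in the usage note are hypothetical round numbers for illustration, not the study's data.

```python
def background_years(extinctions, n_species, rate_emsy=2.0):
    """Years needed to accumulate the observed number of extinctions at a
    background rate given in E/MSY (extinctions per million species-years)."""
    # rate_emsy per million species-years == rate_emsy per 10,000 species per 100 years
    extinctions_per_year = rate_emsy * n_species / 1_000_000
    return extinctions / extinctions_per_year
```

For example, a hypothetical taxon of 5,000 species with 50 recorded extinctions would need `background_years(50, 5000)` = 5,000 years to lose that many species at the 2 E/MSY background rate, consistent in magnitude with the 800 to 10,000-year range the study reports across vertebrate taxa.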

In this paper, we propose, discuss, and illustrate a computationally feasible definition of chaos which can be applied very generally to situations that are commonly encountered, including attractors, repellers, and non-periodically forced systems. This definition is based on an entropy-like quantity, which we call “expansion entropy,” and we define chaos as occurring when this quantity is positive. We relate and compare expansion entropy to the well-known concept of topological entropy to which it is equivalent under appropriate conditions. We also present example illustrations, discuss computational implementations, and point out issues arising from attempts at giving definitions of chaos that are not entropy-based.

Microbial communities associated with animals and plants (i.e., microbiomes) are implicated in the day-to-day functioning of their hosts. However, we do not yet know how these host-microbiome associations evolve. In this paper, we develop a computational framework for modelling the evolution of microbiomes. The models we use are neutral, and assume that microbes have no effect on the reproductive success of the hosts. Therefore, the patterns of microbiome diversity that we obtain in our simulations require a minimal set of assumptions relating to how microbes are acquired and how they are assembled in the environment. Despite the simplicity of our models, they help us understand the patterns seen in empirical data, and they allow us to build more complex hypotheses of host-microbe dynamics.

Is it possible to predict how individuals will perform before the teamwork begins? Research by former cyclist Hugh Trenchard and others suggests that the mathematics of pelotons – the groups and bunches that cyclists form during a race – could be key to understanding how cyclists behave as a collective entity. While these collective dynamics may not tell us who will win the Tour de France, they do have broader applications to a variety of other biological systems. Here, Trenchard tells us more about his research, and how it might even provide some clues to the origin of life.

The morphology of urban agglomeration is studied here in the context of information exchange between different spatio-temporal scales. Urban migration to and from cities is characterised as non-random and following non-random pathways. Cities are multidimensional non-linear phenomena, so understanding the relationships and connectivity between scales is important in determining how the interplay of local/regional urban policies may affect the distribution of urban settlements. In order to quantify these relationships, we follow an information-theoretic approach using the concept of Transfer Entropy. Our analysis is based on a stochastic urban fractal model, which mimics growing urban settlements and migration waves. The results indicate how different policies could affect urban morphology in terms of the information generated across geographical scales.

In studying fundamental physical limits and properties of computational processes, one is faced with the challenges of interpreting primitive information-processing functions through well-defined information-theoretic as well as thermodynamic quantities. In particular, transfer entropy, characterizing the function of computational transmission and its predictability, is known to peak near critical regimes. We focus on a thermodynamic interpretation of transfer entropy aiming to explain the underlying critical behavior by associating information flows intrinsic to computational transmission with particular physical fluxes. Specifically, in isothermal systems near thermodynamic equilibrium, the gradient of the average transfer entropy is shown to be dynamically related to Fisher information and the curvature of the system's entropy. This relationship explicitly connects the predictability, sensitivity, and uncertainty of computational processes intrinsic to complex systems and allows us to consider thermodynamic interpretations of several important extreme cases and trade-offs.

Classic life history models are often based on optimization algorithms, focusing on the adaptation of survival and reproduction to the environment, while neglecting frequency dependent interactions in the population. Evolutionary game theory, on the other hand, studies frequency dependent strategy interactions, but usually omits life history and the demographic structure of the population. Here we show how an integration of both aspects can substantially alter the underlying evolutionary dynamics.

Social and biological contagions are influenced by the spatial embeddedness of networks. Historically, many epidemics spread as a wave across part of the Earth’s surface; however, in modern contagions long-range edges—for example, due to airline transportation or communication media—allow clusters of a contagion to appear in distant locations. Here we study the spread of contagions on networks through a methodology grounded in topological data analysis and nonlinear dimension reduction. We construct ‘contagion maps’ that use multiple contagions on a network to map the nodes as a point cloud. By analysing the topology, geometry, and dimensionality of manifold structure in such point clouds, we reveal insights that aid in the modelling, forecasting, and control of spreading processes. Our approach also highlights contagion maps as a viable tool for inferring low-dimensional structure in networks.

In his 1942 short story 'Runaround', science-fiction writer Isaac Asimov introduced the Three Laws of Robotics — engineering safeguards and built-in ethical principles that he would go on to use in dozens of stories and novels. They were: 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law; and 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. Fittingly, 'Runaround' is set in 2015. Real-life roboticists are citing Asimov's laws a lot these days: their creations are becoming autonomous enough to need that kind of guidance. In May, a panel talk on driverless cars at the Brookings Institution, a think tank in Washington DC, turned into a discussion about how autonomous vehicles would behave in a crisis. What if a vehicle's efforts to save its own passengers by, say, slamming on the brakes risked a pile-up with the vehicles behind it? Or what if an autonomous car swerved to avoid a child, but risked hitting someone else nearby?

Concerns about the invisibility of what robots determine could be better resolved by enabling communication between robots and humans. By extension of the 'machine learning' approach, 'learning human language' seems worth attempting :)

It is common practice to partition complex workflows into separate channels in order to speed up their completion times. When this is done within a distributed environment, unavoidable fluctuations make individual realizations depart from the expected average gains. We present a method for breaking any complex workflow into several workloads in such a way that once their outputs are joined, their full completion takes less time and exhibits smaller variance than when running in only one channel. We demonstrate the effectiveness of this method in two different scenarios: the optimization of a convex function and the transmission of a large computer file over the Internet.

Cascades in multiplex financial networks with debts of different seniority

The seniority of debt, which determines the order in which a bankrupt institution repays its debts, is an important and sometimes contentious feature of financial crises, yet its impact on systemwide stability is not well understood. We capture seniority of debt in a multiplex network, a graph of nodes connected by multiple types of edges. Here an edge between banks denotes a debt contract of a certain level of seniority. Next we study cascading default. There exist multiple kinds of bankruptcy, indexed by the highest level of seniority at which a bank cannot repay all its debts. Self-interested banks would prefer that all their loans be made at the most senior level. However, mixing debts of different seniority levels makes the system more stable in that it shrinks the set of network densities for which bankruptcies spread widely. We compute the optimal ratio of senior to junior debts, which we call the optimal seniority ratio, for two uncorrelated Erdős-Rényi networks. If institutions erode their buffer against insolvency, then this optimal seniority ratio rises; in other words, if default thresholds fall, then more loans should be senior. We generalize the analytical results to arbitrarily many levels of seniority and to heavy-tailed degree distributions.

We introduce a new kind of percolation on finite graphs called jigsaw percolation. This model attempts to capture networks of people who innovate by merging ideas and who solve problems by piecing together solutions. Each person in a social network has a unique piece of a jigsaw puzzle. Acquainted people with compatible puzzle pieces merge their puzzle pieces. More generally, groups of people with merged puzzle pieces merge if the groups know one another and have a pair of compatible puzzle pieces. The social network solves the puzzle if it eventually merges all the puzzle pieces. For an Erdős–Rényi social network with n vertices and edge probability p_n, we define the critical value p_c(n) for a connected puzzle graph to be the p_n for which the chance of solving the puzzle equals 1/2. We prove that for the n-cycle (ring) puzzle, p_c(n)=Θ(1/log n), and for an arbitrary connected puzzle graph with bounded maximum degree, p_c(n)=O(1/log n) and ω(1/n^b) for any b>0. Surprisingly, with probability tending to 1 as the network size increases to infinity, social networks with a power-law degree distribution cannot solve any bounded-degree puzzle. This model suggests a mechanism for recent empirical claims that innovation increases with social density, and it might begin to show which social networks stifle creativity and which networks collectively innovate.
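The merge rule described above can be simulated directly: maintain clusters of people, and repeatedly merge any two clusters that are joined by at least one social edge and at least one puzzle edge. The following is a minimal sketch of that dynamic under my reading of the abstract (it is not the authors' code, and the function name and edge-list representation are illustrative assumptions); the puzzle is solved when a single cluster remains.

```python
def jigsaw_percolates(n, social_edges, puzzle_edges):
    """Simulate jigsaw percolation on n nodes: repeatedly merge two clusters
    joined by both a social edge and a puzzle edge (possibly different pairs).
    Returns True if the process ends with one cluster (puzzle solved)."""
    parent = list(range(n))  # union-find forest over people/pieces

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]  # path halving
            u = parent[u]
        return u

    clusters = n
    merged = True
    while merged and clusters > 1:
        merged = False
        # cluster-level pairs currently connected by a social edge
        social = {tuple(sorted((find(u), find(v)))) for u, v in social_edges}
        social = {p for p in social if p[0] != p[1]}
        for u, v in puzzle_edges:
            ru, rv = find(u), find(v)
            if ru != rv and tuple(sorted((ru, rv))) in social:
                parent[ru] = rv
                clusters -= 1
                merged = True
                break  # recompute cluster-level social pairs after each merge
    return clusters == 1
```

For instance, when the social and puzzle graphs are both the same connected graph, the puzzle always gets solved, while a social edge and a puzzle edge that join disjoint pairs of clusters never trigger a merge.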
