We study the time scales associated with diffusion processes that take place on multiplex networks, i.e., on a set of networks linked through interconnected layers. To this end, we propose the construction of a supra-Laplacian matrix, which consists of a dimensional lifting of the Laplacian matrix of each layer of the multiplex network. We use perturbative analysis to reveal analytically the structure of eigenvectors and eigenvalues of the complete network in terms of the spectral properties of the individual layers. The spectrum of the supra-Laplacian allows us to understand the physics of diffusionlike processes on top of multiplex networks.
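The supra-Laplacian construction can be sketched numerically. A minimal two-layer example (function names and the toy adjacency matrices are ours, not the paper's): the supra-Laplacian places each layer's Laplacian on a diagonal block and couples replica nodes with an interlayer diffusion constant Dx, and its smallest nonzero eigenvalue sets the slowest diffusion time scale.

```python
import numpy as np

def laplacian(A):
    """Combinatorial Laplacian L = D - A of an adjacency matrix."""
    return np.diag(A.sum(axis=1)) - A

def supra_laplacian(A1, A2, Dx):
    """Supra-Laplacian of a two-layer node-aligned multiplex: intralayer
    Laplacians on the diagonal blocks, interlayer diffusion Dx coupling
    each node to its replica in the other layer."""
    N = A1.shape[0]
    I = np.eye(N)
    return np.block([[laplacian(A1) + Dx * I, -Dx * I],
                     [-Dx * I, laplacian(A2) + Dx * I]])

# Toy example: a triangle and a path on the same three nodes.
A1 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)
A2 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
L = supra_laplacian(A1, A2, Dx=1.0)
evals = np.sort(np.linalg.eigvalsh(L))
# evals[0] is ~0; evals[1] (the algebraic connectivity) sets the slowest
# diffusion time scale tau = 1 / evals[1].  2*Dx is always an eigenvalue,
# with eigenvector (+1, ..., +1, -1, ..., -1) across the two layers.
```

The block only illustrates the construction; the paper's perturbative analysis of how evals[1] interpolates between the layer spectra as Dx varies is not reproduced here.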

New research sheds light on how "animal personalities" - inter-individual differences in animal behaviour - can drive the collective behaviour and functioning of animal groups such as schools of fish, including their cohesion.

Network neuroscience is the emerging discipline concerned with investigating the complex patterns of interconnections found in neural systems, and with identifying principles with which to understand them. Within this discipline, one particularly powerful approach is network generative modeling, in which wiring rules are algorithmically implemented to produce synthetic network architectures with the same properties as observed in empirical network data. Successful models can highlight the principles by which a network is organized and potentially uncover the mechanisms by which it grows and develops. Here we review the prospects and promise of generative models for network neuroscience. We begin with a primer on network generative models, with a discussion of compressibility and predictability, utility in intuiting mechanisms, and a short history on their use in network science broadly. We then discuss generative models in practice and application, paying particular attention to the critical need for cross-validation. Next, we review generative models of biological neural networks, both at the cellular and large-scale level, and across a variety of species including \emph{C. elegans}, \emph{Drosophila}, mouse, rat, cat, macaque, and human. We offer a careful treatment of a few relevant distinctions, including differences between generative models and null models, sufficiency and redundancy, inferring and claiming mechanism, and functional and structural connectivity. We close with a discussion of future directions, outlining exciting frontiers both in empirical data collection efforts as well as in method and theory development that, together, further the utility of the generative network modeling approach for network neuroscience.

Just what is information? For such an intuitive idea, its precise nature proved remarkably hard to pin down. For centuries, it seemed to hover somewhere in a half-world between the visible and the unseen, the physical and the evanescent, the enduring medium and its fleeting message. It haunted the ancients as much as it did Claude Shannon and his Bell Labs colleagues in New York and New Jersey, who were trying to engirdle the world with wires and telecoms cables in the mid-20th century.

Classic economic science is reaching the limits of its explanatory powers. Complexity science uses an increasingly large set of different methods to analyze physical, biological, cultural, social, and economic factors, providing a broader understanding of the socio-economic dynamics involved in the development of nations worldwide. The use of tools developed in the natural sciences, such as thermodynamics, evolutionary biology, and the analysis of complex systems, helps us to integrate aspects formerly reserved to the social sciences with the natural sciences. This integration reveals details of the synergistic mechanisms that drive the evolution of societies. By doing so, we increase the available alternatives for economic analysis and provide ways to increase the efficiency of decision-making mechanisms in complex social contexts. This interdisciplinary analysis seeks to deepen our understanding of why chronic poverty is still common, and how the emergence of prosperous technological societies can be made possible. This understanding should increase the chances of achieving a sustainable, harmonious, and prosperous future for humanity. The analysis shows that complex fundamental economic problems require multidisciplinary approaches and rigorous application of the scientific method if we want to significantly advance our understanding of them. It reveals viable routes for the generation of wealth and the reduction of poverty, but also huge gaps in our knowledge about the dynamics of our societies and about the means to guide social development towards a better future for all.

The Wealth of Nations: Complexity Science for an Interdisciplinary Approach in Economics

Klaus Jaffe

Most of what is called complexity is an expression of organization. Organization is not necessarily complete determinism; it is just a framework that presumes there is some rationale behind what is observed, rather than merely pointing to the momentary difficulty one has in explaining the phenomena.

Emerging properties are generated by physics principles. One of them is the Constructal Law.

In economics in particular, income inequality is partly produced by the very organization of the flow of goods into hierarchical configurations.

The arrangements of particles and forces in granular materials and particulate matter have a complex organization on multiple spatial scales that range from local structures to mesoscale and system-wide ones. This multiscale organization can affect how a material responds or reconfigures when exposed to external perturbations or loading. The theoretical study of particle-level, force-chain, domain, and bulk properties requires the development and application of appropriate mathematical, statistical, physical, and computational frameworks. Traditionally, granular materials have been investigated using particulate or continuum models, each of which tends to be implicitly agnostic to multiscale organization. Recently, tools from network science have emerged as powerful approaches for probing and characterizing heterogeneous architectures in complex systems, and a diverse set of methods have yielded fascinating insights into granular materials. In this paper, we review work on network-based approaches to studying granular materials (and particulate matter more generally) and explore the potential of such frameworks to provide a useful description of these materials and to enhance understanding of the underlying physics. We also outline a few open questions and highlight particularly promising future directions in the analysis and design of granular materials and other particulate matter.

Asimov's three laws of robotics, which were shaped in the literary work of Isaac Asimov (1920–1992) and others, define a crucial code of behavior that fictional autonomous robots must obey as a condition for their integration into human society. While general implementation of these laws in robots is widely considered impractical, limited-scope versions have been demonstrated and have proven useful in spurring scientific debate on aspects of safety and autonomy in robots and intelligent systems. In this work, we use Asimov's laws to examine these notions in molecular robots fabricated from DNA origami. We successfully programmed these robots to obey, by means of interactions between individual robots in a large population, an appropriately scoped variant of Asimov's laws, and even emulate the key scenario from Asimov's story “Runaround,” in which a fictional robot gets into trouble despite adhering to the laws. Our findings show that abstract, complex notions can be encoded and implemented at the molecular scale, when we understand robots on this scale on the basis of their interactions.

We define rules for cellular automata played on quasiperiodic tilings of the plane arising from the multigrid method in such a way that these cellular automata are isomorphic to Conway's Game of Life. Although these tilings are nonperiodic, determining the next state of each tile is a local computation, requiring only knowledge of the local structure of the tiling and the states of finitely many nearby tiles. As an example, we show a version of a "glider" moving through a region of a Penrose tiling. This constitutes a potential theoretical framework for a method of executing computations in non-periodically structured substrates such as quasicrystals.
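The quasiperiodic construction itself is beyond a short sketch, but the Game of Life rule that these cellular automata are isomorphic to is easy to state. A minimal unbounded-grid implementation with the standard glider (all names ours; the periodic-grid version only, not the multigrid tilings of the paper):

```python
from itertools import product

def step(live):
    """One synchronous Game-of-Life update on an unbounded grid.
    `live` is a set of (row, col) coordinates of live cells."""
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                nb = (x + dx, y + dy)
                counts[nb] = counts.get(nb, 0) + 1
    # Birth on exactly 3 live neighbors; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# The standard glider: after four generations it reappears shifted one
# cell diagonally (down and to the right in this orientation).
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
```

Note that the update is purely local, which is the property the paper carries over to the nonperiodic setting: each tile's next state depends only on finitely many nearby tiles.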

We consider the problem of constructing a physical system that evolves according to some specified conditional distribution. We restrict attention to physical systems that can be modeled as a time-inhomogeneous continuous-time Markov chain (CTMC) over a finite state space, which includes many of the systems considered in stochastic thermodynamics. Examples range from constructing a logical gate to be used in a digital circuit to constructing an entire digital computer. It is known that many conditional distributions over a space X cannot be implemented by any CTMC, even approximately. This raises the question of how they can arise in the real world. Here we focus on the case where the conditional distribution is a (single-valued) function f. Any f over a set of "visible" states X can be implemented --- if the system has access to some additional "hidden" states not in X. Motivated by engineering considerations, we consider a natural decomposition of any such CTMC into a sequence of "hidden" timesteps, demarcated by changes in the set of allowed transitions between states. We demonstrate a tradeoff between the number of hidden states and the number of hidden timesteps needed to implement any given f using a CTMC, analogous to space/time tradeoffs in theoretical computer science.

Historically, health has played an important role in network research, and vice versa (Valente, 2010). This intersection has contributed to how we understand human health as well as the development of network concepts, theory, and methods. Throughout, dynamics have featured prominently. Even when limited to static methods, the emphasis in each of these fields on providing causal explanations has led researchers to draw upon theories that are dynamic, often explicitly. Here, we elaborate a variety of ways to conceptualize the relationship between health and network dynamics, show how these possibilities are reflected in the existing literature, highlight how the articles within this special issue expand that understanding, and finally, identify paths for future research to push this intersection forward.

In a world where Artificial Intelligence systems will decide about increasingly many issues, including life and death, how should autonomous systems faced with ethical dilemmas decide, and what is required from humans?

Social contact networks underlying epidemic processes in humans and animals are highly dynamic. The spreading of infections on such temporal networks can differ dramatically from spreading on static networks. We theoretically investigate the effects of concurrency, the number of neighbors that a node has at a given time point, on the epidemic threshold in the stochastic susceptible-infected-susceptible dynamics on temporal network models. We show that network dynamics can suppress epidemics (i.e., yield a higher epidemic threshold) when the node’s concurrency is low, but can also enhance epidemics when the concurrency is high. We analytically determine different phases of this concurrency-induced transition, and confirm our results with numerical simulations.
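A minimal stochastic sketch of SIS dynamics with tunable concurrency k, under simplifying assumptions of our own (partners are re-drawn uniformly at random every step; this is not the paper's exact temporal network model, and all names are ours):

```python
import random

def sis_temporal(n=200, k=2, beta=0.3, mu=0.1, steps=300, seed=1):
    """Stochastic SIS on a toy temporal network: at every time step each
    node contacts k randomly chosen partners (its concurrency), then
    infections (prob. beta per discordant contact) and recoveries
    (prob. mu) are applied. Returns the final fraction infected."""
    rng = random.Random(seed)
    infected = set(rng.sample(range(n), n // 10))  # 10% initial seeds
    for _ in range(steps):
        # This step's contacts: k random partners per node.
        contacts = [(i, rng.randrange(n)) for i in range(n) for _ in range(k)]
        newly = set()
        for i, j in contacts:
            if i in infected and j not in infected and rng.random() < beta:
                newly.add(j)
            elif j in infected and i not in infected and rng.random() < beta:
                newly.add(i)
        recovered = {i for i in infected if rng.random() < mu}
        infected = (infected | newly) - recovered
    return len(infected) / n
```

Sweeping beta for small versus large k in a simulation of this kind is one way to see the threshold shift the authors analyze, though their analytical phase diagram requires their specific model.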

The electricity-eating microbes that the researchers were hunting for belong to a larger class of organisms that scientists are only beginning to understand. They inhabit largely uncharted worlds: the bubbling cauldrons of deep sea vents; mineral-rich veins deep beneath the planet’s surface; ocean sediments just a few inches below the deep seafloor. The microbes represent a segment of life that has been largely ignored, in part because their strange habitats make them incredibly difficult to grow in the lab.

Computationalism aspires to provide a comprehensive theory of life and mind. It fails in this task because it lacks the conceptual tools to address the problem of meaning. I argue that a meaningful perspective is enacted by an individual with a potential that is intrinsic to biological existence: death. Life matters to such an individual because it must constantly create the conditions of its own existence, which is unique and irreplaceable. For that individual to actively adapt, rather than to passively disintegrate, expresses a value inherent in its way of life, which is the ultimate source of more refined forms of normativity. This response to the problem of meaning will not satisfy those searching for a functionalist or logical solution, but on this view such a solution will not be forthcoming. As an intuition pump for this alternative perspective I introduce two ancient foreign worldviews that assign a constitutive role to death. Then I trace the emergence of a similar conception of mortality from the cybernetics era to the ongoing development of enactive cognitive science. Finally, I analyze why orthodox computationalism has failed to grasp the role of mortality in this constitutive way.

Life is Precious Because it is Precarious: Individuality, Mortality and the Problem of Meaning

Tom Froese

Representation and Reality in Humans, Other Living Organisms and Intelligent Machines, pp. 33–50. Part of the Studies in Applied Philosophy, Epistemology and Rational Ethics book series (SAPERE, volume 28).

Modern physics has hit a wall in a number of areas. Some proponents of information theory believe embracing it may help us to, say, sew up the rift between general relativity and quantum mechanics. Or perhaps it’ll aid in detecting and comprehending dark matter and dark energy, which combined are thought to make up 95% of the known universe. As it stands, we have no idea what they are. Ironically, some hard data is required in order to elevate information theory. Until then, it remains theoretical.

The results from urban scaling in recent years have held the promise of increased efficiency for societies that could actively control the distribution of their cities’ sizes. However, little evidence exists as to the factors that influence the level of urban unevenness, as expressed by the slope of the rank-size distribution, partly because the diversity of results found in the literature follows the heterogeneity of analysis specifications. In this study, I set up a meta-analysis of Zipf’s law which accounts for technical as well as topical factors of variations of Zipf’s coefficient. I found 86 studies publishing at least one empirical estimation of this coefficient and recorded their metadata into an open database. I regressed the 1962 corresponding estimates with variables describing the study and the estimation process as well as socio-demographic variables describing the territory under enquiry. A dynamic meta-analysis was also performed to look for factors of evolution of city size unevenness. The results of the most interesting models are presented in the article, whereas all analyses can be reproduced on a dedicated online platform. The results show that on average, 40% of the variation of Zipf’s coefficients is due to the technical choices. The main other variables associated with distinct evolutions are linked to the urbanisation process rather than the process of economic development and population growth. Finally, no evidence was found to support the effectiveness of past planning actions in modifying this urban feature.
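Estimating Zipf's coefficient from a rank-size distribution by the common log-log OLS regression can be sketched as follows (this is only one of the estimation specifications the meta-analysis compares; the function name is ours):

```python
import math

def zipf_coefficient(sizes):
    """Estimate Zipf's coefficient alpha by OLS of log(rank) on log(size):
    log r = c - alpha * log s, so alpha is minus the fitted slope."""
    s = sorted(sizes, reverse=True)
    xs = [math.log(v) for v in s]                     # log sizes
    ys = [math.log(r) for r in range(1, len(s) + 1)]  # log ranks
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

# An exactly Zipfian rank-size list s_r = S / r has coefficient 1.
cities = [1_000_000 / r for r in range(1, 101)]
```

Whether the regression runs log-rank on log-size or the reverse, includes a rank shift, or truncates the city sample are exactly the kinds of technical choices the study finds account for much of the variation in published coefficients.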

Temporal networks have been increasingly used to model a diversity of systems that evolve in time; for example human contact structures over which dynamic processes such as epidemics take place. A fundamental aspect of real-life networks is that they are sampled within temporal and spatial frames. Furthermore, one might wish to subsample networks to reduce their size for better visualization or to perform computationally intensive simulations. The sampling method may affect the network structure and thus caution is necessary to generalize results based on samples. In this paper, we study four sampling strategies applied to a variety of real-life temporal networks. We quantify the biases generated by each sampling strategy on a number of relevant statistics such as link activity, temporal paths and epidemic spread. We find that some biases are common in a variety of networks and statistics, but one strategy, uniform sampling of nodes, shows improved performance in most scenarios. Our results help researchers to better design network data collection protocols and to understand the limitations of sampled temporal network data.
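Uniform node sampling, the strategy the paper finds performs best in most scenarios, can be sketched on an event-list representation of a temporal network (the (t, u, v) tuple format and all names are our assumption):

```python
import random

def sample_nodes(events, fraction, seed=0):
    """Uniform node subsampling of a temporal network: keep a uniformly
    random fraction of the nodes and retain only the events whose
    endpoints were both kept. `events` is a list of (t, u, v) contacts."""
    nodes = sorted({u for t, u, v in events} | {v for t, u, v in events})
    rng = random.Random(seed)
    kept = set(rng.sample(nodes, max(1, int(fraction * len(nodes)))))
    return [(t, u, v) for t, u, v in events if u in kept and v in kept]

# Toy event list: contacts as (time, node, node).
events = [(0, 1, 2), (1, 2, 3), (2, 3, 4)]
```

Comparing statistics such as link activity or simulated epidemic sizes between the full event list and its subsamples is how the biases discussed in the paper are quantified.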

Amoeba, a computer platform inspired by the Tierra system, is designed to study the generation of self-replicating sequences of machine operations (opcodes) from a prebiotic world initially populated by randomly selected opcodes. Point mutations drive opcode sequences to become more fit as they compete for memory and CPU time. Significant features of the Amoeba system include the lack of artificial encapsulation (there is no write protection) and a computationally universal opcode basis set. Amoeba now includes two additional features: pattern-based addressing and injecting entropy into the system. It was previously thought such changes would make it highly unlikely that an ancestral replicator could emerge from a fortuitous combination of randomly selected opcodes. Instead, Amoeba shows a far richer emergence, exhibiting a self-organization phase followed by the emergence of self-replicators. First, the opcode basis set becomes biased. Second, short opcode building blocks are propagated throughout memory space. Finally, prebiotic building blocks can combine to form self-replicators. Self-organization is quantified by measuring the evolution of opcode frequencies, the size distribution of sequences, and the mutual information of opcode pairs.
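The mutual information of opcode pairs mentioned at the end can be computed with a standard plug-in estimator; a sketch in bits (names are ours, not Amoeba's actual code):

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Plug-in estimate (in bits) of the mutual information between the
    first and second symbol of each pair, e.g. adjacent opcode pairs.
    A value near 0 means the positions are independent; a high value
    means building blocks of correlated opcodes have emerged."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())
```

Tracking this quantity over time on pairs drawn from memory is one way self-organization of the opcode soup can be made visible before any replicator appears.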

The spread of opinions, memes, diseases, and "alternative facts" in a population depends both on the details of the spreading process and on the structure of the social and communication networks on which they spread. One feature that can change spreading dynamics substantially is heterogeneous behavior among different types of individuals in a social network. In this paper, we explore how anti-establishment nodes (e.g., hipsters) influence spreading dynamics of two competing products. We consider a model in which spreading follows a deterministic rule for updating node states in which an adjustable fraction pHip of the nodes in a network are hipsters, who always choose to adopt the product that they believe is the less popular of the two. The remaining nodes are conformists, who choose which product to adopt by considering only which products their immediate neighbors have adopted. We simulate our model on both synthetic and real networks, and we show that the hipsters have a major effect on the final fraction of people who adopt each product: even when only one of the two products exists at the beginning of the simulations, a very small fraction of hipsters in a network can still cause the other product to eventually become more popular. Our simulations also demonstrate that a time delay τ in the knowledge of the product distribution in a population has a large effect on the final distribution of product adoptions. Our simple model and analysis may help shed light on the road to success for anti-establishment choices in elections, as such success --- and qualitative differences in final outcomes between competing products, political candidates, and so on --- can arise rather generically from a small number of anti-establishment individuals and ordinary processes of social influence on normal individuals.
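The deterministic update rule can be sketched as follows. This is a simplified reading that omits the time delay τ and the paper's exact tie-breaking and initialization details (all names are ours):

```python
def update(adj, state, hipsters):
    """One synchronous step of the two-product model: conformists adopt
    the product held by the majority of their neighbors, hipsters the
    minority one; ties leave a node's current choice unchanged."""
    new = {}
    for v, nbrs in adj.items():
        a = sum(1 for u in nbrs if state[u] == 'A')
        b = sum(1 for u in nbrs if state[u] == 'B')
        if a == b:
            new[v] = state[v]
        elif v in hipsters:
            new[v] = 'B' if a > b else 'A'   # adopt the less popular product
        else:
            new[v] = 'A' if a > b else 'B'   # adopt the more popular product
    return new

# Toy path graph 0-1-2; everyone starts with product A, node 1 is a hipster.
adj = {0: [1], 1: [0, 2], 2: [1]}
state = {0: 'A', 1: 'A', 2: 'A'}
```

Even in this toy, the hipster introduces product B into an all-A population, which is the seed of the paper's main effect: a small hipster fraction can eventually make the initially absent product the more popular one.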

Hipsters on Networks: How a Small Group of Individuals Can Lead to an Anti-Establishment Majority

Jonas S. Juul, Mason A. Porter

Urban infrastructures have traditionally been mono-functional: water, sewage, and electricity are notable examples. Embedded with digital technologies, urban infrastructures have the potential to communicate with one another and become multi-functional platforms that integrate data gathering and actuation cycles. In this paper, we focus on public lighting infrastructures. Despite the technological development of lights, including LED technology, streetlights have been primarily treated as a mono-functional infrastructure. Based on case studies, we discuss the potential of reimagining streetlight infrastructure, and advance some initial proposals that focus on sensing and actuation cycles, which could transform this pervasive infrastructure into a digital urban platform.

Two generalizations of the traveling salesman problem in which sites change their position in time are presented. The way the rank of different trajectory lengths changes in time is studied using the rank diversity. We analyze the statistical properties of rank distributions and rank dynamics and give evidence that the shortest and longest trajectories are more predictable and robust to change, that is, more stable.
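Rank diversity, the statistic used here, counts how many distinct items occupied a given rank over time; a minimal sketch (names and the normalization by the number of time steps are our reading of the measure):

```python
def rank_diversity(rankings):
    """Rank diversity: for each rank k, the number of distinct items that
    occupied rank k across the time-ordered rankings, divided by the
    number of time steps. Low values mean a stable, predictable rank."""
    T = len(rankings)
    return [len({r[k] for r in rankings}) / T
            for k in range(len(rankings[0]))]

# Toy trajectories: the top rank is always 'a' (stable), while the
# lower ranks alternate between 'b' and 'c'.
rankings = [['a', 'b', 'c'], ['a', 'c', 'b'], ['a', 'b', 'c'], ['a', 'c', 'b']]
```

Applied to ranked trajectory lengths over time, low diversity at the extreme ranks is what the abstract describes as the shortest and longest trajectories being more stable.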

In the current hyperconnected era, modern Information and Communication Technology (ICT) systems form sophisticated networks where not only do people interact with other people, but also machines take an increasingly visible and participatory role. Such Human-Machine Networks (HMNs) are embedded in the daily lives of people, both for personal and professional use. They can have a significant impact by producing synergy and innovations. The challenge in designing successful HMNs is that they cannot be developed and implemented in the same manner as networks of machine nodes alone, or following a wholly human-centric view of the network. The problem requires an interdisciplinary approach. Here, we review current research of relevance to HMNs across many disciplines. Extending the previous theoretical concepts of socio-technical systems, actor-network theory, cyber-physical-social systems, and social machines, we concentrate on the interactions among humans and between humans and machines. We identify eight types of HMNs: public-resource computing, crowdsourcing, web search engines, crowdsensing, online markets, social media, multiplayer online games and virtual worlds, and mass collaboration. We systematically select literature on each of these types and review it with a focus on implications for designing HMNs. Moreover, we discuss risks associated with HMNs and identify emerging design and development trends.

Existing studies have developed different indices based on various approaches, including network connectivity, delay time, and flow capacity, estimating traffic reliability states from different angles. However, these indices mainly estimate traffic reliability from a single viewpoint and rarely consider the combined effect of city traffic dynamics and the underlying network structure. Based on percolation theory, Li et al. developed a traffic reliability index to address this issue (Proc. Natl. Acad. Sci. USA 112(3):669-672, 2015) [1]. Here we compare this percolation-based index with one of the well-known indices, the congestion delay index (CDI). Using real traffic data from Beijing and Shenzhen (two large cities in China), we compare the two indices in their macroscopic trends and microscopic extreme values. The two indices are found to capture the state of real-time traffic reliability from different considerations. Our results can be used for better evaluation of traffic system reliability and for mitigation measures against traffic jams.
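A percolation-style reliability index of the kind described can be sketched as follows: keep only roads whose relative speed exceeds a threshold q and find the largest q at which a giant functional cluster still spans the network (a simplified reading of the Li et al. index; the spanning criterion and all names are our assumptions):

```python
def giant_component_fraction(n, edges):
    """Fraction of the n nodes in the largest connected component,
    computed with a simple union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    sizes = {}
    for x in range(n):
        r = find(x)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / n

def percolation_index(n, weighted_edges, threshold=0.5):
    """Reliability index sketch: for each q in [0, 1], keep only edges
    (roads) whose relative speed w exceeds q; return the largest q at
    which the giant functional cluster still covers at least `threshold`
    of the network."""
    qc = 0.0
    for step in range(101):
        q = step / 100
        kept = [(u, v) for u, v, w in weighted_edges if w > q]
        if giant_component_fraction(n, kept) >= threshold:
            qc = q
    return qc
```

A higher index means traffic can degrade further before the functional road network fragments, which is the sense in which it measures reliability of the system rather than delay alone.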
