Abstract: An achievable rate region for line networks with edge and node capacity constraints and broadcast channels (BCs) is derived. The region is shown to be the capacity region if the BCs are orthogonal, deterministic, physically degraded, or packet erasure with one-bit feedback. If the BCs are physically degraded with additive Gaussian noise, then independent Gaussian inputs achieve capacity.

Abstract: Time series (TS) are employed in a variety of academic disciplines. In this paper we focus on extracting probability density functions (PDFs) from TS to gain insight into the underlying dynamic processes. In discussing this “extraction” problem, we consider two popular approaches, which we identify as histograms and Bandt–Pompe. We use an information-theoretic method to objectively compare the information content of the concomitant PDFs.
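As a concrete illustration of the two extraction approaches named above, here is a minimal sketch (not the paper's code) that builds an amplitude-histogram PDF and a Bandt–Pompe ordinal-pattern PDF from the same series and compares their Shannon entropies; the function names, the bin count, and the embedding dimension d = 3 are illustrative choices.

```python
# Sketch: two ways to extract a PDF from a time series -- an amplitude
# histogram and the Bandt-Pompe ordinal-pattern distribution -- compared
# via their Shannon entropies. Illustrative only, not the paper's code.
import math
from itertools import permutations

def histogram_pdf(ts, bins=8):
    """Amplitude histogram, normalized to a probability distribution."""
    lo, hi = min(ts), max(ts)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for x in ts:
        idx = min(int((x - lo) / width), bins - 1)
        counts[idx] += 1
    total = len(ts)
    return [c / total for c in counts]

def bandt_pompe_pdf(ts, d=3):
    """Ordinal-pattern distribution for embedding dimension d."""
    patterns = {p: 0 for p in permutations(range(d))}
    n = len(ts) - d + 1
    for i in range(n):
        window = ts[i:i + d]
        # the rank ordering of the window defines its ordinal pattern
        pattern = tuple(sorted(range(d), key=lambda k: window[k]))
        patterns[pattern] += 1
    return [c / n for c in patterns.values()]

def shannon_entropy(pdf):
    return -sum(p * math.log(p) for p in pdf if p > 0)

# Example: a monotone ramp spreads amplitude mass across histogram bins,
# but concentrates all ordinal probability on a single pattern, so its
# Bandt-Pompe entropy vanishes.
ts = [0.1 * i for i in range(100)]
print(shannon_entropy(histogram_pdf(ts)), shannon_entropy(bandt_pompe_pdf(ts)))
```

The example already shows why the two PDFs carry different information: the histogram sees only amplitudes, while Bandt–Pompe sees temporal ordering.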

Abstract: In recent years, there has been rapid progress on understanding Gaussian networks with multiple unicast connections, and new coding techniques have emerged. The essence of multi-source networks is how to efficiently manage the interference that arises from the transmission of other sessions. Classically, interference is removed by orthogonalization (in time or frequency). This means that the rate per session drops in inverse proportion to the number of sessions, suggesting that interference is a strong limiting factor in such networks. However, recently discovered interference management techniques have led to a paradigm shift: interference might not be quite as detrimental after all. The aim of this paper is to provide a review of these new coding techniques as they apply to the case of time-varying Gaussian networks with multiple unicast connections. Specifically, we review interference alignment and ergodic interference alignment for multi-source single-hop networks, and interference neutralization and ergodic interference neutralization for multi-source multi-hop networks. We mainly focus on the “degrees of freedom” perspective and also discuss an approximate capacity characterization.
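To make the contrast with orthogonalization concrete: for the time-varying K-user single-hop interference channel discussed in this review, interference alignment achieves (per the Cadambe–Jafar degrees-of-freedom result)

$$ R_{\mathrm{sum}}(\mathrm{SNR}) = \frac{K}{2}\,\log(\mathrm{SNR}) + o(\log \mathrm{SNR}), $$

i.e., each session retains 1/2 degree of freedom regardless of K, whereas orthogonalization leaves each session only a 1/K share of the channel.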

Abstract: Slope stability is an important problem in geotechnical engineering. This paper presents an approach to slope reliability analysis based on the maximum-entropy method. The key idea is to apply the maximum-entropy principle to estimate the probability density function (PDF) of the performance function subject to moment constraints. The performance function itself is formulated using the simplified Bishop method to estimate the slope failure probability. A numerical example is presented and compared with Monte Carlo simulation (MCS) and the advanced first-order second-moment method (AFOSM). The results demonstrate the accuracy and efficiency of the proposed method, which should be valuable for performing probabilistic analyses.
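As a minimal numerical sketch of the maximum-entropy principle invoked above (illustrative only; the paper's performance function and moment constraints are not reproduced here): among all densities with a given mean and variance, the Gaussian maximizes Shannon entropy, which can be checked on a grid against any competitor with the same moments, such as a variance-matched Laplace density.

```python
# Sketch: the maximum-entropy principle picks, among all densities
# matching given moment constraints, the one with the largest Shannon
# entropy. For fixed mean and variance on the real line the maximizer
# is the Gaussian; we verify this numerically on a grid.
import math

def entropy(pdf, dx):
    return -sum(p * math.log(p) * dx for p in pdf if p > 0)

def moments(pdf, xs, dx):
    m1 = sum(p * x * dx for p, x in zip(pdf, xs))
    m2 = sum(p * x * x * dx for p, x in zip(pdf, xs))
    return m1, m2

# symmetric grid on [-8, 8]
dx = 0.01
xs = [-8 + i * dx for i in range(1601)]

# Gaussian with mean 0, variance 1
gauss = [math.exp(-x * x / 2) / math.sqrt(2 * math.pi) for x in xs]

# competitor: Laplace density scaled to variance 1 (scale b = 1/sqrt(2))
b = 1 / math.sqrt(2)
laplace = [math.exp(-abs(x) / b) / (2 * b) for x in xs]

# both satisfy the same moment constraints (mean 0, second moment 1) ...
print(moments(gauss, xs, dx), moments(laplace, xs, dx))
# ... but the Gaussian attains the larger entropy, as the principle predicts
print(entropy(gauss, dx), entropy(laplace, dx))
```

The same logic underlies the method in the abstract: the moment constraints come from the performance function, and the maximum-entropy density consistent with them is used to estimate the failure probability.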

Abstract: More than a century after its birth, Quantum Theory still eludes our understanding. If asked to describe it, we have to resort to abstract and ad hoc principles about complex Hilbert spaces. How is it possible that a fundamental physical theory cannot be described using the ordinary language of Physics? Here we offer a contribution to the problem from the angle of Quantum Information, providing a short non-technical presentation of a recent derivation of Quantum Theory from information-theoretic principles. The broad picture emerging from the principles is that Quantum Theory is the only standard theory of information that is compatible with the purity and reversibility of physical processes.

Abstract: Theoretically, the concepts of energy, entropy, exergy and embodied energy are founded in the fields of thermodynamics and physics. Yet, over decades these concepts have been applied in numerous fields of science and engineering, playing a key role in the analysis of processes, systems and devices in which energy transfers and energy transformations occur. The research reported here aims to demonstrate, in terms of sustainability, the usefulness of the embodied energy and exergy concepts for analyzing electric devices which convert energy, particularly the electromagnet. This study relies on a dualist view, incorporating technical and environmental dimensions. The information provided by energy assessments is shown to be less useful than that provided by exergy assessments, and can even be misleading. The electromagnet force and torque (representing the driving force of output exergy), accepted as both environmental and technical quantities, are expressed as a function of the electric current and the magnetic field, supporting the need to discern interrelations between science and the environment. This research suggests that a useful step in assessing the viability of electric devices in concert with ecological systems might be to view the magnetic flux density B and the electric current intensity I as environmental parameters. In line with this idea, the study encompasses an overview of potential human health risks and effects of extremely low frequency electromagnetic fields (ELF EMFs) caused by the operation of electric systems. It is concluded that exergy has a significant role to play in evaluating and increasing the efficiencies of electrical technologies and systems. This article also aims to demonstrate the need for joint efforts by researchers in electric and environmental engineering, and in medicine and health fields, for enhancing knowledge of the impacts of environmental ELF EMFs on humans and other life forms.
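As one standard instance of expressing the driving quantities through I and B (textbook magnetostatics, not necessarily the exact formulation used in this study): the force on a straight conductor of length l carrying current I perpendicular to a field of flux density B, and the torque on an N-turn coil of area A at angle θ to the field, are

$$ F = B I l, \qquad \tau = N I A B \sin\theta, $$

which illustrates how force and torque, and hence the output exergy they drive, are parameterized directly by B and I.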

Abstract: Infrared signature management (IRSM) has been a primary aeronautical concern for over 50 years. Most strategies and technologies are limited by the second law of thermodynamics. In this article, IRSM is considered in light of theoretical developments over the last 15 years that have put the absolute status of the second law into doubt and that might open the door to a new class of broadband IR stealth and cloaking techniques. Following a brief overview of IRSM and its current thermodynamic limitations, theoretical and experimental challenges to the second law are reviewed. One proposal is treated in detail: a high power density, solid-state power source to convert thermal energy into electrical or chemical energy. Next, second-law based infrared signature management (SL-IRSM) strategies are considered for two representative military scenarios: an underground installation and a SL-based jet engine. It is found that SL-IRSM could be technologically disruptive across the full spectrum of IRSM modalities, including camouflage, surveillance, night vision, target acquisition, tracking, and homing.

Abstract: Classical computing has well-established formalisms for specifying, refining, composing, proving, and otherwise reasoning about computations. These formalisms have matured over the past 70 years or so. Unconventional Computing includes the use of novel kinds of substrates (from black holes and quantum effects, through to chemicals, biomolecules, and even slime moulds) to perform computations that do not conform to the classical model. Although many of these unconventional substrates can be coerced into performing classical computation, this is not how they “naturally” compute. Our ability to exploit unconventional computing is partly hampered by a lack of corresponding programming formalisms: we need models for building, composing, and reasoning about programs that execute in these substrates. What might, say, a slime mould programming language look like? Here I outline some of the issues and properties of these unconventional substrates that need to be addressed to find “natural” approaches to programming them. Important concepts include embodied real values, processes and dynamical systems, generative systems and their meta-dynamics, and embodied self-reference.

Abstract: Autism is a brain disorder involving social, memory, and learning deficits that normally develops prenatally or early in childhood. Frustratingly, substantial research investment has so far failed to identify the cause of autism. While twin concordance studies indicate a strong genetic component, the alarming rise in the incidence of autism over the last three decades suggests that environmental factors also play a key role. This dichotomy can be explained if we invoke a heritable epigenetic effect as the primary factor. Researchers are just beginning to realize the huge significance of epigenetic effects taking place during gestation in influencing phenotypic expression. Here, we propose the novel hypothesis that sulfate deficiency in both the mother and the child, brought on mainly by excess exposure to environmental toxins and inadequate sunlight exposure to the skin, leads to widespread hypomethylation in the fetal brain, with devastating consequences. We show that many seemingly disparate observations regarding serum markers, neuronal pathologies, and nutritional deficiencies associated with autism can be integrated to support our hypothesis.

Abstract: We show how conformal relativity is related to Brans–Dicke theory and to low-energy-effective superstring theory. Conformal relativity, or the Hoyle–Narlikar theory, is invariant with respect to conformal transformations of the metric. We show that the conformal relativity action is equivalent to the transformed Brans–Dicke action for ω = -3/2 (which is the border between a standard scalar field and a ghost), in contrast to the reduced (graviton-dilaton) low-energy-effective superstring action, which corresponds to the Brans–Dicke action with ω = -1. We show that, as in ekpyrotic/cyclic models, the transition through the singularity in conformal cosmology in the string frame takes place in the weak coupling regime. We also find interesting self-duality and duality relations for the graviton-dilaton actions.
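For reference, the Brans–Dicke action to which the values ω = -3/2 and ω = -1 refer is, in its standard form (conventions may differ from the paper's):

$$ S_{\rm BD} = \frac{1}{16\pi} \int d^4x \,\sqrt{-g}\, \left( \phi R - \frac{\omega}{\phi}\, g^{\mu\nu}\, \partial_\mu \phi \, \partial_\nu \phi \right), $$

so that ω = -3/2 places the scalar kinetic term exactly at the boundary between a standard field and a ghost, as stated above.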

Abstract: To understand the accelerating universe discovered observationally in 1998, we develop the scalar-tensor theory of gravitation originally due to Jordan, extended only minimally. The unique role of the conformal transformation and frames is discussed particularly from a physical point of view. We show that the theory provides a simple and natural way of understanding the core of the measurements, Λ_obs ∼ t_0^−2, for the observed values of the cosmological constant and today’s age of the universe, both expressed in Planckian units. According to this scenario of a decaying cosmological constant, Λ_obs is this small only because we are old, not because we fine-tune the parameters. It also follows that the scalar field is simply the pseudo Nambu–Goldstone boson of broken global scale invariance, based on the way astronomers and astrophysicists measure the expansion of the universe in reference to the microscopic length units. A rather phenomenological trapping mechanism is assumed for the scalar field around the epoch of mini-inflation as observed, still maintaining the unmistakable behavior of the scenario stated above. Experimental searches for the scalar field, as light as ∼ 10^−9 eV, as part of the dark energy, are also discussed.
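The scale of the decaying-cosmological-constant claim can be checked by order-of-magnitude arithmetic (round values, not taken from the paper): with today's age t_0 ≈ 1.4 × 10^10 yr ≈ 4.3 × 10^17 s and the Planck time t_P ≈ 5.4 × 10^−44 s,

$$ \Lambda_{\rm obs} \sim t_0^{-2} \approx \left( 8 \times 10^{60} \right)^{-2} \approx 10^{-122} \quad \text{(Planckian units)}, $$

which is indeed the observed order of magnitude of the cosmological constant, with no fine-tuning beyond the age of the universe itself.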