Entropy: http://mdpi.com/journal/entropy
Latest open access articles published in Entropy at http://mdpi.com/journal/entropy

http://mdpi.com/1099-4300/17/8/5522
This paper outlines a thermodynamic theory of biological evolution. Beginning with a brief summary of the parallel histories of the modern evolutionary synthesis (MES) and thermodynamics, we use four physical laws and processes (the first and second laws of thermodynamics, diffusion and the maximum entropy production principle) to frame the theory. Given that open systems such as ecosystems will move towards maximizing the dispersal of energy, we expect biological diversity to increase towards a level, Dmax, representing maximum entropic production (Smax). Based on this theory, we develop a mathematical model to predict diversity over the last 500 million years. This model combines diversification, post-extinction recovery and the likelihood of discovery in the fossil record. We compare the output of this model with that of the observed fossil record. The model predicts that life diffuses into available energetic space (ecospace) towards a dynamic equilibrium, driven by increasing entropy within the genetic material. This dynamic equilibrium is punctuated by extinction events, which are followed by restoration of Dmax through diffusion into available ecospace. Finally, we compare and contrast our thermodynamic theory with the MES in relation to a number of important characteristics of evolution (progress, evolutionary tempo, form versus function, biosphere architecture, competition and fitness).

Entropy 2015, 17(8), 5522–5548; ISSN 1099-4300; doi:10.3390/e17085522; published 31 July 2015. Keith Skene.

http://mdpi.com/1099-4300/17/8/5503
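The diffusion-towards-Dmax dynamic described in this abstract can be sketched numerically. The following is a minimal illustration, not the authors' model: the function name, growth rate, D_max and the extinction schedule are all invented for the example.

```python
# A minimal sketch (all parameter values illustrative, not from the paper):
# diversity diffuses logistically toward a maximum level d_max and is
# punctuated by extinction events that each remove a fraction of diversity.
def simulate_diversity(d0=10.0, d_max=1000.0, r=0.05, steps=500,
                       extinctions=None):
    """extinctions maps a time step to the fraction of diversity lost then."""
    extinctions = extinctions or {200: 0.3, 350: 0.6}
    d, traj = d0, []
    for t in range(steps):
        if t in extinctions:
            d *= 1.0 - extinctions[t]     # abrupt loss at an extinction event
        d += r * d * (1.0 - d / d_max)    # diffusion into available ecospace
        traj.append(d)
    return traj
```

After each simulated extinction the trajectory climbs back towards d_max, mimicking the post-extinction recovery the abstract describes.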
A Carnot-type engine whose working fluid changes phase during the heating and the cooling is modeled together with its thermal contact with the heat source. In a first optimization, the optimal high temperature of the cycle is determined so as to maximize the power output, with the temperature and the mass flow rate of the heat source given. This does not take into account the converter's internal fluid and its mass flow rate; it is an exogenous optimization of the converter. In a second, endogenous optimization, the isothermal heating corresponds only to the vaporization of the selected fluid, and maximizing the power output gives the optimal vaporization temperature of the cycled fluid. Using these two optima together connects the temperature of the heat source to the working fluid used. For a given temperature level, mass flow rate and composition of the waste heat to recover, an optimal fluid and its vaporization temperature are deduced. The optimal conditions also determine the internal mass flow rate and the compression ratio (pump size). The optimum corresponds to the maximum of the power output and must be weighed against the environmental impact of the fluid and the technological constraints.

Entropy 2015, 17(8), 5503–5521; doi:10.3390/e17085503; published 31 July 2015. Mathilde Blaise, Michel Feidt, Denis Maillet.

http://mdpi.com/1099-4300/17/8/5472
Current approaches to characterize the complexity of dynamical systems usually rely on state-space trajectories. In this article, we instead focus on causal structure, treating discrete dynamical systems as directed causal graphs, i.e., systems of elements implementing local update functions. This allows us to characterize the system's intrinsic cause-effect structure by applying the mathematical and conceptual tools developed within the framework of integrated information theory (IIT). In particular, we assess the number of irreducible mechanisms (concepts) and the total amount of integrated conceptual information Φ specified by a system. We analyze: (i) elementary cellular automata (ECA); and (ii) small, adaptive logic-gate networks ("animats"), similar to ECA in structure but evolving by interacting with an environment. We show that, in general, an integrated cause-effect structure with many concepts and high Φ is likely to have high dynamical complexity. Importantly, while a dynamical analysis describes what is "happening" in a system from the extrinsic perspective of an observer, the analysis of its cause-effect structure reveals what a system "is" from its own intrinsic perspective, exposing its dynamical and evolutionary potential under many different scenarios.

Entropy 2015, 17(8), 5472–5502; doi:10.3390/e17085472; published 31 July 2015. Larissa Albantakis, Giulio Tononi.

http://mdpi.com/1099-4300/17/8/5450
A scoring rule is a device for evaluation of forecasts that are given in terms of the probability of an event. In this article we will restrict our attention to binary forecasts. We may think of a scoring rule as a penalty attached to a forecast after the event has been observed. Thus a relatively small penalty will accrue if a high probability forecast that an event will occur is followed by occurrence of the event. On the other hand, a relatively large penalty will accrue if this forecast is followed by non-occurrence of the event. Meteorologists have been foremost in developing scoring rules for the evaluation of probabilistic forecasts. Here we use a published meteorological data set to illustrate diagrammatically the Brier score and the divergence score, and their statistical decompositions, as examples of Bregman divergences. In writing this article, we have in mind environmental scientists and modellers for whom meteorological factors are important drivers of biological, physical and chemical processes of interest. In this context, we briefly draw attention to the potential for probabilistic forecasting of the within-season component of nitrous oxide emissions from agricultural soils.

Entropy 2015, 17(8), 5450–5471; doi:10.3390/e17085450; published 31 July 2015. Gareth Hughes, Cairistiona Topp.

http://mdpi.com/1099-4300/17/8/5437
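As a concrete illustration of the Brier score discussed above (the forecast and outcome values here are invented for the example, not taken from the paper's data set):

```python
# Brier score for binary probability forecasts: the mean squared difference
# between forecast probabilities and observed 0/1 outcomes.
def brier_score(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

forecasts = [0.9, 0.8, 0.2, 0.1]   # forecast probabilities of the event
outcomes  = [1,   1,   0,   1]     # observed occurrences (1) / non-occurrences (0)
score = brier_score(forecasts, outcomes)   # 0.225 for these values
```

A perfect forecaster (probability 1 before every occurrence, 0 before every non-occurrence) scores 0; the worst possible score is 1.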
Cuprous oxide (Cu2O) nanocubes were synthesized by reducing Cu(OH)2 in the presence of sodium citrate at room temperature. The samples were characterized in detail by field-emission scanning electron microscopy, transmission electron microscopy, high-resolution transmission electron microscopy, X-ray powder diffraction, and N2 absorption (BET specific surface area). The equations for acquiring reaction kinetic parameters and surface thermodynamic properties of Cu2O nanocubes were deduced by establishing relations between the thermodynamic functions of the Cu2O nanocubes and those of bulk Cu2O. Combined with a thermochemical cycle, transition state theory, the basic theory of chemical thermodynamics, and in situ microcalorimetry, the reaction kinetic parameters, specific surface enthalpy, specific surface Gibbs free energy, and specific surface entropy of Cu2O nanocubes were successfully determined. We also introduce a universal route for obtaining reaction kinetic parameters and surface thermodynamic properties of nanomaterials.

Entropy 2015, 17(8), 5437–5449; doi:10.3390/e17085437; published 30 July 2015. Xingxing Li, Huanfeng Tang, Xianrui Lu, Shi Lin, Lili Shi, Zaiyin Huang.

http://mdpi.com/1099-4300/17/8/5422
Automated planning is a well-established field of artificial intelligence (AI), with applications in route finding, robotics and operational research, among others. The task of developing a plan is often solved by finding a path in a graph representing the search domain; a robust plan consists of numerous paths that can be chosen if the execution of the best (optimal) one fails. While robust planning for a single entity is rather simple, development of a robust plan for multiple entities in a common environment can lead to combinatorial explosion. This paper proposes a novel hybrid approach, joining heuristic search and the wavefront algorithm to provide a plan featuring robustness in areas where it is needed, while maintaining a low level of computational complexity.

Entropy 2015, 17(8), 5422–5436; doi:10.3390/e17085422; published 30 July 2015. Igor Wojnicki, Sebastian Ernst, Wojciech Turek.

http://mdpi.com/1099-4300/17/8/5402
This paper examines modern economic growth according to the multidimensional scaling (MDS) method and state space portrait (SSP) analysis. Electing GDP per capita as the main indicator for economic growth and prosperity, the long-run perspective from 1870 to 2010 identifies the main similarities among 34 world partners' modern economic growth and exemplifies the historical waving mechanics of the largest world economy, the USA. MDS reveals two main clusters among the European countries and their old offshore territories, and SSP identifies the Great Depression as a mild challenge to the American global performance, when compared to the Second World War and the 2008 crisis.

Entropy 2015, 17(8), 5402–5421; doi:10.3390/e17085402; published 29 July 2015. J. Machado, Maria Mata, António Lopes.

http://mdpi.com/1099-4300/17/8/5382
Entropy-based tools are commonly used to describe the dynamics of complex systems. In the last few decades, non-extensive statistics, based on Tsallis entropy, and multifractal techniques have been shown to be useful for characterizing long-range interaction and scaling behavior. In this paper, an approach based on generalized Tsallis dimensions is used for the formulation of mutual-information-related dependence coefficients in the multifractal domain. Different versions according to the normalizing factor, as well as to the inclusion of the non-extensivity correction term, are considered and discussed. An application to the assessment of dimensional interaction in the structural dynamics of a real seismic series is carried out to illustrate the usefulness and comparative performance of the measures introduced.

Entropy 2015, 17(8), 5382–5401; doi:10.3390/e17085382; published 29 July 2015. José Angulo, Francisco Esquivel.

http://mdpi.com/1099-4300/17/8/5353
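For readers unfamiliar with the Tsallis entropy underlying these generalized dimensions, the quantity itself is straightforward to compute. A small self-contained sketch (not code from the paper): S_q = (1 - Σ p_i^q) / (q - 1), which recovers the Shannon entropy in the limit q → 1.

```python
import math

# Tsallis entropy S_q of a probability distribution p; reduces to the
# Shannon entropy as q -> 1 (handled here as a special case).
def tsallis_entropy(p, q):
    if abs(q - 1.0) < 1e-12:
        return -sum(pi * math.log(pi) for pi in p if pi > 0)  # Shannon limit
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)
```

For the uniform distribution on four outcomes, S_1 = ln 4 while S_2 = 0.75, illustrating how q re-weights the contribution of rare versus common events.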
Multivariate nonlinear mixed-effects models (MNLMM) have received increasing use due to their flexibility for analyzing multi-outcome longitudinal data following possibly nonlinear profiles. This paper presents and compares five different iterative algorithms for maximum likelihood estimation of the MNLMM. These algorithmic schemes include the penalized nonlinear least squares coupled to the multivariate linear mixed-effects (PNLS-MLME) procedure, Laplacian approximation, the pseudo-data expectation conditional maximization (ECM) algorithm, the Monte Carlo EM algorithm and the importance sampling EM algorithm. When fitting the MNLMM, it is rather difficult to exactly evaluate the observed log-likelihood function in a closed-form expression, because it involves complicated multiple integrals. To address this issue, the corresponding approximations of the observed log-likelihood function under the five algorithms are presented. An expected information matrix of parameters is also provided to calculate the standard errors of model parameters. A comparison of computational performances is investigated through simulation and a real data example from an AIDS clinical study.

Entropy 2015, 17(8), 5353–5381; doi:10.3390/e17085353; published 29 July 2015. Wan-Lun Wang.

http://mdpi.com/1099-4300/17/8/5333
Statistical modeling is often used to measure the strength of evidence for or against hypotheses about given data. We have previously proposed an information-dynamic framework in support of a properly calibrated measurement scale for statistical evidence, borrowing some mathematics from thermodynamics, and showing how an evidential analogue of the ideal gas equation of state could be used to measure evidence for a one-sided binomial hypothesis comparison ("coin is fair" vs. "coin is biased towards heads"). Here we take three important steps forward in generalizing the framework beyond this simple example, albeit still in the context of the binomial model. We: (1) extend the scope of application to other forms of hypothesis comparison; (2) show that doing so requires only the original ideal gas equation plus one simple extension, which has the form of the Van der Waals equation; (3) begin to develop the principles required to resolve a key constant, which enables us to calibrate the measurement scale across applications, and which we find to be related to the familiar statistical concept of degrees of freedom. This paper thus moves our information-dynamic theory substantially closer to the goal of producing a practical, properly calibrated measure of statistical evidence for use in general applications.

Entropy 2015, 17(8), 5333–5352; doi:10.3390/e17085333; published 29 July 2015. Veronica Vieland, Sang-Cheol Seok.

http://mdpi.com/1099-4300/17/8/5304
It is shown that nonlinear interactions between boundary layers on adjacent corner surfaces produce deterministic streamwise spiral structures. The synchronization properties of nonlinear spectral velocity equations of Lorenz form yield clearly defined deterministic spiral structures at several downstream stations. The computational procedure includes Burg's method to obtain power spectral densities, yielding the available kinetic energy dissipation rates within the spiral structures. The singular value decomposition method is applied to the nonlinear time series solutions, yielding empirical entropies, from which empirical entropic indices are then extracted. The intermittency exponents obtained from the entropic indices allow the computation of the entropy generation through the spiral structures to the final dissipation of the fluctuating kinetic energy into background thermal energy, resulting in an increase in the entropy. The entropy generation rates through the spiral structures are compared with the entropy generation rates within an empirical turbulent boundary layer at several streamwise stations.

Entropy 2015, 17(8), 5304–5332; doi:10.3390/e17085304; published 27 July 2015. LaVar Isaacson.

http://mdpi.com/1099-4300/17/8/5288
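The idea of an "empirical entropy" obtained from a singular value decomposition can be illustrated in simplified form: normalize the squared singular values into a probability distribution and take its Shannon entropy. This is a generic sketch of the concept, not the paper's exact procedure.

```python
import math

# Shannon entropy of the distribution formed by normalizing the squared
# singular values of a decomposed data matrix (a generic illustration).
def empirical_entropy(singular_values):
    energies = [s * s for s in singular_values]
    total = sum(energies)
    p = [e / total for e in energies]
    return -sum(pi * math.log(pi) for pi in p if pi > 0)
```

A single dominant mode gives entropy near zero; energy spread evenly over k modes gives ln k, the maximum.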
The standard 3 + 3 or "modified Fibonacci" up-and-down (MF-UD) method of dose escalation is by far the most used design in dose-finding cancer trials. However, MF-UD has always shown inferior performance when compared with its competitors regarding the number of patients treated at optimal doses. A consequence of using less effective designs is that more patients are treated with doses outside the therapeutic window. In June 2012, the U.S. Food and Drug Administration (FDA) rejected the proposal to use Escalation with Overdose Control (EWOC), an established dose-finding method which has been extensively used in FDA-approved first-in-human trials, and imposed a variation of the MF-UD, known as the accelerated titration (AT) design. This event motivated us to perform an extensive simulation study comparing the operating characteristics of AT and EWOC. We show that the AT design has poor operating characteristics relative to three versions of EWOC under several practical scenarios. From the clinical investigator's perspective, lower bias and mean square error make EWOC designs preferable to AT designs without compromising safety. From a patient's perspective, a uniformly higher proportion of patients receiving doses within an optimal range of the true MTD makes EWOC designs preferable to AT designs.

Entropy 2015, 17(8), 5288–5303; doi:10.3390/e17085288; published 27 July 2015. André Rogatko, Galen Cook-Wiens, Mourad Tighiouart, Steven Piantadosi.

http://mdpi.com/1099-4300/17/8/5274
During competition for resources in primitive networks, increased fitness of an information variant does not necessarily equate with successful elimination of its competitors. If variability is added to a system quickly, speedy replacement of pre-existing and less-efficient forms of order is required as novel information variants arrive. Otherwise, the information capacity of the system fills up with information variants (an effect referred to as "error catastrophe"). As the cost for managing the system's exceeding complexity increases, the correlation between the performance capabilities of information variants and their competitive success decreases, and the evolution of such systems toward increased efficiency slows down. This impasse impedes the understanding of evolution in prebiotic networks. We used the simulation platform Biotic Abstract Dual Automata (BiADA) to analyze how information variants compete in a resource-limited space. We analyzed the effect of energy-related features (differences in autocatalytic efficiency, energy cost of order, energy availability, transformation rates and stability of order) on this competition. We discuss circumstances and controllers allowing primitive networks to acquire novel information with minimal "error catastrophe" risk. We present a primitive mechanism for the maximization of energy flux in dynamic networks. This work helps evaluate controllers of evolution in prebiotic networks and other systems where information variants compete.

Entropy 2015, 17(8), 5274–5287; doi:10.3390/e17085274; published 27 July 2015. Radu Popa, Vily Cimpoiasu.

http://mdpi.com/1099-4300/17/8/5257
On the basis of an analysis of the high-voltage direct current (HVDC) transmission system and its fault superimposed circuit, the direction of the fault components of the voltage and the current measured at one end of the transmission line is shown to differ for internal faults and external faults. As an estimate of the differences between two signals, relative entropy is an effective parameter for recognizing transient signals in HVDC transmission lines. In this paper, the relative entropy of wavelet energy is applied to distinguish internal faults from external faults. For internal faults, the directions of the fault components of voltage and current are opposite at the two ends of the transmission line, indicating a large difference in wavelet energy relative entropy; for external faults, the directions are identical, indicating a small difference. The simulation results based on PSCAD/EMTDC show that the proposed pilot protection system acts accurately for faults under different conditions, and its performance is not affected by fault type, fault location, fault resistance or noise.

Entropy 2015, 17(8), 5257–5273; doi:10.3390/e17085257; published 27 July 2015. Sheng Lin, Shan Gao, Zhengyou He, Yujia Deng.

http://mdpi.com/1099-4300/17/8/5241
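The relative entropy of wavelet energy used here is, at its core, a Kullback-Leibler divergence between two normalized energy distributions over wavelet sub-bands. A minimal sketch of that quantity (the band-energy values below are illustrative, not from the paper's simulations):

```python
import math

# Kullback-Leibler divergence between two wavelet-energy distributions,
# each normalized so its band energies sum to one.
def wavelet_energy_relative_entropy(e1, e2):
    p = [x / sum(e1) for x in e1]
    q = [x / sum(e2) for x in e2]
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

Identical energy distributions (as expected at the two line ends for an external fault) give a value near zero; opposed distributions (internal fault) give a large positive value.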
The aim of the present study was to characterize the neural network reorganization during a cognitive task in schizophrenia (SCH) by means of wavelet entropy (WE). Previous studies suggest that the cognitive impairment in patients with SCH could be related to the disrupted integrative functions of neural circuits. Nevertheless, further characterization of this effect is needed, especially in the time-frequency domain. This characterization is sensitive to fast neuronal dynamics and their synchronization that may be an important component of distributed neuronal interactions; especially in light of the disconnection hypothesis for SCH and its electrophysiological correlates. In this work, the irregularity dynamics elicited by an auditory oddball paradigm were analyzed through synchronized-averaging (SA) and single-trial (ST) analyses. They provide complementary information on the spatial patterns involved in the neural network reorganization. Our results from 20 healthy controls and 20 SCH patients showed a WE decrease from baseline to response both in controls and SCH subjects. These changes were significantly more pronounced for healthy controls after ST analysis, mainly in central and frontopolar areas. On the other hand, SA analysis showed more widespread spatial differences than ST results. These findings suggest that the activation response is weakly phase-locked to stimulus onset in SCH and related to the default mode and salience networks. Furthermore, the less pronounced changes in WE from baseline to response for SCH patients suggest an impaired ability to reorganize neural dynamics during an oddball task.

Entropy 2015, 17(8), 5241–5256; doi:10.3390/e17085241; published 27 July 2015. Javier Gomez-Pilar, Jesús Poza, Alejandro Bachiller, Carlos Gómez, Vicente Molina, Roberto Hornero.

http://mdpi.com/1099-4300/17/8/5218
The dynamics of brain areas influenced by focal epilepsy can be studied using focal and non-focal electroencephalogram (EEG) signals. This paper presents a new method to detect focal and non-focal EEG signals based on an integrated index, termed the focal and non-focal index (FNFI), developed using the discrete wavelet transform (DWT) and entropy features. The DWT decomposes the EEG signals up to six levels, and various entropy measures are computed from approximate and detail coefficients of sub-band signals. The computed entropy measures are average wavelet, permutation, fuzzy and phase entropies. The proposed FNFI developed using permutation, fuzzy and Shannon wavelet entropies is able to clearly discriminate focal and non-focal EEG signals using a single number. Furthermore, these entropy measures are ranked using different techniques, namely the Bhattacharyya space algorithm, Student's t-test, the Wilcoxon test, the receiver operating characteristic (ROC) and entropy. These ranked features are fed to various classifiers, namely k-nearest neighbour (KNN), probabilistic neural network (PNN), fuzzy classifier and least squares support vector machine (LS-SVM), for automated classification of focal and non-focal EEG signals using the minimum number of features. The identification of the focal EEG signals can be helpful to locate the epileptogenic focus.

Entropy 2015, 17(8), 5218–5240; doi:10.3390/e17085218; published 27 July 2015. Rajeev Sharma, Ram Pachori, U. Acharya.

http://mdpi.com/1099-4300/17/8/5199
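One of the entropy features named above, permutation entropy, is easy to state concretely: it is the Shannon entropy of the distribution of ordinal patterns among consecutive m-sample windows of the signal. A small generic sketch (not the paper's implementation):

```python
import math
from collections import Counter

# Permutation entropy of order m: count the ordinal pattern of each window
# of m consecutive samples, then take the Shannon entropy of the counts.
def permutation_entropy(x, m=3):
    patterns = Counter(
        tuple(sorted(range(m), key=lambda k: x[i + k]))
        for i in range(len(x) - m + 1)
    )
    n = sum(patterns.values())
    return -sum((c / n) * math.log(c / n) for c in patterns.values())
```

A monotonic signal produces a single ordinal pattern and hence zero entropy, while irregular signals spread probability over many patterns and score higher.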
Based on two fractional-order chaotic complex drive systems and one fractional-order chaotic complex response system with different dimensions, we propose generalized combination complex synchronization. In this new synchronization scheme, there are two complex scaling matrices that are non-square matrices. On the basis of the stability theory of fractional-order linear systems, we design a general controller via active control. Additionally, by virtue of two complex scaling matrices, generalized combination complex synchronization between fractional-order chaotic complex systems and real systems is investigated. Finally, three typical examples are given to demonstrate the effectiveness and feasibility of the schemes.

Entropy 2015, 17(8), 5199–5217; doi:10.3390/e17085199; published 24 July 2015. Cuimei Jiang, Shutang Liu, Da Wang.

http://mdpi.com/1099-4300/17/8/5171
Assuming sparsity or compressibility of the underlying signals, compressed sensing or compressive sampling (CS) exploits the informational efficiency of under-sampled measurements for increased efficiency yet acceptable accuracy in information gathering, transmission and processing, though it often incurs extra computational cost in signal reconstruction. Shannon information quantities and theorems, such as source rate-distortion, trans-information and the rate distortion theorem concerning lossy data compression, provide a coherent framework, complementary to classic CS theory, for analyzing informational quantities and for determining the necessary number of measurements in CS. While there exists some past information-theoretic research on CS in general and compressive radar imaging in particular, systematic research is needed to handle issues related to scene description in cluttered environments and trans-information quantification in complex sparsity-clutter-sampling-noise settings. The novelty of this paper lies in furnishing a general strategy for information-theoretic analysis of scene compressibility, of the trans-information of radar echo data about the scene and the targets of interest, respectively, and of limits to the undersampling ratios necessary for scene reconstruction subject to distortion given sparsity-clutter-noise constraints. A computational experiment was performed to demonstrate informational analysis regarding the scene-sampling-reconstruction process and to generate phase transition diagrams showing relations between undersampling ratios and sparsity-clutter-noise-distortion constraints. The strategy proposed in this paper is valuable for information-theoretic analysis and undersampling theorem developments in compressive radar imaging and other computational imaging applications.

Entropy 2015, 17(8), 5171–5198; doi:10.3390/e17085171; published 24 July 2015. Jingxiong Zhang, Ke Yang, Fengzhu Liu, Ying Zhang.

http://mdpi.com/1099-4300/17/8/5157
A nonlocal model for heat transfer with phonons and electrons is applied to infer the steady-state radial temperature profile in a circular layer surrounding an inner hot component. Such a profile, obtained from the numerical solution of the heat equation, predicts that the temperature behaves in an anomalous way: for radial distances from the heat source smaller than the mean-free path of phonons and electrons, it increases with increasing distance. The compatibility of this temperature behavior with the second law of thermodynamics is investigated by calculating numerically the local entropy production as a function of the radial distance. It turns out that such a production is positive and strictly decreasing with the radial distance.

Entropy 2015, 17(8), 5157–5170; doi:10.3390/e17085157; published 24 July 2015. Vito Cimmelli, Isabella Carlomagno, Antonio Sellitto.

http://mdpi.com/1099-4300/17/8/5145
Black hole solutions in pure quadratic theories of gravity are interesting since they allow the formulation of a set of scale-invariant thermodynamics laws. Recently, we have proven that static scale-invariant black holes have a well-defined entropy, which characterizes equivalent classes of solutions. In this paper, we generalize these results and explore the thermodynamics of rotating black holes in pure quadratic gravity.

Entropy 2015, 17(8), 5145–5156; doi:10.3390/e17085145; published 23 July 2015. Guido Cognola, Massimiliano Rinaldi, Luciano Vanzo.

http://mdpi.com/1099-4300/17/7/5133
A hypsometric map is a type of map used to represent topographic characteristics by filling different map areas with diverging colors. The setting of appropriate diverging colors is essential for the map to reveal topographic details. When real lunar environmental exploration programs are performed, large-scale hypsometric maps with a high resolution and greater topographic detail are helpful. Compared to the situation on Earth, fewer lunar exploration objects are available, and the topographic waviness is smaller at a large scale, indicating that presenting the topographic details using traditional hypsometric map-making methods may be difficult. To solve this problem, we employed the Chang'E2 (CE2) topographic and imagery data with a resolution of 7 m and developed a new hypsometric map-making method that sets the diverging colors based on information entropy. The resulting map showed that this method is suitable for presenting topographic details and may be useful for developing a better understanding of the lunar surface environment.

Entropy 2015, 17(7), 5133–5144; doi:10.3390/e17075133; published 22 July 2015. Xingguo Zeng, Lingli Mu, Jianjun Liu, Yiman Yang.

http://mdpi.com/1099-4300/17/7/5117
A path analysis method for causal systems based on generalized linear models is proposed by using entropy. A practical example is introduced, and a brief explanation of the entropy coefficient of determination is given. Direct and indirect effects of explanatory variables are discussed as log odds ratios, i.e., relative information, and a method for summarizing the effects is proposed. The example dataset is re-analyzed by using the method.

Entropy 2015, 17(7), 5117–5132; doi:10.3390/e17075117; published 22 July 2015. Nobuoki Eshima, Minoru Tabata, Claudio Borroni, Yutaka Kano.

http://mdpi.com/1099-4300/17/7/5101
In applied investigations, the invariance of the Lyapunov dimension under a diffeomorphism is often used. However, in the case of irregular linearization, this fact was not strictly considered in the classical works. In the present work, the invariance of the Lyapunov dimension under diffeomorphism is demonstrated in the general case. This fact is used to obtain the analytic exact upper bound of the Lyapunov dimension of an attractor of the Shimizu–Morioka system.

Entropy 2015, 17(7), 5101–5116; doi:10.3390/e17075101; published 22 July 2015. Gennady Leonov, Tatyana Alexeeva, Nikolay Kuznetsov.

http://mdpi.com/1099-4300/17/7/5085
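For orientation, the Lyapunov dimension of an attractor is commonly estimated from an ordered Lyapunov spectrum via the Kaplan–Yorke formula. A sketch with illustrative exponent values (not the paper's Shimizu–Morioka bound):

```python
# Kaplan-Yorke formula: with exponents sorted in decreasing order, find the
# largest j whose partial sum is non-negative, then interpolate into the
# (j+1)-th exponent.
def kaplan_yorke_dimension(exponents):
    s, j = 0.0, 0
    for lam in exponents:
        if s + lam < 0:
            break
        s += lam
        j += 1
    if j == 0:
        return 0.0
    if j == len(exponents):
        return float(j)
    return j + s / abs(exponents[j])
```

For a Lorenz-like spectrum (0.9, 0, -14.5) this gives 2 + 0.9/14.5, a fractal dimension slightly above 2, matching the intuition of a thin chaotic attractor.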
This work presents a new general purpose classifier named Averaged Extended Tree Augmented Naive Bayes (AETAN), which is based on combining the advantageous characteristics of the Extended Tree Augmented Naive Bayes (ETAN) and Averaged One-Dependence Estimator (AODE) classifiers. We describe the main properties of the approach and algorithms for learning it, along with an analysis of its computational time complexity. Empirical results with numerous data sets indicate that the new approach is superior to ETAN and AODE in terms of both zero-one classification accuracy and log loss. It also compares favourably against weighted AODE and hidden Naive Bayes. The learning phase of the new approach is slower than that of its competitors, while the time complexity for the testing phase is similar. Such characteristics suggest that the new classifier is ideal in scenarios where online learning is not required.

Entropy 2015, 17(7), 5085–5100; doi:10.3390/e17075085; published 21 July 2015. Aaron Meehan, Cassio de Campos.

http://mdpi.com/1099-4300/17/7/5063
We study the entanglement of a pure state of a composite quantum system consisting of several subsystems with d levels each. It can be described by the Rényi–Ingarden–Urbanik entropy Sq of a decomposition of the state in a product basis, minimized over all local unitary transformations. In the case q = 0, this quantity becomes a function of the rank of the tensor representing the state, while in the limit q → ∞, the entropy becomes related to the overlap with the closest separable state and the geometric measure of entanglement. For any bipartite system, the entropy S1 coincides with the standard entanglement entropy. We analyze the distribution of the minimal entropy for random states of three- and four-qubit systems. In the former case, the distribution of the three-tangle is studied and some of its moments are evaluated, while in the latter case, we analyze the distribution of the hyperdeterminant. The behavior of the maximum overlap of a three-qudit system with the closest separable state is also investigated in the asymptotic limit.

Entropy 2015, 17(7), 5063–5084; doi:10.3390/e17075063; published 20 July 2015. Marco Enríquez, Zbigniew Puchała, Karol Życzkowski.

http://mdpi.com/1099-4300/17/7/5047
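The Rényi entropy underlying the Sq family used here interpolates between the cases named in the abstract: S_0 counts the nonzero terms (related to tensor rank) and S_1 is the Shannon entropy. A generic sketch for a probability vector (the minimization over local unitaries in the paper is not attempted here):

```python
import math

# Renyi entropy S_q = log(sum p_i^q) / (1 - q), with the q = 0 and q = 1
# cases handled as limits (log of support size and Shannon entropy).
def renyi_entropy(p, q):
    p = [pi for pi in p if pi > 0]
    if q == 0:
        return math.log(len(p))
    if abs(q - 1.0) < 1e-12:
        return -sum(pi * math.log(pi) for pi in p)
    return math.log(sum(pi ** q for pi in p)) / (1.0 - q)
```

For the uniform two-outcome distribution, every q gives ln 2, reflecting the fact that the Rényi family collapses to a single value on flat distributions.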
Thermodynamic disequilibrium is a necessary situation in a system in which complex emergent structures are created and maintained. It is known that most of the chemical disequilibrium, a particular type of thermodynamic disequilibrium, in Earth's atmosphere is a consequence of life. We have developed a thermochemical model for the Martian atmosphere to analyze the disequilibrium caused by chemical reactions by calculating the entropy production. It follows from the comparison with the Earth's atmosphere that the magnitude of the entropy produced by the recombination reaction forming O3 (O + O2 + CO2 ⇌ O3 + CO2) in the atmosphere of the Earth is larger than the entropy produced by the dominant set of chemical reactions considered for Mars, as a consequence of the low density and the poor variety of species of the Martian atmosphere. If disequilibrium is needed to create and maintain self-organizing structures in a system, we conclude that the current Martian atmosphere is unable to support large physico-chemical structures, such as those created on Earth.

Entropy 2015, 17(7), 5047–5062; doi:10.3390/e17075047; published 20 July 2015. Alfonso Delgado-Bonal, F. Martín-Torres.

http://mdpi.com/1099-4300/17/7/5043
In a recent PRL (2013, 111, 180604), we invoked the Shore and Johnson axioms which demonstrate that the least-biased way to infer probability distributions {p_i} from data is to maximize the Boltzmann-Gibbs entropy. We then showed which biases are introduced in models obtained by maximizing nonadditive entropies. A rebuttal of our work appears in Entropy (2015, 17, 2853) and argues that the Shore and Johnson axioms are inapplicable to a wide class of complex systems. Here we highlight the errors in this reasoning.Entropy2015-07-17177Reply10.3390/e17075043504350461099-43002015-07-17doi: 10.3390/e17075043Steve PresséKingshuk GhoshJulian LeeKen Dillhttp://mdpi.com/1099-4300/17/7/5022
Ground state properties and level statistics of the Dicke model for a finite number of atoms are investigated based on a progressive diagonalization scheme (PDS). Particle number statistics, the entanglement measure and the Shannon information entropy at the resonance point in cases with a finite number of atoms as functions of the coupling parameter are calculated. It is shown that the entanglement measure defined in terms of the normalized von Neumann entropy of the reduced density matrix of the atoms reaches its maximum value at the critical point of the quantum phase transition where the system is most chaotic. Noticeable change in the Shannon information entropy near or at the critical point of the quantum phase transition is also observed. In addition, the quantum phase transition may be observed not only in the ground state mean photon number and the ground state atomic inversion as shown previously, but also in fluctuations of these two quantities in the ground state, especially in the atomic inversion fluctuation.Entropy2015-07-16177Article10.3390/e17075022502250421099-43002015-07-16doi: 10.3390/e17075022Lina BaoFeng PanJing LuJerry Draayerhttp://mdpi.com/1099-4300/17/7/5000
A prior definition of seismogenic zones is required to perform probabilistic seismic hazard analysis in areas of diffuse and low seismic activity. Traditional zoning methods are based on the available seismic catalog and the geological structures. It is generally accepted that thermal and resistant parameters of the crust provide better criteria for zoning. Nonetheless, the derivation of rheological profiles involves great uncertainty. This has generated inconsistencies, as different zones have been proposed for the same area. A new method for seismogenic zoning by means of triclustering is proposed in this research. Its main advantage is that it is based solely on seismic data. Almost no human decision is involved, and therefore, the method is nearly unbiased. To assess its performance, the method has been applied to the Iberian Peninsula, which is characterized by the occurrence of small to moderate magnitude earthquakes. The catalog of the National Geographic Institute of Spain has been used. The output map has been checked for validity against the geology. Moreover, a geographic information system has been used for two purposes: first, to depict the obtained zones; second, to calculate the seismic parameters (b-value, annual rate) from the data. Finally, the results have been compared to Kohonen’s self-organizing maps.Entropy2015-07-16177Article10.3390/e17075000500050211099-43002015-07-16doi: 10.3390/e17075000Francisco Martínez-ÁlvarezDavid Gutiérrez-AvilésAntonio Morales-EstebanJorge ReyesJosé Amaro-MelladoCristina Rubio-Escuderohttp://mdpi.com/1099-4300/17/7/4986
A common statistical situation concerns inferring an unknown distribution Q(x) from a known distribution P(y), where X (dimension n) and Y (dimension m) have a known functional relationship. Most commonly, n ≤ m, and the task is relatively straightforward for well-defined functional relationships. For example, if Y1 and Y2 are independent random variables, each uniform on [0, 1], one can determine the distribution of X = Y1 + Y2; here m = 2 and n = 1. However, biological and physical situations can arise where n > m and the functional relation Y→X is non-unique. In general, in the absence of additional information, there is no unique solution to Q in those cases. Nevertheless, one may still want to draw some inferences about Q. To this end, we propose a novel maximum entropy (MaxEnt) approach that estimates Q(x) based only on the available data, namely, P(y). The method has the additional advantage that one does not need to explicitly calculate the Lagrange multipliers. In this paper we develop the approach, for both discrete and continuous probability distributions, and demonstrate its validity. We give an intuitive justification as well, and we illustrate with examples.Entropy2015-07-15177Article10.3390/e17074986498649991099-43002015-07-15doi: 10.3390/e17074986Jayajit Sayak MukherjeeSusan Hodgehttp://mdpi.com/1099-4300/17/7/4974
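The worked case mentioned above (the straightforward n ≤ m direction, not the paper's MaxEnt inverse problem) can be checked numerically; the sample size and tolerances here are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Y1, Y2 independent uniform on [0, 1], X = Y1 + Y2.  The pushforward
# density of X is the triangular density q(x) = x on [0, 1] and
# q(x) = 2 - x on [1, 2]; we spot-check it by Monte Carlo.
y = rng.uniform(0.0, 1.0, size=(2, 200_000))
x = y.sum(axis=0)

# By symmetry of the triangular density, P(X <= 1) = 1/2.
p_half = float(np.mean(x <= 1.0))

# E[X] = E[Y1] + E[Y2] = 1.
mean_x = float(x.mean())
```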
In communication, because of the transmission lag, the signal at the receiver at time t is the signal from the transmitter at time t − τ, where τ ≥ 0 is the lag time. Therefore, lag synchronization (LS) is more appropriate than complete synchronization for designing communication schemes. Taking the complex Lorenz system as an example, we design the LS controller according to error feedback. Using chaotic masking, we propose a communication scheme based on LS and independent component analysis (ICA). The scheme can transmit multiple messages of varying amplitudes and is robust to noise. Numerical simulations verify the feasibility and effectiveness of the presented schemes.Entropy2015-07-15177Article10.3390/e17074974497449851099-43002015-07-15doi: 10.3390/e17074974Fangfang Zhanghttp://mdpi.com/1099-4300/17/7/4959
This study developed niche models for the native ranges of Oreochromis andersonii, O. mortimeri, and O. mossambicus, and assessed how much of their range is climatically suitable for the establishment of O. niloticus, and then reviewed the conservation implications for indigenous congenerics as a result of overlap with O. niloticus based on documented congeneric interactions. The predicted potential geographical range of O. niloticus reveals a broad climatic suitability over most of southern Africa and overlaps with all the endemic congenerics. This is of major conservation concern because six of the eight river systems predicted to be suitable for O. niloticus have already been invaded and now support established populations. Oreochromis niloticus has been implicated in reducing the abundance of indigenous species through competitive exclusion and hybridisation. Despite these well-documented adverse ecological effects, O. niloticus remains one of the most widely cultured and propagated fish species in aquaculture and stock enhancements in the southern Africa sub-region. Aquaculture is perceived as a means of protein security, poverty alleviation, and economic development and, as such, any future decisions on its introduction will be based on the trade-off between socio-economic benefits and potential adverse ecological effects.Entropy2015-07-14177Article10.3390/e17074959495949731099-43002015-07-14doi: 10.3390/e17074959Tsungai ZengeyaAnthony BoothChristian Chimimbahttp://mdpi.com/1099-4300/17/7/4940
Identity authentication is the process of verifying users’ validity. Unlike classical key-based authentications, which are built on noiseless channels, this paper introduces a general analysis and design framework for identity authentication over noisy channels. Specifically, the authentication scenarios of single time and multiple times are investigated. For each scenario, the lower bound on the opponent’s success probability is derived, and it is smaller than the classical identity authentication’s. In addition, it can remain the same, even if the secret key is reused. Remarkably, the Cartesian authentication code proves to be helpful for hiding the secret key to maximize the secrecy performance. Finally, we show a potential application of this authentication technique.Entropy2015-07-14177Article10.3390/e17074940494049581099-43002015-07-14doi: 10.3390/e17074940Fanfan ZhengZhiqing XiaoShidong ZhouJing WangLianfen Huanghttp://mdpi.com/1099-4300/17/7/4918
A set of Fisher information properties are presented in order to draw a parallel with similar properties of Shannon differential entropy. Already known properties are presented together with new ones, which include: (i) a generalization of mutual information for Fisher information; (ii) a new proof that Fisher information increases under conditioning; (iii) showing that Fisher information decreases in Markov chains; and (iv) a bound on estimation error using Fisher information. This last result is especially important, because it completes Fano’s inequality, i.e., a lower bound for estimation error, showing that Fisher information can be used to define an upper bound for this error. In this way, it is shown that Shannon’s differential entropy, which quantifies the behavior of the random variable, and the Fisher information, which quantifies the internal structure of the density function that defines the random variable, can be used to characterize the estimation error.Entropy2015-07-13177Article10.3390/e17074918491849391099-43002015-07-13doi: 10.3390/e17074918Pablo Zegershttp://mdpi.com/1099-4300/17/7/4891
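As a small numerical illustration of the Fisher information quantity being paralleled with differential entropy (a sketch with an assumed Gaussian location family, not an example taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Gaussian location family N(theta, sigma^2): the score is
# d/dtheta log p(x; theta) = (x - theta) / sigma^2, and the Fisher
# information is its variance, I(theta) = 1 / sigma^2.
theta, sigma = 0.0, 2.0
x = rng.normal(theta, sigma, size=500_000)
score = (x - theta) / sigma**2
fisher_mc = float(score.var())   # Monte Carlo estimate of I(theta)
fisher_exact = 1.0 / sigma**2    # exact value, 0.25
```

The Cramér–Rao bound then caps how well theta can be estimated: any unbiased estimator from one sample has variance at least 1/I(theta), which is the sense in which Fisher information bounds estimation error.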
Renewal processes are broadly used to model stochastic behavior consisting of isolated events separated by periods of quiescence, whose durations are specified by a given probability law. Here, we identify the minimal sufficient statistic for their prediction (the set of causal states), calculate the historical memory capacity required to store those states (statistical complexity), delineate what information is predictable (excess entropy), and decompose the entropy of a single measurement into that shared with the past, future, or both. The causal state equivalence relation defines a new subclass of renewal processes with a finite number of causal states despite having an unbounded interevent count distribution. We use the resulting formulae to analyze the output of the parametrized Simple Nonunifilar Source, generated by a simple two-state hidden Markov model, but with an infinite-state machine presentation. All in all, the results lay the groundwork for analyzing more complex processes with infinite statistical complexity and infinite excess entropy.Entropy2015-07-13177Article10.3390/e17074891489149171099-43002015-07-13doi: 10.3390/e17074891Sarah MarzenJames Crutchfieldhttp://mdpi.com/1099-4300/17/7/4863
In this article, we analyze the interrelationships among such notions as entropy, information, complexity, order and chaos and show using the theory of categories how to generalize the second law of thermodynamics as a law of increasing generalized entropy or a general law of complification. This law could be applied to any system with morphisms, including all of our universe and its subsystems. We discuss how such a general law and other laws of nature drive the evolution of the universe, including physicochemical and biological evolutions. In addition, we determine eliminating selection in physicochemical evolution as an extremely simplified prototype of natural selection. Laws of nature do not allow complexity and entropy to reach maximal values by generating structures. One could consider them as a kind of “breeder” of such selection.Entropy2015-07-10177Article10.3390/e17074863486348901099-43002015-07-10doi: 10.3390/e17074863George MikhailovskyAlexander Levichhttp://mdpi.com/1099-4300/17/7/4838
Joining independent quantum searches provides novel collective modes of quantum search (merging) by utilizing the algorithm’s underlying algebraic structure. If n quantum searches, each targeting a single item, join the domains of their classical oracle functions and sum their Hilbert spaces (merging), instead of acting independently (concatenation), then they achieve a reduction of the search complexity by a factor of O(√n).Entropy2015-07-10177Article10.3390/e17074838483848621099-43002015-07-10doi: 10.3390/e17074838Demosthenes EllinasChristos Konstandakishttp://mdpi.com/1099-4300/17/7/4809
This paper gives the quantitative relationship between prediction error and the given past sample size in our research on sea level time series. The results show that the prediction error of sea level time series decays as a power function of the given past sample size, providing a quantitative guideline for the quality control of sea level prediction.Entropy2015-07-09177Article10.3390/e17074809480948371099-43002015-07-09doi: 10.3390/e17074809Ming LiYuanchun LiJianxing Lenghttp://mdpi.com/1099-4300/17/7/4786
In this work, the irreversible processes in light heating of Silicon (Si) and Germanium (Ge) thin films are examined. Each film is exposed to light irradiation with radiative and convective boundary conditions. Heat, electron and hole transport and generation-recombination processes of electron-hole pairs are studied in terms of a phenomenological model obtained from basic principles of irreversible thermodynamics. We present an analysis of the contributions to the entropy production in the stationary state due to the dissipative effects associated with electron and hole transport, generation-recombination of electron-hole pairs as well as heat transport. The most significant contribution to the entropy production comes from the interaction of light with the medium in both Si and Ge. This interaction includes two processes, namely, the generation of electron-hole pairs and the transfer of energy from the absorbed light to the lattice. In Si, the next largest contribution comes from heat transport. In Ge, all the remaining contributions to entropy production have nearly the same order of magnitude. The results are compared and explained addressing the differences in the magnitude of the thermodynamic forces, Onsager’s coefficients and transport properties of Si and Ge.Entropy2015-07-09177Article10.3390/e17074786478648081099-43002015-07-09doi: 10.3390/e17074786José Nájera-CarpioFederico VázquezAldo Figueroahttp://mdpi.com/1099-4300/17/7/4775
Image splicing is a common operation in image forgery. Different techniques of image splicing detection have been utilized to regain people’s trust. This study introduces a texture enhancement technique involving the use of fractional differential masks based on the Machado entropy. The masks slide over the tampered image, and each pixel of the tampered image is convolved with the fractional mask weight window on eight directions. Consequently, the fractional differential texture descriptors are extracted using the gray-level co-occurrence matrix for image splicing detection. The support vector machine is used as a classifier that distinguishes between authentic and spliced images. Results prove that the achieved improvements of the proposed algorithm are compatible with other splicing detection methods.Entropy2015-07-08177Article10.3390/e17074775477547851099-43002015-07-08doi: 10.3390/e17074775Rabha IbrahimZahra MoghaddasiHamid JalabRafidah Noorhttp://mdpi.com/1099-4300/17/7/4762
The problem of robust H∞ control is investigated for Markov jump systems with nonlinear noise intensity function and uncertain transition rates. A robust H∞ performance criterion is developed for the given systems for the first time. Based on the developed performance criterion, the desired H∞ state-feedback controller is also designed, which guarantees the robust H∞ performance of the closed-loop system. All the conditions are in terms of linear matrix inequalities (LMIs), and hence they can be readily solved by any LMI solver. Finally, a numerical example is provided to demonstrate the effectiveness of the proposed methods.Entropy2015-07-06177Article10.3390/e17074762476247741099-43002015-07-06doi: 10.3390/e17074762Xiaonian WangYafeng Guohttp://mdpi.com/1099-4300/17/7/4744
The performance characteristics of an ejector-expansion refrigeration cycle (EEC) using R32 have been investigated in comparison with those of a cycle using R134a. The coefficient of performance (COP), the exergy destruction, the exergy efficiency and the suction nozzle pressure drop (SNPD) are discussed. The results show that the application of an ejector instead of a throttle valve in the R32 cycle decreases the cycle’s total exergy destruction by 8.84%–15.84% in comparison with the basic cycle (BC). The R32 EEC provides 5.22%–13.77% COP improvement and 5.13%–13.83% exergy efficiency improvement respectively over the BC for the given ranges of evaporating and condensing temperatures. There exists an optimum SNPD which gives a maximum system COP and volumetric cooling capacity (VCC) under a specified condition. The value of the optimum SNPD mainly depends on the efficiencies of the ejector components, but is virtually independent of evaporating temperature and condensing temperature. In addition, improving the component efficiencies, especially those of the diffusion nozzle and the motive nozzle, can enhance the EEC performance.Entropy2015-07-06177Article10.3390/e17074744474447611099-43002015-07-06doi: 10.3390/e17074744Zhenying ZhangLirui TongLi ChangYanhua ChenXingguo Wanghttp://mdpi.com/1099-4300/17/7/4701
We study the asymptotic law of a network of interacting neurons when the number of neurons becomes infinite. Given a completely connected network of neurons in which the synaptic weights are Gaussian correlated random variables, we describe the asymptotic law of the network when the number of neurons goes to infinity. We introduce the process-level empirical measure of the trajectories of the solutions to the equations of the finite network of neurons and the averaged law (with respect to the synaptic weights) of the trajectories of the solutions to the equations of the network of neurons. The main result of this article is that the image law through the empirical measure satisfies a large deviation principle with a good rate function which is shown to have a unique global minimum. Our analysis of the rate function allows us also to characterize the limit measure as the image of a stationary Gaussian measure defined on a transformed set of trajectories.Entropy2015-07-06177Article10.3390/e17074701470147431099-43002015-07-06doi: 10.3390/e17074701Olivier FaugerasJames MacLaurinhttp://mdpi.com/1099-4300/17/7/4684
Surface tension and surface energy are closely related, although not identical concepts. Surface tension is a generalized force; unlike a conventional mechanical force, it is not applied to any particular body or point. Using this notion, we suggest a simple geometric interpretation of the Young, Wenzel, Cassie, Antonoff and Girifalco–Good equations for the equilibrium during wetting. This approach extends the traditional concept of Neumann’s triangle. Substances are presented as points, while tensions are vectors connecting the points, and the equations and inequalities of wetting equilibrium obtain simple geometric meaning with the surface roughness effect interpreted as stretching of corresponding vectors; surface heterogeneity is their linear combination, and contact angle hysteresis is rotation. We discuss energy dissipation mechanisms during wetting due to contact angle hysteresis, the superhydrophobicity and the possible entropic nature of the surface tension.Entropy2015-07-06177Article10.3390/e17074684468447001099-43002015-07-06doi: 10.3390/e17074684Michael NosonovskyRahul Ramachandranhttp://mdpi.com/1099-4300/17/7/4664
The classic principal components analysis (PCA), kernel PCA (KPCA) and linear discriminant analysis (LDA) feature extraction methods evaluate the importance of components according to their covariance contribution, not considering the entropy contribution, which is important supplementary information for the covariance. To further improve the covariance-based methods such as PCA (or KPCA), this paper firstly proposed an entropy matrix to load the uncertainty information of random variables similar to the covariance matrix loading the variation information in PCA. Then an entropy-difference matrix was used as a weighting matrix for transforming the original training images. This entropy-difference weighting (EW) matrix not only made good use of the local information of the training samples, contrast to the global method of PCA, but also considered the category information similar to LDA idea. Then the EW method was integrated with PCA (or KPCA), to form new feature extracting method. The new method was used for face recognition with the nearest neighbor classifier. The experimental results based on the ORL and Yale databases showed that the proposed method with proper threshold parameters reached higher recognition rates than the usual PCA (or KPCA) methods.Entropy2015-07-03177Article10.3390/e17074664466446831099-43002015-07-03doi: 10.3390/e17074664Shunfang WangPing Liuhttp://mdpi.com/1099-4300/17/7/4654
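For context, the covariance-based ranking that the proposed entropy-difference weighting is meant to supplement can be sketched as plain PCA (this is the baseline method only, not the authors' EW construction; the synthetic data and dimensions are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)

# Plain PCA ranks components purely by covariance contribution --
# the limitation the entropy-difference weighting addresses.
# Synthetic data: 5 features with decreasing variance.
X = rng.normal(size=(100, 5)) @ np.diag([3.0, 2.0, 1.0, 0.5, 0.1])
Xc = X - X.mean(axis=0)                      # center the data
cov = Xc.T @ Xc / (len(X) - 1)               # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)       # eigh returns ascending order
order = np.argsort(eigvals)[::-1]            # rank by covariance contribution
explained = eigvals[order] / eigvals.sum()   # variance ratio per component
top2 = eigvecs[:, order[:2]]                 # keep the top k = 2 directions
Z = Xc @ top2                                # reduced features
```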
Existing experimental implementations of continuous-variable quantum key distribution require shot-noise limited operation, achieved with shot-noise limited lasers. However, loosening this requirement on the laser source would allow for cheaper, potentially integrated systems. Here, we implement a theoretically proposed prepare-and-measure continuous-variable protocol and experimentally demonstrate its robustness against preparation noise stemming, for instance, from technical laser noise. Provided that direct reconciliation techniques are used in the post-processing, we show that for small distances large amounts of preparation noise can be tolerated, in contrast to reverse reconciliation, where the key rate quickly drops to zero. Our experiment thereby demonstrates that quantum key distribution with non-shot-noise limited laser diodes might be feasible.Entropy2015-07-03177Article10.3390/e17074654465446631099-43002015-07-03doi: 10.3390/e17074654Christian JacobsenTobias GehringUlrik Andersenhttp://mdpi.com/1099-4300/17/7/4644
We consider the problem of defining a measure of redundant information that quantifies how much common information two or more random variables specify about a target random variable. We discuss desired properties of such a measure and propose new measures with some desirable properties.Entropy2015-07-02177Article10.3390/e17074644464446531099-43002015-07-02doi: 10.3390/e17074644Virgil GriffithTracey Hohttp://mdpi.com/1099-4300/17/7/4627
Permutation entropy (PE) has been widely exploited to measure the complexity of the electroencephalogram (EEG), especially when complexity is linked to diagnostic information embedded in the EEG. Recently, the authors proposed a spatial-temporal analysis of the EEG recordings of absence epilepsy patients based on PE. The goal here is to improve the ability of PE in discriminating interictal states from ictal states in absence seizure EEG. For this purpose, a parametrical definition of permutation entropy is introduced here in the field of epileptic EEG analysis: the permutation Rényi entropy (PEr). PEr has been extensively tested against PE by tuning the involved parameters (order, delay time and alpha). The achieved results demonstrate that PEr outperforms PE, as there is a statistically-significant, wider gap between the PEr levels during the interictal states and PEr levels observed in the ictal states compared to PE. PEr also outperformed PE as the input to a classifier aimed at discriminating interictal from ictal states.Entropy2015-07-02177Article10.3390/e17074627462746431099-43002015-07-02doi: 10.3390/e17074627Nadia MammoneJonas Duun-HenriksenTroels KjaerFrancesco Morabitohttp://mdpi.com/1099-4300/17/7/4602
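A minimal sketch of the idea (standard Bandt–Pompe permutation entropy with an optional Rényi parameter alpha in the spirit of PEr; the normalization and parameter conventions here are illustrative assumptions, not necessarily the authors' exact definitions):

```python
import math

def permutation_entropy(series, order=3, delay=1, alpha=None):
    """Normalized permutation entropy from Bandt-Pompe ordinal patterns.

    alpha=None gives the usual Shannon PE; any other alpha gives a
    Renyi variant in the spirit of PEr."""
    counts = {}
    for i in range(len(series) - (order - 1) * delay):
        window = [series[i + j * delay] for j in range(order)]
        # Ordinal pattern: argsort of the window (ties broken by index).
        pattern = tuple(sorted(range(order), key=window.__getitem__))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    p = [c / total for c in counts.values()]
    if alpha is None or abs(alpha - 1.0) < 1e-12:
        h = -sum(pi * math.log(pi) for pi in p)                     # Shannon
    else:
        h = math.log(sum(pi ** alpha for pi in p)) / (1.0 - alpha)  # Renyi
    return h / math.log(math.factorial(order))  # normalize to [0, 1]

pe_ramp = permutation_entropy(list(range(100)))          # one pattern -> 0
pe_mixed = permutation_entropy([4, 7, 9, 10, 6, 11, 3])  # several patterns -> > 0
pe_renyi = permutation_entropy([4, 7, 9, 10, 6, 11, 3], alpha=2.0)
```

A monotone ramp contains a single ordinal pattern and scores zero, while an irregular series populates more patterns; tuning order, delay and alpha is what the study above does to widen the interictal/ictal gap.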
In regression analysis for deriving scaling laws that occur in various scientific disciplines, usually standard regression methods have been applied, of which ordinary least squares (OLS) is the most popular. In many situations, the assumptions underlying OLS are not fulfilled, and several other approaches have been proposed. However, most techniques address only part of the shortcomings of OLS. We here discuss a new and more general regression method, which we call geodesic least squares regression (GLS). The method is based on minimization of the Rao geodesic distance on a probabilistic manifold. For the case of a power law, we demonstrate the robustness of the method on synthetic data in the presence of significant uncertainty on both the data and the regression model. We then show good performance of the method in an application to a scaling law in magnetic confinement fusion.Entropy2015-07-01177Article10.3390/e17074602460246261099-43002015-07-01doi: 10.3390/e17074602Geert Verdoolaegehttp://mdpi.com/1099-4300/17/7/4582
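For contrast, the OLS baseline that GLS is designed to improve on can be sketched as a log-log fit of a power law (a sketch on synthetic noiseless data; not the geodesic least squares method itself):

```python
import numpy as np

# Power law y = c * x^b becomes linear in log-log space:
# log y = log c + b * log x, so OLS on the logs recovers (b, c).
b_true, c_true = 2.0, 1.5
x = np.linspace(1.0, 10.0, 50)
y = c_true * x ** b_true                      # noiseless synthetic data
slope, intercept = np.polyfit(np.log(x), np.log(y), 1)
b_hat, c_hat = float(slope), float(np.exp(intercept))
```

On noiseless data this recovers the exponent exactly; the paper's point is that when both the data and the model carry significant uncertainty, the OLS assumptions break down and a geodesic distance on the probabilistic manifold is more robust.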
In this paper, we study the classical Sumudu transform in fuzzy environment, referred to as the fuzzy Sumudu transform (FST). We also propose some results on the properties of the FST, such as linearity, preserving, fuzzy derivative, shifting and convolution theorem. In order to show the capability of the FST, we provide a detailed procedure to solve fuzzy differential equations (FDEs). A numerical example is provided to illustrate the usage of the FST.Entropy2015-07-01177Article10.3390/e17074582458246011099-43002015-07-01doi: 10.3390/e17074582Norazrizal RahmanMuhammad Ahmadhttp://mdpi.com/1099-4300/17/7/4563
In this work, we present the generalization of some thermodynamic properties of the black body radiation (BBR) towards an n-dimensional Euclidean space. For this case, the Planck function and the Stefan–Boltzmann law have already been given by Landsberg and de Vos and some adjustments by Menon and Agrawal. However, since then, not much more has been done on this subject, and we believe there are some relevant aspects yet to explore. In addition to the results previously found, we calculate the thermodynamic potentials, the efficiency of the Carnot engine, the law for adiabatic processes and the heat capacity at constant volume. There is a region at which an interesting behavior of the thermodynamic potentials arises: maxima and minima appear for the n-dimensional BBR system at very high temperatures and low dimensionality, suggesting a possible application to cosmology. Finally, we propose that an optimality criterion in a thermodynamic framework could be related to the 3-dimensional nature of the universe.Entropy2015-06-29177Article10.3390/e17074563456345811099-43002015-06-29doi: 10.3390/e17074563Julian Gonzalez-AyalaJennifer Perez-OregonRubén CorderoFernando Angulo-Brownhttp://mdpi.com/1099-4300/17/7/4547
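As a hedged aside, the familiar n-dimensional scalings consistent with the Landsberg–de Vos results cited above can be summarized as (a sketch, not the authors' derivation):

```latex
u_n(T) \propto T^{\,n+1} \quad \text{(Stefan--Boltzmann law)}, \qquad
p = \frac{u_n}{n} \quad \text{(radiation pressure)}, \qquad
S \propto V\,T^{\,n} \;\Rightarrow\; V T^{\,n} = \text{const. (adiabats)}
```

For n = 3 these reduce to the familiar u ∝ T⁴, p = u/3 and VT³ = const.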
We propose a method to improve the performance of two entanglement-based continuous-variable quantum key distribution protocols using noiseless linear amplifiers. The two entanglement-based schemes consist of an entanglement distribution protocol with an untrusted source and an entanglement swapping protocol with an untrusted relay. Simulation results show that the noiseless linear amplifiers can improve the performance of these two protocols, in terms of maximal transmission distances, when we consider small amounts of entanglement, as typical in realistic setups.Entropy2015-06-26177Article10.3390/e17074547454745621099-43002015-06-26doi: 10.3390/e17074547Yichen ZhangZhengyu LiChristian WeedbrookKevin MarshallStefano PirandolaSong YuHong Guohttp://mdpi.com/1099-4300/17/7/4533
At present, many cloud services are managed by using open source software, such as OpenStack and Eucalyptus, because of the unified management of data, cost reduction, quick delivery and work savings. The operation phase of cloud computing has unique features, such as the provisioning processes, the network-based operation and the diversity of data, because it changes depending on many external factors. We propose a jump diffusion model with two-dimensional Wiener processes in order to consider the interesting aspects of the network traffic and big data on cloud computing. In particular, we assess the stability of cloud software by using the sample paths obtained from the jump diffusion model with two-dimensional Wiener processes. Moreover, we discuss the optimal maintenance problem based on the proposed jump diffusion model. Furthermore, we analyze actual data to show numerical examples of dependability optimization based on the software maintenance cost considering big data on cloud computing.Entropy2015-06-26177Article10.3390/e17074533453345461099-43002015-06-26doi: 10.3390/e17074533Yoshinobu TamuraShigeru Yamadahttp://mdpi.com/1099-4300/17/7/4519
In this paper, we propose a new entropy-optimized bivariate empirical mode decomposition (BEMD)-based model for estimating portfolio value at risk (PVaR). The model reveals and analyzes different components of the price fluctuation, which are decomposed and distinguished by the BEMD model according to their behavioral patterns and fluctuation ranges. Entropy theory is introduced for the identification of the model parameters during the modeling process. The decomposed bivariate data components are modeled with DCC-GARCH models. Empirical studies suggest that the proposed model outperforms the benchmark multivariate exponentially weighted moving average (MEWMA) and DCC-GARCH models, in terms of conventional out-of-sample performance evaluation criteria for model accuracy.Entropy2015-06-26177Article10.3390/e17074519451945321099-43002015-06-26doi: 10.3390/e17074519Yingchao ZouLean YuKaijian Hehttp://mdpi.com/1099-4300/17/7/4500
The present work analyzes the cognitive process that led Clausius towards the translation of the Second Law of Thermodynamics into mathematical expressions. We show that Clausius’ original formal expression of the Second Law was achieved by making extensive use of the concept of disgregation, a quantity which has subsequently disappeared from the thermodynamic language. Our analysis demonstrates that disgregation stands as a crucial logical step of such process and sheds light on the comprehension of such fundamental relation. The introduction of entropy—which occurred three years after the first formalization of the Second Law—was aimed at making the Second Law exploitable in practical contexts. The reasons for the disappearance of disgregation, as well as of other “pre-modern” quantities, from the thermodynamics language are discussed.Entropy2015-06-25177Article10.3390/e17074500450045181099-43002015-06-25doi: 10.3390/e17074500Emilio PellegrinoElena GhibaudiLuigi Cerrutihttp://mdpi.com/1099-4300/17/7/4485
A paper was published (Harsha and Subrahamanian Moosath, 2014) in which the authors claimed to have discovered an extension to Amari's \(\alpha\)-geometry through a general monotone embedding function. It is pointed out here that this so-called \((F, G)\)-geometry (which includes \(F\)-geometry as a special case) is identical to Zhang's (2004) extension to the \(\alpha\)-geometry, where the pair of monotone embedding functions was named \(\rho\) and \(\tau\) instead of the \(F\) and \(H\) used in Harsha and Subrahamanian Moosath (2014). Their weighting function \(G\) for the Riemannian metric appears cosmetically due to a rewrite of the score function in log-representation as opposed to the \((\rho, \tau)\)-representation in Zhang (2004). It is further shown here that the metric and \(\alpha\)-connections obtained by Zhang (2004) through arbitrary monotone embeddings constitute a unique extension of the \(\alpha\)-geometric structure. As a special case, Naudts' (2004) \(\phi\)-logarithm embedding (using the so-called \(\log_\phi\) function) is recovered with the identification \(\rho=\phi, \, \tau=\log_\phi\), with the \(\phi\)-exponential \(\exp_\phi\) given by the associated convex function linking the two representations.Entropy2015-06-25177Article10.3390/e17074485448544991099-43002015-06-25doi: 10.3390/e17074485Jun Zhanghttp://mdpi.com/1099-4300/17/6/4454
Vertical soil moisture profiles based on the principle of maximum entropy (POME) were validated using field and model data and applied to guide an irrigation cycle over a maize field in north central Alabama (USA). The results demonstrate that a simple two-constraint entropy model under the assumption of a uniform initial soil moisture distribution can simulate most soil moisture profiles that occur in the particular soil and climate regime that prevails in the study area. The results of the irrigation simulation demonstrated that the POME model produced a very efficient irrigation strategy with minimal losses (about 1.9% of total applied water). However, the results for finely-textured (silty clay) soils were problematic in that some plant stress did develop due to insufficient applied water. Soil moisture states in these soils fell to around 31% of available moisture content, but only on the last day of the drying side of the irrigation cycle. Overall, the POME approach showed promise as a general strategy to guide irrigation in humid environments, such as the Southeastern United States.Entropy2015-06-23176Article10.3390/e17064454445444841099-43002015-06-23doi: 10.3390/e17064454Vikalp MishraWalter EllenburgOsama Al-HamdanJosh BruceJames Cruisehttp://mdpi.com/1099-4300/17/6/4439
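A minimal POME-style sketch (a single mean constraint on a hypothetical discrete moisture grid, not the paper's two-constraint soil model) shows the mechanics of solving for the Lagrange multiplier:

```python
import numpy as np

# Maximize Shannon entropy over a discrete grid subject to a prescribed
# mean.  The solution has the Gibbs form p_i ~ exp(-lam * x_i); since the
# implied mean is strictly decreasing in lam, we can solve for the
# Lagrange multiplier lam by bisection.
x = np.linspace(0.0, 1.0, 101)   # hypothetical normalized moisture levels
target_mean = 0.3                # hypothetical constraint value

def mean_for(lam):
    w = np.exp(-lam * x)
    return float((x * w).sum() / w.sum())

lo, hi = -50.0, 50.0             # bracket chosen so the target lies inside
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mean_for(mid) > target_mean:
        lo = mid                 # mean too high -> need larger lam
    else:
        hi = mid
lam = 0.5 * (lo + hi)
p = np.exp(-lam * x)
p /= p.sum()                     # MaxEnt distribution with the target mean
```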
Using some investigations based on information theory, the model proposed by Keller and Segel was extended to fractional order using the fractional derivative without singular kernel recently proposed by Caputo and Fabrizio. We present in detail the existence of the coupled solutions using the fixed-point theorem. A detailed analysis of the uniqueness of the coupled solutions is also presented. Using an iterative approach, we derive special coupled solutions of the modified system and present some numerical simulations to illustrate the effect of the fractional order.
(Entropy 2015, 17(6), 4439–4453; Article; doi: 10.3390/e17064439; published 23 June 2015; by Abdon Atangana, Badr Alkahtani.)
http://mdpi.com/1099-4300/17/6/4413
The recent introduction of chronotaxic systems provides the means to describe nonautonomous systems with stable yet time-varying frequencies which are resistant to continuous external perturbations. This approach facilitates realistic characterization of the oscillations observed in living systems, including the observation of transitions in dynamics which were not considered previously. The novelty of this approach necessitated the development of a new set of methods for the inference of the dynamics and interactions present in chronotaxic systems. These methods, based on Bayesian inference and detrended fluctuation analysis, can identify chronotaxicity in phase dynamics extracted from a single time series. Here, they are applied to numerical examples and real experimental electroencephalogram (EEG) data. We also review the current methods, including their assumptions and limitations, elaborate on their implementation, and discuss future perspectives.
(Entropy 2015, 17(6), 4413–4438; Article; doi: 10.3390/e17064413; published 23 June 2015; by Gemma Lancaster, Philip Clemson, Yevhen Suprunenko, Tomislav Stankovski, Aneta Stefanovska.)
http://mdpi.com/1099-4300/17/6/4364
This work considers reasons for and implications of discarding the assumption of transitivity, the fundamental postulate in the utility theory of von Neumann and Morgenstern, the adiabatic accessibility principle of Carathéodory and most other theories related to preferences or competition. The examples of intransitivity are drawn from different fields, such as law, biology and economics. This work is intended as a common platform that allows us to discuss intransitivity in the context of different disciplines. The basic concepts and terms that are needed for consistent treatment of intransitivity in various applications are presented and analysed in a unified manner. The analysis points out conditions that necessitate the appearance of intransitivity, such as multiplicity of preference criteria and imperfect (i.e., approximate) discrimination of different cases. The present work observes that with increasing presence and strength of intransitivity, thermodynamics gradually fades away, leaving space for more general kinetic considerations. Intransitivity in competitive systems is linked to complex phenomena that would be difficult or impossible to explain on the basis of transitive assumptions. Human preferences that seem irrational from the perspective of the conventional utility theory become perfectly logical in the intransitive and relativistic framework suggested here. The example of competitive simulations for the risk/benefit dilemma demonstrates the significance of intransitivity in cyclic behaviour and abrupt changes in the system. The evolutionary intransitivity parameter, which is introduced in the Appendix, is a general measure of intransitivity, which is particularly useful in evolving competitive systems.
(Entropy 2015, 17(6), 4364–4412; Article; doi: 10.3390/e17064364; published 19 June 2015; by Alexander Klimenko.)
http://mdpi.com/1099-4300/17/6/4323
Information Geometry generalizes to infinite dimension by modeling the tangent space of the relevant manifold of probability densities with exponential Orlicz spaces. We review here several properties of the exponential manifold on a suitable set Ɛ of mutually absolutely continuous densities. We study in particular the fine properties of the Kullback–Leibler divergence in this context. We also show that this setting is well-suited for the study of the spatially homogeneous Boltzmann equation if Ɛ is a set of positive densities with finite relative entropy with respect to the Maxwell density. More precisely, we analyze the Boltzmann operator in the geometric setting from the point of view of its Maxwell weak form, as a composition of elementary operations in the exponential manifold, namely tensor product, conditioning and marginalization, and we prove in a geometric way the basic facts, i.e., the H-theorem. We also illustrate the robustness of our method by discussing, besides the Kullback–Leibler divergence, the Hyvärinen divergence. This requires us to generalize our approach to Orlicz–Sobolev spaces to include derivatives.
(Entropy 2015, 17(6), 4323–4363; Article; doi: 10.3390/e17064323; published 19 June 2015; by Bertrand Lods, Giovanni Pistone.)
http://mdpi.com/1099-4300/17/6/4293
Concurrence provides an effective approach to quantifying entanglement, which is quite important in quantum information processing applications. In this paper, we mainly review some direct concurrence measurement protocols for two-qubit optical or atomic systems. We first introduce the concept of concurrence for a two-qubit system. Second, we explain the approaches to concurrence measurement in both linear and nonlinear optical systems. Third, we introduce some protocols for measuring the concurrence of atomic entanglement systems.
(Entropy 2015, 17(6), 4293–4322; Review; doi: 10.3390/e17064293; published 19 June 2015; by Lan Zhou, Yu-Bo Sheng.)
http://mdpi.com/1099-4300/17/6/4271
The clear need for accurate landslide susceptibility mapping has led to multiple approaches. Physical models are easily interpreted and have high predictive capabilities but rely on spatially explicit and accurate parameterization, which is commonly not possible. Statistical methods can include other factors influencing slope stability, such as distance to roads, but rely on good landslide inventories. The maximum entropy (MaxEnt) model has been widely and successfully used in species distribution mapping, because data on absence are often uncertain. Similarly, knowledge about the absence of landslides is often limited due to mapping scale or methodology. In this paper a hybrid approach is described that combines the physically-based landslide susceptibility model "Stability INdex MAPping" (SINMAP) with MaxEnt. This method is tested in a coastal watershed in Pacifica, CA, USA, with a well-documented landslide history including three inventories: 154 scars on 1941 imagery, 142 in 1975, and 253 in 1983. Results indicate that SINMAP alone overestimated susceptibility due to insufficient data on root cohesion. Models were compared using the SINMAP stability index (SI) or slope alone, and SI or slope in combination with other environmental factors: curvature, a 50-m trail buffer, vegetation, and geology. For 1941 and 1975, using slope alone was similar to using SI alone; however, in 1983 SI alone gave an area under the receiver operating characteristic curve (AUC) of 0.785, compared with 0.749 for slope alone. In maximum-entropy models created using all environmental factors, the stability index (SI) from SINMAP made the greatest contributions in all three years (1941: 48.1%; 1975: 35.3%; and 1983: 48%), with AUC of 0.795, 0.822, and 0.859, respectively; however, using slope instead of SI produced similar overall AUC values, likely due to the combined effect of plan curvature indicating focused hydrologic inputs and vegetation identifying the effect of root cohesion.
The combined approach, using either stability index or slope, highlights the importance of additional environmental variables in modeling landslide initiation.
(Entropy 2015, 17(6), 4271–4292; Article; doi: 10.3390/e17064271; published 19 June 2015; by Jerry Davis, Leonhard Blesius.)
http://mdpi.com/1099-4300/17/6/4255
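The AUC comparisons in the abstract above reduce to a rank statistic over landslide and non-landslide cells. As a hedged illustration (not the authors' code), the area under the ROC curve equals the Mann–Whitney probability that a randomly chosen positive cell scores higher than a randomly chosen negative one:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    Counts a full win when a positive outscores a negative,
    and half a win for ties.
    """
    wins = sum((sp > sn) + 0.5 * (sp == sn)
               for sp in scores_pos for sn in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# perfect separation -> 1.0; indistinguishable scores -> 0.5
print(auc([2, 3], [0, 1]), auc([1, 1], [1, 1]))
```

The same number is what toolkits report for susceptibility maps; this brute-force O(n·m) form is only practical for small samples, but makes the definition explicit.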
In this study, we investigate some new analytical solutions to the (1 + 1)-dimensional nonlinear Dispersive Modified Benjamin–Bona–Mahony equation and the (2 + 1)-dimensional cubic Klein–Gordon equation by using the generalized Kudryashov method. After presenting the general properties of the generalized Kudryashov method in Section 2, we apply this method to these problems to obtain some new analytical solutions, such as rational function solutions, exponential function solutions and hyperbolic function solutions, in Section 3. Afterwards, we draw two- and three-dimensional surfaces of the analytical solutions by using Wolfram Mathematica 9.
(Entropy 2015, 17(6), 4255–4270; Article; doi: 10.3390/e17064255; published 19 June 2015; by Haci Baskonus, Hasan Bulut.)
http://mdpi.com/1099-4300/17/6/4215
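The Kudryashov family of methods builds solutions from the function Q(η) = 1/(1 + eη), which satisfies the Riccati-type equation Q′ = Q² − Q. A quick numerical sanity check of this defining property (an illustration of the building block, not the paper's computation):

```python
import math

def Q(eta):
    """Kudryashov building-block function Q(eta) = 1 / (1 + exp(eta))."""
    return 1.0 / (1.0 + math.exp(eta))

def dQ_numeric(eta, h=1e-6):
    # central finite difference approximation of Q'(eta)
    return (Q(eta + h) - Q(eta - h)) / (2 * h)

# Q satisfies the Riccati equation Q' = Q**2 - Q, the ansatz variable
# in which polynomial solution forms are sought
residuals = [abs(dQ_numeric(e) - (Q(e) ** 2 - Q(e)))
             for e in (-2.0, 0.0, 1.5)]
print(max(residuals))  # should be tiny (finite-difference error only)
```

Because Q′ is again a polynomial in Q, any rational expression in Q can be differentiated symbolically, which is what lets the method convert a nonlinear PDE into algebraic conditions on the coefficients.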
In this paper, we study Amari’s natural gradient flows of real functions defined on the densities belonging to an exponential family on a finite sample space. Our main example is the minimization of the expected value of a real function defined on the sample space. In such a case, the natural gradient flow converges to densities with reduced support that belong to the border of the exponential family. In previous works, we suggested using the natural gradient evaluated in the mixture geometry. Here, we show that in some cases, the differential equation can be extended to a bigger domain in such a way that the densities at the border of the exponential family are actually internal points in the extended problem. The extension is based on the algebraic concept of an exponential variety. We study in full detail a toy example and obtain positive partial results in the important case of a binary sample space.
(Entropy 2015, 17(6), 4215–4254; Article; doi: 10.3390/e17064215; published 18 June 2015; by Luigi Malagò, Giovanni Pistone.)
http://mdpi.com/1099-4300/17/6/4202
Taking a time-delay fractional financial system as the study object, this paper proposes a single-controller method to eliminate the impact of model uncertainty and external disturbances on the system. The proposed method is based on Lyapunov stability theory, sliding-mode adaptive control and the theory of fractional-order linear systems. The controller drives the system state onto the sliding-mode surface so as to realize synchronization of fractional-order chaotic systems. Analysis results demonstrate that the proposed single integral sliding-mode control method can control the time-delay fractional power system to realize chaotic synchronization, with strong robustness to external disturbance. The controller is simple in structure. The proposed method is also validated by numerical simulation.
(Entropy 2015, 17(6), 4202–4214; Article; doi: 10.3390/e17064202; published 18 June 2015; by Haorui Liu, Juan Yang.)
http://mdpi.com/1099-4300/17/6/4173
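The core sliding-mode idea can be sketched on a first-order toy system with a bounded unknown disturbance (this is a minimal illustration, not the paper's fractional time-delay model): a switching control u = −k·sign(x), with gain k exceeding the disturbance bound, drives the state to a small neighborhood of the sliding surface s = x = 0 and keeps it there despite the disturbance.

```python
import math

def simulate(k=1.0, dt=0.001, steps=3000):
    """Euler simulation of x' = d(t) + u, with |d(t)| <= 0.2 unknown
    to the controller. The switching law u = -k*sign(x) dominates d."""
    x, t = 1.0, 0.0
    for _ in range(steps):
        d = 0.2 * math.sin(5 * t)          # bounded disturbance
        u = -k * (1 if x > 0 else -1)      # sliding-mode switching control
        x += (d + u) * dt
        t += dt
    return x

x_final = simulate()
print(x_final)  # small residual (chattering band around x = 0)
```

The residual oscillation ("chattering") is bounded by roughly (k + disturbance bound)·dt, which is why practical designs smooth the sign function or, as in the paper, use integral sliding surfaces.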
This paper deals with the estimation of transfer entropy based on the k-nearest neighbors (k-NN) method. To this end, we first investigate the estimation of Shannon entropy involving a rectangular neighboring region, as suggested in the existing literature, and develop two kinds of entropy estimators. Then, applying the widely-used error cancellation approach to these entropy estimators, we propose two novel transfer entropy estimators, implying no extra computational cost compared to existing similar k-NN algorithms. Experimental simulations allow comparing the new estimators with the transfer entropy estimators available in free toolboxes (corresponding to two different extensions of the Kraskov–Stögbauer–Grassberger (KSG) mutual information estimator to transfer entropy estimation) and demonstrate the effectiveness of the new estimators.
(Entropy 2015, 17(6), 4173–4201; Article; doi: 10.3390/e17064173; published 16 June 2015; by Jie Zhu, Jean-Jacques Bellanger, Huazhong Shu, Régine Le Bouquin Jeannès.)
http://mdpi.com/1099-4300/17/6/4155
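Transfer entropy estimators of this family are assembled from k-NN entropy estimates. As a hedged, one-dimensional sketch of that ingredient (the classical Kozachenko–Leonenko form, not the paper's rectangular-neighborhood variant):

```python
import math
import random

def digamma(n):
    """Digamma at a positive integer: psi(n) = -gamma + sum_{j<n} 1/j."""
    return -0.5772156649015329 + sum(1.0 / j for j in range(1, n))

def knn_entropy_1d(x, k=3):
    """Kozachenko-Leonenko k-NN estimate (in nats) of differential
    entropy from 1-D samples."""
    n = len(x)
    xs = sorted(x)
    log_eps = []
    for i, xi in enumerate(xs):
        # in sorted 1-D data the k nearest neighbours lie within
        # k positions on either side
        cand = sorted(abs(xs[j] - xi)
                      for j in range(max(0, i - k), min(n, i + k + 1))
                      if j != i)
        log_eps.append(math.log(2.0 * cand[k - 1]))
    return digamma(n) - digamma(k) + sum(log_eps) / n

random.seed(0)
sample = [random.random() for _ in range(2000)]
h = knn_entropy_1d(sample)  # true differential entropy of U(0,1) is 0
print(h)
```

The "error cancellation" mentioned in the abstract combines several such estimates (of joint and marginal entropies) so that their individual biases largely cancel in the transfer entropy difference.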
We propose a wavelet-based approach to measure the Shannon entropy in the context of spatial point patterns. The method uses the fully anisotropic Morlet wavelet to estimate the energy distribution at different directions and scales. The spatial heterogeneity and complexity of spatial point patterns are then analyzed using the multiscale anisotropic wavelet entropy. The efficacy of the approach is shown through a simulation study. Finally, an application to the catalog of earthquake events in Chile is considered.
(Entropy 2015, 17(6), 4155–4172; Article; doi: 10.3390/e17064155; published 16 June 2015; by Orietta Nicolis, Jorge Mateu.)
http://mdpi.com/1099-4300/17/6/4134
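Once the wavelet energies per scale and direction are available, the entropy step itself is just the Shannon entropy of the normalized energy distribution. A minimal sketch of that final step (it assumes the energies have already been computed from the anisotropic Morlet transform, which is not reproduced here):

```python
import math

def wavelet_entropy(energies):
    """Shannon entropy (nats) of a normalized wavelet energy distribution.

    `energies` holds the total energy per scale (or per scale/direction bin);
    a concentrated distribution gives low entropy, a flat one gives log(m).
    """
    total = sum(energies)
    p = [e / total for e in energies if e > 0]
    return -sum(pi * math.log(pi) for pi in p)

print(wavelet_entropy([1.0, 1.0, 1.0, 1.0]))  # flat across 4 scales: log 4
print(wavelet_entropy([5.0, 0.0, 0.0]))       # all energy at one scale: 0
```

Low entropy thus flags point patterns whose structure is concentrated at particular scales or orientations, which is the diagnostic the paper exploits.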
The inference of a general Bayesian network has been shown to be an NP-hard problem, even for approximate solutions. Although the k-dependence Bayesian (KDB) classifier can be constructed at arbitrary points (values of k) along the attribute dependence spectrum, it cannot identify changes of interdependencies when attributes take different values. Local KDB, which learns in the framework of KDB, is proposed in this study to describe the local dependencies implicated in each test instance. Based on the analysis of functional dependencies, substitution-elimination resolution, a new type of semi-naive Bayesian operation, is proposed to substitute or eliminate generalization to achieve accurate estimation of the conditional probability distribution while reducing computational complexity. The final classifier, averaged k-dependence Bayesian (AKDB) classifiers, averages the output of KDB and local KDB. Experimental results on the repository of machine learning databases from the University of California Irvine (UCI) showed that AKDB has significant advantages in zero-one loss and bias relative to naive Bayes (NB), tree augmented naive Bayes (TAN), averaged one-dependence estimators (AODE), and KDB. Moreover, KDB and local KDB show mutually complementary characteristics with respect to variance.
(Entropy 2015, 17(6), 4134–4154; Article; doi: 10.3390/e17064134; published 16 June 2015; by Limin Wang, Haoyu Zhao, Minghui Sun, Yue Ning.)
http://mdpi.com/1099-4300/17/6/4110
The main goal of this work is to determine a statistical non-equilibrium distribution function for the electrons and holes in semiconductor heterostructures in steady-state conditions. Based on the postulates of local equilibrium, as well as on the integral form of the weighted Gyarmati’s variational principle in the force representation, using an alternative method, we have derived general expressions which have the form of the Fermi–Dirac distribution function with four additional components. The physical interpretation of these components is carried out in this paper. Some numerical results of a non-equilibrium distribution function for an electron in HgCdTe structures are also presented.
(Entropy 2015, 17(6), 4110–4133; Article; doi: 10.3390/e17064110; published 15 June 2015; by Krzysztof Józwikowski, Alina Józwikowska, Michał Nietopiel.)
http://mdpi.com/1099-4300/17/6/4083
Suppose we allow a system to fall freely from infinity to a point near (but not beyond) the horizon of a black hole. We note that in a sense the information in the system is already lost to an observer at infinity. Once the system is too close to the horizon it does not have enough energy to send its information back because the information carrying quanta would get redshifted to a point where they get confused with Hawking radiation. If one attempts to turn the infalling system around and bring it back to infinity for observation then it will experience Unruh radiation from the required acceleration. This radiation can excite the bits in the system carrying the information, thus reducing the fidelity of this information. We find the radius where the information is essentially lost in this way, noting that this radius depends on the energy gap (and coupling) of the system. We look for some universality by using the highly degenerate BPS ground states of a quantum gravity theory (string theory) as our information storage device. For such systems one finds that the critical distance to the horizon set by Unruh radiation is the geometric mean of the black hole radius and the radius of the extremal hole with quantum numbers of the BPS bound state. Overall, the results suggest that information in gravity theories should be regarded not as a quantity contained in a system, but in terms of how much of this information is accessible to another observer.
(Entropy 2015, 17(6), 4083–4109; Article; doi: 10.3390/e17064083; published 12 June 2015; by Samir Mathur.)
http://mdpi.com/1099-4300/17/6/4064
Signal state preparation in quantum key distribution schemes can be realized using either an active or a passive source. Passive sources might be valuable in some scenarios; for instance, in those experimental setups operating at high transmission rates, since no externally driven element is required. Typical passive transmitters involve parametric down-conversion. More recently, it has been shown that phase-randomized coherent pulses also allow passive generation of decoy states and Bennett–Brassard 1984 (BB84) polarization signals, though the combination of both setups in a single passive source is cumbersome. In this paper, we present a complete passive transmitter that prepares decoy-state BB84 signals using coherent light. Our method employs sum-frequency generation together with linear optical components and classical photodetectors. In the asymptotic limit of an infinitely long experiment, the resulting secret key rate (per pulse) is comparable to the one delivered by an active decoy-state BB84 setup with an infinite number of decoy settings.
(Entropy 2015, 17(6), 4064–4082; Article; doi: 10.3390/e17064064; published 12 June 2015; by Marcos Curty, Marc Jofre, Valerio Pruneri, Morgan Mitchell.)
http://mdpi.com/1099-4300/17/6/4040
Stress-strength reliability problems arise frequently in applied statistics and related fields. Often they involve two independent and possibly small samples of measurements on strength and breakdown pressures (stress). The goal of the researcher is to use the measurements to obtain inference on reliability, which is the probability that strength will exceed stress. This paper addresses the case where reliability is expressed in terms of an integral which has no closed-form solution and where the number of observed values on stress and strength is small. We find that the Lagrange approach to estimating the constrained likelihood, necessary for inference, often performs poorly. We introduce a penalized likelihood method that appears to work well consistently. We use third-order likelihood methods to partially offset the issue of small samples. The proposed method is applied to draw inferences on reliability in stress-strength problems with independent exponentiated exponential distributions. Simulation studies are carried out to assess the accuracy of the proposed method and to compare it with some standard asymptotic methods.
(Entropy 2015, 17(6), 4040–4063; Article; doi: 10.3390/e17064040; published 12 June 2015; by Barry Smith, Steven Wang, Augustine Wong, Xiaofeng Zhou.)
http://mdpi.com/1099-4300/17/6/4028
The different kinds of boundary conditions for standard and fractional diffusion and advection diffusion equations are analyzed. Near the interface between two phases there arises a transition region whose state differs from the state of the contacting media owing to the different material particle interaction conditions. Particular emphasis has been placed on the conditions of nonperfect diffusive contact for the time-fractional advection diffusion equation. When the reduced characteristics of the interfacial region are equal to zero, the conditions of perfect contact are obtained as a particular case.
(Entropy 2015, 17(6), 4028–4039; Article; doi: 10.3390/e17064028; published 12 June 2015; by Yuriy Povstenko.)
http://mdpi.com/1099-4300/17/6/3989
The main content of this review article is first to review the main inference tools using Bayes' rule, the maximum entropy principle (MEP), information theory, relative entropy and the Kullback–Leibler (KL) divergence, and Fisher information and its corresponding geometries. For each of these tools, the precise context of their use is described. The second part of the paper is focused on the ways these tools have been used in data, signal and image processing and in the inverse problems which arise in different physical sciences and engineering applications. A few examples of the applications are described: entropy in independent components analysis (ICA) and in blind source separation, Fisher information in data model selection, different maximum entropy-based methods in time series spectral estimation and in linear inverse problems and, finally, Bayesian inference for general inverse problems. Some original materials concerning the approximate Bayesian computation (ABC) and, in particular, the variational Bayesian approximation (VBA) methods are also presented. VBA is used to propose an alternative Bayesian computational tool to the classical Markov chain Monte Carlo (MCMC) methods. We will also see that VBA encompasses joint maximum a posteriori (MAP) estimation, as well as the different expectation-maximization (EM) algorithms, as particular cases.
(Entropy 2015, 17(6), 3989–4027; Article; doi: 10.3390/e17063989; published 12 June 2015; by Ali Mohammad-Djafari.)
http://mdpi.com/1099-4300/17/6/3963
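Several of the tools surveyed above (relative entropy, Bayes-rule updating, variational approximations) rest on the KL divergence. A small self-contained illustration for discrete distributions, including its characteristic asymmetry:

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) in nats for discrete
    distributions given as aligned probability lists.
    Terms with p_i = 0 contribute nothing (0 * log 0 := 0)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.5]
q = [0.9, 0.1]
d_pq = kl_divergence(p, q)
d_qp = kl_divergence(q, p)
print(d_pq, d_qp)  # positive, and generally unequal: KL is not a metric
```

The asymmetry is exactly what distinguishes the two variational families mentioned in the review: minimizing D(q || p) (as VBA does) and minimizing D(p || q) lead to different approximations.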
The paper proposes a new non-parametric density estimator from region-censored observations with application in the context of population studies, where standard maximum likelihood is affected by over-fitting and non-uniqueness problems. It is a maximum entropy estimator that satisfies a set of constraints imposing a close fit to the empirical distributions associated with the set of censoring regions. The degree of relaxation of the data-fit constraints is chosen such that the likelihood of the inferred model is maximal. In this manner, the estimator is able to overcome the singularity of the non-parametric maximum likelihood estimator and, at the same time, maintains a good fit to the observations. The behavior of the estimator is studied in a simulation, demonstrating its superior performance with respect to the non-parametric maximum likelihood and the importance of carefully choosing the degree of relaxation of the data-fit constraints. In particular, the predictive performance of the resulting estimator is better, which is important when the population analysis is done in the context of risk assessment. We also apply the estimator to real data in the context of the prevention of hyperbaric decompression sickness, where the available observations are formally equivalent to region-censored versions of the variables of interest, confirming that it is a superior alternative to non-parametric maximum likelihood in realistic situations.
(Entropy 2015, 17(6), 3963–3988; Article; doi: 10.3390/e17063963; published 11 June 2015; by Youssef Bennani, Luc Pronzato, Maria Rendas.)
http://mdpi.com/1099-4300/17/6/3947
To log in to a mobile social network service (SNS) server, users must enter their ID and password to get through the authentication process. At that time, if the user sets up the automatic login option on the app, a sort of security token is created on the server based on the user’s ID and password. This security token is called a credential. Because such credentials are convenient for users, they are utilized by most mobile SNS apps. However, the current state of credential management for the majority of Android SNS apps is very weak. This paper demonstrates the possibility of a credential cloning attack. Such attacks occur when an attacker extracts the credential from the victim’s smart device and inserts it into their own smart device. Then, without knowing the victim’s ID and password, the attacker can access the victim’s account. This type of attack gives access to various pieces of personal information without authorization. Thus, in this paper, we analyze the vulnerabilities of the main Android-based SNS apps to credential cloning attacks, and examine the potential leakage of personal information that may result. We then introduce effective countermeasures to resolve these problems.
(Entropy 2015, 17(6), 3947–3962; Article; doi: 10.3390/e17063947; published 10 June 2015; by Jongwon Choi, Haehyun Cho, Jeong Yi.)
http://mdpi.com/1099-4300/17/6/3913
Location-based services (LBSs) flood mobile phones nowadays, but their use poses an evident privacy risk. The locations accompanying the LBS queries can be exploited by the LBS provider to build the user profile of visited locations, which might disclose sensitive data, such as work or home locations. The classic concept of entropy is widely used to evaluate privacy in these scenarios, where the information is represented as a sequence of independent samples of categorized data. However, since the LBS queries might be sent very frequently, location profiles can be improved by adding temporal dependencies, thus becoming mobility profiles, where location samples are not independent anymore and might disclose the user’s mobility patterns. Since the time dimension is factored in, the classic entropy concept falls short of evaluating the real privacy level, which depends also on the time component. Therefore, we propose to extend the entropy-based privacy metric to the use of the entropy rate to evaluate mobility profiles. Then, two perturbative mechanisms are considered to preserve locations and mobility profiles under gradual utility constraints. We further use the proposed privacy metric and compare it to classic ones to evaluate both synthetic and real mobility profiles when the proposed perturbative methods are applied. The results prove the usefulness of the proposed metric for mobility profiles and the need for tailoring the perturbative methods to the features of mobility profiles in order to improve privacy without completely losing utility.
(Entropy 2015, 17(6), 3913–3946; Article; doi: 10.3390/e17063913; published 10 June 2015; by Alicia Rodriguez-Carrion, David Rebollo-Monedero, Jordi Forné, Celeste Campo, Carlos Garcia-Rubio, Javier Parra-Arnau, Sajal Das.)
http://mdpi.com/1099-4300/17/6/3898
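The gap between classic entropy and entropy rate is the quantity this metric exploits: a mobility model with strong temporal dependence has a much lower entropy rate than the entropy of its marginal location profile. A toy sketch with a hypothetical three-location first-order Markov profile (illustrative values, not the paper's data):

```python
import math

def entropy(p):
    """Shannon entropy in bits of a probability vector."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def entropy_rate(P, pi):
    """Entropy rate (bits/step) of a stationary Markov chain:
    sum_i pi_i * H(row_i of the transition matrix P)."""
    return sum(pi[i] * entropy(row) for i, row in enumerate(P))

# hypothetical mobility profile over {home, work, other}
P = [[0.8, 0.15, 0.05],
     [0.3, 0.6,  0.1],
     [0.4, 0.3,  0.3]]

# stationary distribution by power iteration (rows of P sum to 1)
pi = [1 / 3.0] * 3
for _ in range(200):
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

H_marginal = entropy(pi)       # classic entropy of the location profile
H_rate = entropy_rate(P, pi)   # entropy rate with temporal dependence
print(H_marginal, H_rate)      # the rate is strictly smaller here
```

An adversary who models the temporal dependence faces only H_rate bits of uncertainty per step, so the classic metric H_marginal overstates the privacy actually achieved.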
Based on geometric invariance properties, we derive an explicit prior distribution for the parameters of multivariate linear regression problems in the absence of further prior information. The problem is formulated as a rotationally-invariant distribution of \(L\)-dimensional hyperplanes in \(N\) dimensions, and the associated system of partial differential equations is solved. The derived prior distribution generalizes the already known special cases, e.g., a 2D plane in three dimensions.
(Entropy 2015, 17(6), 3898–3912; Article; doi: 10.3390/e17063898; published 10 June 2015; by Udo von Toussaint.)
http://mdpi.com/1099-4300/17/6/3877
An encryption scheme for colour images using a spatiotemporal chaotic system is proposed. Initially, we use the R, G and B components of a colour plain-image to form a matrix. Then the matrix is permutated by using zigzag path scrambling. The resultant matrix is then passed through a substitution process. Finally, the ciphered colour image is obtained from the confused matrix. Theoretical analysis and experimental results indicate that the proposed scheme is both secure and practical, which makes it suitable for encrypting colour images of any size.
(Entropy 2015, 17(6), 3877–3897; Article; doi: 10.3390/e17063877; published 9 June 2015; by Xing-Yuan Wang, Ying-Qian Zhang, Xue-Mei Bao.)
http://mdpi.com/1099-4300/17/6/3857
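The permutation stage can be illustrated with a stand-alone zigzag scramble (one of several possible zigzag paths; the paper's exact path and the chaotic substitution stage that follows are not reproduced here):

```python
def zigzag_indices(h, w):
    """Diagonal zigzag traversal order of an h x w matrix:
    anti-diagonals visited alternately in opposite directions."""
    order = []
    for s in range(h + w - 1):
        diag = [(i, s - i) for i in range(h) if 0 <= s - i < w]
        order.extend(diag if s % 2 == 0 else diag[::-1])
    return order

def zigzag_scramble(img):
    """Permute pixel positions by reading along the zigzag path and
    refilling the matrix row by row (values unchanged, positions moved)."""
    h, w = len(img), len(img[0])
    flat = [img[i][j] for i, j in zigzag_indices(h, w)]
    return [flat[r * w:(r + 1) * w] for r in range(h)]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
print(zigzag_scramble(img))
```

Because the scramble is a pure permutation it is trivially invertible with the same index list, which is what allows decryption; on its own it changes no pixel values, hence the need for the subsequent substitution (diffusion) stage.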
In this paper, a novel self-creating disk-cell-splitting (SCDCS) algorithm is proposed for training the radial wavelet neural network (RWNN) model. Combined with the least squares (LS) method, which determines the linear weight coefficients, SCDCS can create neurons adaptively on a disk according to the distribution of the input data and the learning goals. As a result, a disk map is made for the input data, and an RWNN model with proper architecture and parameters can be determined for the recognition task. The proposed SCDCS-LS based RWNN model is employed for the recognition of license plate characters. Compared to the classical radial-basis-function (RBF) network with K-means clustering and LS, the proposed model achieves better recognition performance even with fewer neurons.
(Entropy 2015, 17(6), 3857–3876; Article; doi: 10.3390/e17063857; published 9 June 2015; by Rong Cheng, Yanping Bai, Hongping Hu, Xiuhui Tan.)
http://mdpi.com/1099-4300/17/6/3838
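The LS step that fixes the linear output weights can be sketched for a plain Gaussian RBF layer (a simplified stand-in for the radial wavelet network; the centers, width and data below are illustrative, not the paper's):

```python
import math

def rbf_features(xs, centers, width):
    """Gaussian RBF activations for 1-D inputs at the given centers."""
    return [[math.exp(-(x - c) ** 2 / (2 * width ** 2)) for c in centers]
            for x in xs]

def ls_weights_2(Phi, y):
    """Closed-form least squares for two basis functions:
    solve the 2x2 normal equations Phi^T Phi w = Phi^T y."""
    a11 = sum(p[0] * p[0] for p in Phi)
    a12 = sum(p[0] * p[1] for p in Phi)
    a22 = sum(p[1] * p[1] for p in Phi)
    b1 = sum(p[0] * t for p, t in zip(Phi, y))
    b2 = sum(p[1] * t for p, t in zip(Phi, y))
    det = a11 * a22 - a12 * a12
    return [(b1 * a22 - b2 * a12) / det, (b2 * a11 - b1 * a12) / det]

xs = [-1.0, -0.5, 0.0, 0.5, 1.0, 1.5]
Phi = rbf_features(xs, centers=[0.0, 1.0], width=1.0)
y = [1.5 * p[0] - 0.5 * p[1] for p in Phi]  # targets from known weights
w = ls_weights_2(Phi, y)                    # LS recovers them exactly
print(w)
```

Once the hidden units (centers) are fixed, whether by K-means or by an adaptive scheme like SCDCS, the output layer is linear in its weights, so this single LS solve replaces iterative output-weight training.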
The Fisher information constitutes a natural measure for the sensitivity of a probability distribution with respect to a set of parameters. An implementation of the stationarity principle for synaptic learning in terms of the Fisher information results in a Hebbian self-limiting learning rule for synaptic plasticity. In the present work, we study the dependence of the solutions to this rule on the moments of the input probability distribution and find a preference for non-Gaussian directions, making it a suitable candidate for independent component analysis (ICA). We confirm in a numerical experiment that a neuron trained under these rules is able to find the independent components in the non-linear bars problem. The specific form of the plasticity rule depends on the transfer function used, becoming a simple cubic polynomial of the membrane potential for the case of the rescaled error function. The cubic learning rule is also an excellent approximation for other transfer functions, such as the standard sigmoidal, and can be used to show analytically that the proposed plasticity rules are selective for directions in the space of presynaptic neural activities characterized by a negative excess kurtosis.
(Entropy 2015, 17(6), 3838–3856; Article; doi: 10.3390/e17063838; published 9 June 2015; by Rodrigo Echeveste, Samuel Eckmann, Claudius Gros.)
http://mdpi.com/1099-4300/17/6/3806
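The selectivity claimed above hinges on the sign of the excess kurtosis of projections of the presynaptic activity. A small check of that quantity itself, contrasting a sub-Gaussian (uniform) and a super-Gaussian (Laplace) sample (illustrative, not the paper's experiment):

```python
import math
import random

def excess_kurtosis(x):
    """Sample excess kurtosis: m4 / var^2 - 3 (0 for a Gaussian)."""
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x) / n
    m4 = sum((v - m) ** 4 for v in x) / n
    return m4 / var ** 2 - 3.0

random.seed(1)
# uniform: platykurtic, true excess kurtosis -1.2
uniform = [random.uniform(-1, 1) for _ in range(20000)]
# Laplace (exponential with a random sign): leptokurtic, true value +3
laplace = [random.expovariate(1) * random.choice([-1, 1])
           for _ in range(20000)]
print(excess_kurtosis(uniform), excess_kurtosis(laplace))
```

A rule selective for negative excess kurtosis would thus lock onto directions distributed like the first sample and ignore those like the second.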
Transport Layer Security (TLS) and its predecessor, SSL, are important cryptographic protocol suites on the Internet. They both implement public key certificates and rely on a group of trusted certificate authorities (i.e., CAs) for peer authentication. Unfortunately, recent research reveals that, if any one of the pre-trusted CAs is compromised, fake certificates can be issued to intercept the corresponding SSL/TLS connections. This security vulnerability has catastrophic impacts on SSL/TLS-based HTTPS, which is the underlying protocol providing secure web services for e-commerce, e-mail, etc. To address this problem, we design an attribute dependency-based detection mechanism, called SSLight. SSLight can expose fake certificates by checking whether the certificates contain attribute dependencies that rarely occur in legitimate samples. We conduct extensive experiments to evaluate SSLight and successfully confirm that SSLight can detect the vast majority of fake certificates issued from any trusted CAs if they are compromised. As a real-world example, we also implement SSLight as a Firefox add-on and examine its capability of exposing existent fake certificates from DigiNotar and Comodo, both of which had a worldwide impact.
(Entropy 2015, 17(6), 3806–3837; Article; doi: 10.3390/e17063806; published 8 June 2015; by Xiaojing Gu, Xingsheng Gu.)
http://mdpi.com/1099-4300/17/6/3787
In this work, we show a general approach for inhomogeneous composite thermoelectric systems and, as an illustrative case, we consider a dual thermoelectric cooler. This composite cooler consists of two thermoelectric modules (TEMs) connected thermally in parallel and electrically in series. Each TEM has different thermoelectric (TE) properties, namely thermal conductance, electrical resistance and the Seebeck coefficient. The system is coupled by thermal conductances to heat reservoirs. The proposed approach consists of deriving the dimensionless thermoelectric properties for the whole system. Thus, we obtain an equivalent figure of merit whose impact and meaning are discussed. We make use of dimensionless equations to study the impact of thermal conductance matching on the cooling capacity and the coefficient of performance of the system. The equivalent thermoelectric properties derived with our formalism include the external conductances and all intrinsic thermoelectric properties of each component of the system. Our proposed approach permits us to change the thermoelectric parameters of the TEMs and the working conditions of the composite system. Furthermore, our analysis shows the effect of the number of thermocouples on the system. These considerations are very useful for the design of thermoelectric composite systems. We reproduce the qualitative behavior of a commercial composite TEM connected electrically in series.
(Entropy 2015, 17(6), 3787–3805; Article; doi: 10.3390/e17063787; published 8 June 2015; by Cuautli Flores-Niño, Miguel Olivares-Robles, Igor Loboda.)
http://mdpi.com/1099-4300/17/6/3766
As one of the most common types of graphical models, the Bayesian classifier has become an extremely popular approach to dealing with uncertainty and complexity. The scoring functions originally proposed and widely used for Bayesian networks are not appropriate for a Bayesian classifier, in which the class variable C is treated as distinguished. In this paper, we aim to clarify the working mechanism of Bayesian classifiers from the perspective of the chain rule of the joint probability distribution. By establishing a mapping between conditional probability distributions and mutual information, a new scoring function, Sum_MI, is derived and applied to evaluate the rationality of Bayesian classifiers. To achieve global optimization and high dependence representation, the proposed learning algorithm, the flexible K-dependence Bayesian (FKDB) classifier, applies greedy search to extract more information from the K-dependence network structure. Meanwhile, during the learning procedure, the optimal attribute order is determined dynamically rather than fixed in advance. In the experimental study, functional dependency analysis is used to improve model interpretability when the structure complexity is restricted.
Limin Wang and Haoyu Zhao. Entropy 2015, 17(6), 3766–3786; doi: 10.3390/e17063766. Published 2015-06-08.
http://mdpi.com/1099-4300/17/6/3752
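The mapping to mutual information can be illustrated with a plain empirical estimator of I(C; X) between the class variable and a single attribute. The paper's Sum_MI score aggregates such terms over the K-dependence structure; the sketch below, with hypothetical function names, shows only this basic ingredient.

```python
import math
from collections import Counter

def mutual_information(xs, cs):
    """Empirical mutual information I(C; X), in nats, from paired samples.

    Uses I = sum_{x,c} p(x,c) * log( p(x,c) / (p(x) p(c)) ) with
    plug-in (count-based) probability estimates.
    """
    n = len(xs)
    px, pc = Counter(xs), Counter(cs)
    pxc = Counter(zip(xs, cs))
    mi = 0.0
    for (x, c), nxc in pxc.items():
        p_joint = nxc / n
        # p(x,c) / (p(x) p(c)) = nxc * n / (nx * nc)
        mi += p_joint * math.log(nxc * n / (px[x] * pc[c]))
    return mi
```

For perfectly correlated binary samples the estimate is log 2; for independent samples it is 0.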
In this paper, a leader-following consensus algorithm, accompanied by compensation terms based on neighboring agents' delayed states, is constructed for second-order multi-agent systems with communication delay. Using frequency-domain analysis, delay-independent and delay-dependent consensus conditions are obtained under which the second-order agents converge asymptotically to the dynamical leader's states. Simulations illustrate the correctness of the results.
Cheng-Lin Liu and Fei Liu. Entropy 2015, 17(6), 3752–3765; doi: 10.3390/e17063752. Published 2015-06-08.
http://mdpi.com/1099-4300/17/6/3738
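As a minimal baseline for this setting, the following sketch simulates two double-integrator followers tracking a constant-velocity leader with a PD-style protocol and no communication delay; the delayed-state compensation that is the paper's contribution is omitted, and the gains and initial conditions are illustrative only.

```python
def simulate_consensus(steps=4000, dt=0.01, k1=1.0, k2=2.0):
    """Toy leader-following consensus: two second-order (double-integrator)
    followers track a constant-velocity leader via u_i = k1*(x_L - x_i) + k2*(v_L - v_i).
    Returns the final position and velocity tracking errors."""
    xl, vl = 0.0, 1.0                  # leader position and (constant) velocity
    x, v = [5.0, -3.0], [0.0, 0.0]     # follower states
    for _ in range(steps):
        for i in range(len(x)):
            u = k1 * (xl - x[i]) + k2 * (vl - v[i])  # PD-style tracking input
            x[i] += v[i] * dt          # forward-Euler integration
            v[i] += u * dt
        xl += vl * dt                  # leader moves at constant velocity
    return [xi - xl for xi in x], [vi - vl for vi in v]
```

With these gains the error dynamics are critically damped, so both tracking errors decay to zero.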
We develop ideas proposed by Van der Straeten to extend maximum entropy principles to Markov chains. We focus in particular on the convergence of such estimates in order to explain how our approach makes possible the estimation of transition probabilities when only short samples are available, which opens the way to applications to non-stationary processes. The current work complements an earlier communication by providing numerical details, as well as a full derivation of the multi-constraint two-state and three-state maximum entropy transition matrices.
Gregor Chliamovitch, Alexandre Dupuis and Bastien Chopard. Entropy 2015, 17(6), 3738–3751; doi: 10.3390/e17063738. Published 2015-06-08.
http://mdpi.com/1099-4300/17/6/3724
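To make the notion of a maximum entropy transition matrix concrete, the sketch below grid-searches the two-state transition probabilities a = P(0→1), b = P(1→0) that maximize the chain's entropy rate under a single linear constraint on the persistence 1 − a − b (the second eigenvalue of the transition matrix). The paper derives such matrices analytically under multiple constraints; this numerical toy, with hypothetical parameter names, only illustrates the principle.

```python
import math

def entropy_rate(a, b):
    """Entropy rate (nats) of a two-state Markov chain with
    transition probabilities a = P(0->1), b = P(1->0), both in (0, 1)."""
    pi0 = b / (a + b)              # stationary distribution
    pi1 = a / (a + b)
    def h(p):                      # binary entropy in nats
        return -(p * math.log(p) + (1 - p) * math.log(1 - p))
    return pi0 * h(a) + pi1 * h(b)

def max_entropy_chain(persistence, grid=400):
    """Grid search for (a, b) maximizing the entropy rate subject to the
    constraint 1 - a - b = persistence. Returns (best_rate, (a, b))."""
    best = (-1.0, None)
    for i in range(1, grid):
        a = i / grid
        b = 1.0 - persistence - a
        if 0.0 < b < 1.0:
            best = max(best, (entropy_rate(a, b), (a, b)))
    return best
```

With zero imposed persistence, the maximizer is the unbiased i.i.d. chain a = b = 1/2, whose entropy rate is log 2, as expected.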
Portfolio selection in the financial literature has essentially been analyzed under two central assumptions: full knowledge of the joint probability distribution of the returns of the securities that will comprise the target portfolio; and investors' preferences expressed through a utility function. In the real world, operators build portfolios under risk constraints which are expressed both by their clients and by regulators, and which bear on the maximal loss that may be generated over a given time period at a given confidence level (the so-called Value at Risk of the position). Interestingly, the finance literature rarely discusses seriously how much or how little is known, from a probabilistic standpoint, about the multi-dimensional density of the assets' returns. Our approach, in contrast, is to highlight these issues and then adopt throughout a framework of entropy maximization to represent real-world ignorance of the "true" probability distributions, both univariate and multivariate, of traded securities' returns. In this setting, we identify the optimal portfolio under a number of downside risk constraints. Two interesting results are exhibited: (i) the left-tail constraints are sufficiently powerful to override all other considerations in the conventional theory; (ii) the "barbell portfolio" (maximal certainty/low risk in one set of holdings, maximal uncertainty in another), which is quite familiar to traders, naturally emerges in our construction.
Donald Geman, Hélyette Geman and Nassim Taleb. Entropy 2015, 17(6), 3724–3737; doi: 10.3390/e17063724. Published 2015-06-05.
http://mdpi.com/1099-4300/17/6/3710
Various Shannon entropies for networks have been introduced. However, entropies for weighted graphs have been little investigated. Inspired by the work of Eagle et al., we introduce a concept of graph entropy for special weighted graphs. Furthermore, we use elementary methods to prove extremal properties for classes of weighted graphs, in particular for the weighting due to Bollobás and Erdős, which is also called the Randić weight. As a result, we derive statements on dendrimers that have proven useful for applications. Finally, some open problems are presented.
Zengqiang Chen, Matthias Dehmer, Frank Emmert-Streib and Yongtang Shi. Entropy 2015, 17(6), 3710–3723; doi: 10.3390/e17063710. Published 2015-06-05.
http://mdpi.com/1099-4300/17/6/3692
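A minimal version of such a graph entropy can be computed by normalizing Randić-type edge weights w(uv) = 1/√(d(u)·d(v)) into a probability distribution over edges and taking its Shannon entropy. This is a sketch of the general recipe, assuming one natural definition, not the paper's exact one.

```python
import math

def randic_edge_weights(edges):
    """Randić-type weight 1/sqrt(d(u)*d(v)) for each edge of a simple graph,
    where d(.) is the vertex degree. Edges are (u, v) pairs."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return [1.0 / math.sqrt(deg[u] * deg[v]) for u, v in edges]

def weighted_graph_entropy(edges):
    """Shannon entropy (nats) of the normalized edge-weight distribution."""
    w = randic_edge_weights(edges)
    total = sum(w)
    return -sum(wi / total * math.log(wi / total) for wi in w)
```

On a vertex-transitive graph such as the triangle all weights are equal, so the entropy attains its maximum log m for m edges.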
Since the Advanced Encryption Standard (AES) in stream modes, such as counter (CTR), output feedback (OFB) and cipher feedback (CFB), can meet most industrial requirements, the range of applications for dedicated stream ciphers is decreasing. Many attacks on stream ciphers for hardware applications exploit algebraic properties and side-channel information. Al-Hinai et al. presented an algebraic attack on a family of irregularly clock-controlled linear feedback shift register systems: the stop-and-go generator, the self-decimated generator and the alternating step generator. Other clock-controlled systems, such as the shrinking and cascade generators, are vulnerable to side-channel attacks. To overcome these threats, new clock-controlled systems were presented, e.g., the generalized alternating step generator, the cascade jump-controlled generator and the mutual clock-controlled generator. However, the algebraic attack can be applied directly to these new systems. In this paper, we propose a new clock-controlled generator, the switching generator, which resists both algebraic and side-channel attacks. This generator also preserves the security properties and efficiency of existing clock-controlled generators.
Jun Choi, Dukjae Moon, Seokhie Hong and Jaechul Sung. Entropy 2015, 17(6), 3692–3709; doi: 10.3390/e17063692. Published 2015-06-04.
http://mdpi.com/1099-4300/17/6/3679
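For readers unfamiliar with clock-controlled generators, the classic stop-and-go construction mentioned above can be sketched in a few lines: one LFSR's output bit decides whether a second LFSR is clocked, and the keystream is read from the second register. The register sizes, taps and seeds below are toy values for illustration only, not a secure parameterization and not the paper's proposed switching generator.

```python
def lfsr_step(state, taps, nbits):
    """One step of a Fibonacci LFSR over nbits bits.
    Returns (new_state, output_bit); feedback is the XOR of the tapped bits."""
    out = state & 1
    fb = 0
    for t in taps:
        fb ^= (state >> t) & 1
    return ((state >> 1) | (fb << (nbits - 1))), out

def stop_and_go(n, seed1=0b1011, seed2=0b0110):
    """Toy stop-and-go keystream generator: the control LFSR's output bit
    decides whether the data LFSR is clocked on this step."""
    s1, s2 = seed1, seed2
    stream = []
    for _ in range(n):
        s1, c = lfsr_step(s1, (0, 1), 4)   # control register
        if c:                              # irregular clocking of data register
            s2, _ = lfsr_step(s2, (0, 3), 4)
        stream.append(s2 & 1)              # keystream bit
    return stream
```

The irregular clocking is what the cited algebraic and side-channel attacks target: the data register's clock pattern leaks through timing/power, and stalled steps create exploitable algebraic relations.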
This paper develops a class of density regression models based on the proportional hazards family, namely the Gamma transformation proportional hazard (Gt-PH) model. Exact inference for the regression parameters and hazard ratio is derived. These estimators enjoy good properties, such as unbiasedness, which may not be shared by other inference methods such as maximum likelihood estimation (MLE). Generalised confidence intervals and hypothesis tests for the regression parameters are also provided. The method is easy to implement in practice. The regression method is also extended to Lasso-based variable selection.
Wei Dang and Keming Yu. Entropy 2015, 17(6), 3679–3691; doi: 10.3390/e17063679. Published 2015-06-04.
http://mdpi.com/1099-4300/17/6/3656
In this paper, we address Bayesian sensitivity issues when integrating experts' judgments with available historical data in a case study about strategies for the preventive maintenance of low-pressure cast iron pipelines in an urban gas distribution network. We are interested in replacement priorities, as determined by the failure rates of pipelines deployed under different conditions. We relax the assumptions made in previous papers about the prior distributions on the failure rates and study changes in replacement priorities under different choices of generalized moment-constrained classes of priors. We focus on the set of non-dominated actions and, among them, propose the least sensitive action as the optimal choice to rank different classes of pipelines, providing a sound approach to the sensitivity problem. Moreover, we are also interested in determining which classes have a failure rate exceeding a given acceptable value, considered as the threshold below which no replacement is needed. Graphical tools are introduced to help decision-makers determine whether pipelines are to be replaced and the corresponding priorities.
José Arias-Nicolás, Jacinto Martín, Fabrizio Ruggeri and Alfonso Suárez-Llorens. Entropy 2015, 17(6), 3656–3678; doi: 10.3390/e17063656. Published 2015-06-03.
http://mdpi.com/1099-4300/17/6/3645
The entropy production (inside the volume bounded by a photosphere) of main-sequence stars, subgiants, giants, and supergiants is calculated based on B–V photometry data. A non-linear inverse relationship of thermodynamic fluxes and forces, as well as an almost constant specific (per volume) entropy production of main-sequence stars (for 95% of stars, this quantity lies within 0.5 to 2.2 of the corresponding solar magnitude), is found. The obtained results are discussed from the perspective of known extreme principles related to entropy production.
Leonid Martyushev and Sergey Zubarev. Entropy 2015, 17(6), 3645–3655; doi: 10.3390/e17063645. Published 2015-06-02.
http://mdpi.com/1099-4300/17/6/3631
This paper studies consensus and \(H_\infty\) consensus problems for heterogeneous multi-agent systems composed of first-order and second-order integrator agents. We first rewrite the multi-agent systems into the corresponding reduced-order systems based on graph theory and a reduced-order transformation. Then, the linear matrix inequality approach is used to study the consensus of heterogeneous multi-agent systems with time-varying delays in directed networks. As a result, sufficient conditions for consensus and \(H_\infty\) consensus of heterogeneous multi-agent systems, in terms of linear matrix inequalities, are established for both fixed and switching topologies. Finally, numerical simulations are given to illustrate the effectiveness of the theoretical results.
Beibei Wang and Yuangong Sun. Entropy 2015, 17(6), 3631–3644; doi: 10.3390/e17063631. Published 2015-06-02.
http://mdpi.com/1099-4300/17/6/3621
Two novel schemes are proposed to probabilistically teleport an unknown two-level quantum state when the sender and the receiver each have only partial information about the quantum channel. This contrasts with previous schemes for probabilistic teleportation, in which either the sender or the receiver has complete information about the quantum channel. Theoretical analysis proves that these schemes are straightforward, efficient and cost-saving. The concrete realization procedures of our schemes are presented in detail, and the results show that our proposals extend the application range of probabilistic teleportation.
Desheng Liu, Zhiping Huang and Xiaojun Guo. Entropy 2015, 17(6), 3621–3630; doi: 10.3390/e17063621. Published 2015-06-02.
http://mdpi.com/1099-4300/17/6/3595
This paper addresses the problem of measuring complexity from embedded attractors as a way to characterize changes in the dynamical behavior of different types of systems with quasi-periodic behavior by observing their outputs. With the aim of measuring the stability of the trajectories of the attractor over time, this paper proposes three new estimations of entropy that are derived from a Markov model of the embedded attractor. The proposed estimators are compared with traditional nonparametric entropy measures, such as approximate entropy, sample entropy and fuzzy entropy, which only take into account the spatial dimension of the trajectory. The method proposes the use of an unsupervised algorithm to find the principal curve, which is considered as the "profile trajectory" that serves to fit the Markov model. The new entropy measures are evaluated using three synthetic experiments and three datasets of physiological signals. In terms of consistency and discrimination capabilities, the results show that the proposed measures perform better than the other entropy measures used for comparison purposes.
Julián Arias-Londoño and Juan Godino-Llorente. Entropy 2015, 17(6), 3595–3620; doi: 10.3390/e17063595. Published 2015-06-02.
http://mdpi.com/1099-4300/17/6/3581
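Of the traditional measures used for comparison, sample entropy is easy to state: it is the negative log-ratio of the number of matching template pairs of length m+1 to those of length m, within a tolerance r. A straightforward quadratic-time sketch, assuming an absolute Chebyshev tolerance rather than the usual fraction of the signal's standard deviation:

```python
import math

def _template_matches(x, m, r):
    """Count pairs of length-m templates whose Chebyshev distance is <= r."""
    n = len(x)
    count = 0
    for i in range(n - m + 1):
        for j in range(i + 1, n - m + 1):
            if max(abs(x[i + k] - x[j + k]) for k in range(m)) <= r:
                count += 1
    return count

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy: -log(A/B), where B counts length-m template matches
    and A counts length-(m+1) matches. Low values indicate regularity."""
    b = _template_matches(x, m, r)
    a = _template_matches(x, m + 1, r)
    return float("inf") if a == 0 or b == 0 else -math.log(a / b)
```

A perfectly periodic signal such as an alternating 0/1 sequence yields a small value, since almost every length-m match extends to a length-(m+1) match.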
We construct a model of Brownian motion in Minkowski space. There are two aspects of the problem. The first is to define a sequence of stopping times associated with the Brownian “kicks” or impulses. The second is to define the dynamics of the particle along geodesics in between the Brownian kicks. When these two aspects are taken together, the Central Limit Theorem (CLT) leads to temperature-dependent four-dimensional distributions defined on Minkowski space, for distances and 4-velocities. In particular, our processes are characterized by two independent time variables defined with respect to the laboratory frame: a discrete one corresponding to the stopping times when the impulses take place and a continuous one corresponding to the geodesic motion in between impulses. The subsequent distributions are solutions of a (covariant) pseudo-diffusion equation which involves derivatives with respect to both time variables, rather than solutions of the telegraph equation, which has a single time variable. This approach simplifies some of the known problems in this context.
Paul O'Hara and Lamberto Rondoni. Entropy 2015, 17(6), 3581–3594; doi: 10.3390/e17063581. Published 2015-06-01.
http://mdpi.com/1099-4300/17/6/3552
One of the major requirements of content-based image retrieval (CBIR) systems is to ensure meaningful image retrieval against query images. The performance of these systems is severely degraded when the image representation phase includes image content that does not contain the objects of interest. Segmentation of the images is considered a solution, but no technique can guarantee robust object extraction. Moreover, most image segmentation techniques are slow, and their results are not reliable. To overcome these problems, a bandelet transform-based image representation technique is presented in this paper, which reliably returns information about the major objects found in an image. For image retrieval, artificial neural networks (ANN) are applied, and the performance of the system is evaluated on three standard datasets used in the CBIR domain.
Rehan Ashraf, Khalid Bashir, Aun Irtaza and Muhammad Mahmood. Entropy 2015, 17(6), 3552–3580; doi: 10.3390/e17063552. Published 2015-05-29.
http://mdpi.com/1099-4300/17/6/3518
Virtually all modern imaging devices collect electromagnetic or acoustic waves and use the energy carried by these waves to determine pixel values to create what is basically an “energy” picture. However, waves also carry “information”, as quantified by some form of entropy, and this may also be used to produce an “information” image. Numerous published studies have demonstrated the advantages of entropy, or “information imaging”, over conventional methods. The most sensitive information measure appears to be the joint entropy of the collected wave and a reference signal. The sensitivity of repeated experimental observations of a slowly-changing quantity may be defined as the mean variation (i.e., observed change) divided by mean variance (i.e., noise). Wiener integration permits computation of the required mean values and variances as solutions to the heat equation, permitting estimation of their relative magnitudes. There always exists a reference, such that joint entropy has larger variation and smaller variance than the corresponding quantities for signal energy, matching observations of several studies. Moreover, a general prescription for finding an “optimal” reference for the joint entropy emerges, which also has been validated in several studies.
Michael Hughes, John McCarthy, Paul Bruillard, Jon Marsh and Samuel Wickline. Entropy 2015, 17(6), 3518–3551; doi: 10.3390/e17063518. Published 2015-05-25.
http://mdpi.com/1099-4300/17/5/3501
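The joint entropy of a collected wave and a reference signal can be estimated from a 2-D histogram of paired samples. The sketch below uses equal-width bins and natural logarithms; the binning scheme, bin count and function names are illustrative assumptions, not the estimator used in the cited studies.

```python
import math

def joint_entropy(sig, ref, bins=8):
    """Joint Shannon entropy (nats) of two equal-length sample sequences,
    estimated from a 2-D histogram with equal-width bins per axis."""
    def bin_index(v, lo, hi):
        if hi == lo:                       # constant signal: single bin
            return 0
        k = int((v - lo) / (hi - lo) * bins)
        return min(k, bins - 1)            # clamp the maximum into the top bin
    lo_s, hi_s = min(sig), max(sig)
    lo_r, hi_r = min(ref), max(ref)
    counts = {}
    for s, r in zip(sig, ref):
        key = (bin_index(s, lo_s, hi_s), bin_index(r, lo_r, hi_r))
        counts[key] = counts.get(key, 0) + 1
    n = len(sig)
    return -sum(c / n * math.log(c / n) for c in counts.values())
```

When the two signals are identical, the joint entropy collapses to the marginal entropy, since all mass lies on the histogram diagonal.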
Recently, a series of papers addressed the problem of decomposing the information of two random variables into shared, unique and synergistic information. Several measures have been proposed, although no consensus has yet been reached. Here, we compare these proposals with an older approach that defines synergistic information based on projections onto exponential families containing only up to k-th order interactions. We show that these measures are not compatible with a decomposition into unique, shared and synergistic information if one requires that all terms are always non-negative (local positivity). We illustrate the difference between the two measures for multivariate Gaussians.
Eckehard Olbrich, Nils Bertschinger and Johannes Rauh. Entropy 2015, 17(5), 3501–3517; doi: 10.3390/e17053501. Published 2015-05-22.