Computation: Latest open access articles published in Computation at http://www.mdpi.com/journal/computation
MDPI (en). Creative Commons Attribution (CC-BY). support@mdpi.com

Computation, Vol. 5, Pages 49: Challenges for Theory and Computation. http://www.mdpi.com/2079-3197/5/4/49
The routinely made assumptions for simulating solid materials are briefly summarized, since they need to be critically assessed when new aspects become important, such as excited states, finite temperature, time-dependence, etc. The significantly increased computer power, combined with improved experimental data, opens new areas for interdisciplinary research, for which new ideas and concepts are needed.


Challenges for Theory and Computation. Karlheinz Schwarz. doi: 10.3390/computation5040049. Computation, 2017-12-04, Volume 5, Issue 4, Review, Pages 49. http://www.mdpi.com/2079-3197/5/4/49

Computation, Vol. 5, Pages 48: A Holistic Scalable Implementation Approach of the Lattice Boltzmann Method for CPU/GPU Heterogeneous Clusters. http://www.mdpi.com/2079-3197/5/4/48
Heterogeneous clusters are a widely utilized class of supercomputers assembled from different types of computing devices, for instance CPUs and GPUs, providing huge computational potential. Programming them in a scalable way that exploits their maximal performance introduces numerous challenges such as optimizations for different computing devices, dealing with multiple levels of parallelism, the application of different programming models, work distribution, and hiding of communication with computation. We utilize the lattice Boltzmann method for fluid flow as a representative of a scientific computing application and develop a holistic implementation for large-scale CPU/GPU heterogeneous clusters. We review and combine a set of best practices and techniques ranging from optimizations for the particular computing devices to the orchestration of tens of thousands of CPU cores and thousands of GPUs. The result is an implementation that uses all the available computational resources for the lattice Boltzmann method operators. Our approach shows excellent scalability, making it future-proof for upcoming heterogeneous architectures on the exaFLOPS scale. Parallel efficiencies of more than 90% are achieved, leading to 2604.72 GLUPS utilizing 24,576 CPU cores and 2048 GPUs of the CPU/GPU heterogeneous cluster Piz Daint and computing more than 6.8 × 10⁹ lattice cells.

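The collide-and-stream structure at the core of such an implementation can be sketched compactly. Below is a minimal single-node NumPy sketch of one D2Q9 BGK lattice Boltzmann update, not the paper's optimized heterogeneous code; the grid size, relaxation time tau, and initial density bump are illustrative choices:

```python
import numpy as np

# D2Q9 lattice: discrete velocities and their weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    """Second-order equilibrium distribution for each of the 9 directions."""
    cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy
    usq = ux**2 + uy**2
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def step(f, tau):
    """One collide-and-stream update with periodic boundaries."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f = f + (equilibrium(rho, ux, uy) - f) / tau     # BGK collision
    for i in range(9):                               # streaming
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=1), c[i, 1], axis=0)
    return f

# Start from rest with a small density bump and let it relax
nx = ny = 32
rho0 = np.ones((ny, nx)); rho0[16, 16] += 0.01
f = equilibrium(rho0, np.zeros((ny, nx)), np.zeros((ny, nx)))
for _ in range(100):
    f = step(f, tau=0.8)
print(abs(f.sum() - rho0.sum()) < 1e-8)  # → True (mass is conserved)
```

A real large-scale implementation replaces the np.roll streaming with halo exchanges between subdomains and device-specific kernels, which is precisely the orchestration problem the paper addresses.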

A Holistic Scalable Implementation Approach of the Lattice Boltzmann Method for CPU/GPU Heterogeneous Clusters. Christoph Riesinger, Arash Bakhtiari, Martin Schreiber, Philipp Neumann, Hans-Joachim Bungartz. doi: 10.3390/computation5040048. Computation, 2017-11-30, Volume 5, Issue 4, Article, Pages 48. http://www.mdpi.com/2079-3197/5/4/48

Computation, Vol. 5, Pages 47: Nonlinear-Adaptive Mathematical System Identification. http://www.mdpi.com/2079-3197/5/4/47
By reversing paradigms that normally utilize mathematical models as the basis for nonlinear adaptive controllers, this article describes using the controller to serve as a novel computational approach for mathematical system identification. System identification usually begins with the dynamics, and then seeks to parameterize the mathematical model in an optimization relationship that produces estimates of the parameters that minimize a designated cost function. The proposed methodology uses a DC motor with a minimum-phase mathematical model controlled by a self-tuning regulator without model pole cancelation. The normal system identification process is briefly articulated by parameterizing the system for least-squares estimation that includes an allowance for exponential forgetting to deal with time-varying plants. Next, towards the proposed approach, the Diophantine equation is derived for an indirect self-tuner where feedforward and feedback controls are both parameterized in terms of the motor’s math model. As the controller seeks to nullify tracking errors, the assumed plant parameters are adapted and quickly converge on the correct parameters of the motor’s math model. Next, a more challenging non-minimum phase system is investigated, and the earlier implemented technique is modified utilizing a direct self-tuner with an increased pole excess. The nominal method experiences control chattering (an undesirable characteristic that could potentially damage the motor during testing), while the increased pole excess eliminates the control chattering, yet maintains effective mathematical system identification. This novel approach permits algorithms normally used for control to instead be used effectively for mathematical system identification.

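The least-squares estimation with exponential forgetting mentioned above can be sketched as the standard recursive update. This is a generic textbook RLS sketch, not the paper's DC-motor parameterization; the first-order plant and the forgetting factor lam = 0.98 are illustrative:

```python
import numpy as np

def rls_forgetting(phi, y, theta, P, lam=0.98):
    """One recursive-least-squares update with forgetting factor lam."""
    phi = phi.reshape(-1, 1)
    K = P @ phi / (lam + phi.T @ P @ phi)        # gain vector
    theta = theta + (K * (y - phi.T @ theta)).ravel()
    P = (P - K @ phi.T @ P) / lam                # covariance update
    return theta, P

# Identify a first-order plant y[k] = a*y[k-1] + b*u[k-1], true (a, b) = (0.9, 0.5)
rng = np.random.default_rng(0)
a_true, b_true = 0.9, 0.5
theta = np.zeros(2)                              # parameter estimates
P = 1000 * np.eye(2)                             # large initial covariance
y_prev = 0.0
for _ in range(200):
    u = rng.standard_normal()                    # persistently exciting input
    y = a_true * y_prev + b_true * u
    theta, P = rls_forgetting(np.array([y_prev, u]), y, theta, P)
    y_prev = y
print(np.allclose(theta, [a_true, b_true], atol=1e-4))  # → True
```

With lam < 1, old samples are discounted geometrically, which is what lets the estimates track time-varying plant parameters.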

Nonlinear-Adaptive Mathematical System Identification. Timothy Sands. doi: 10.3390/computation5040047. Computation, 2017-11-30, Volume 5, Issue 4, Article, Pages 47. http://www.mdpi.com/2079-3197/5/4/47

Computation, Vol. 5, Pages 46: Dynamic Data-Driven Modeling for Ex Vivo Data Analysis: Insights into Liver Transplantation and Pathobiology. http://www.mdpi.com/2079-3197/5/4/46
Extracorporeal organ perfusion, in which organs are preserved in an isolated, ex vivo environment over an extended time-span, is a concept that has led to the development of numerous alternative preservation protocols designed to better maintain organ viability prior to transplantation. These protocols offer researchers a novel opportunity to obtain extensive sampling of isolated organs, free from systemic influences. Data-driven computational modeling is a primary means of integrating the extensive and multivariate data obtained in this fashion. In this review, we focus on the application of dynamic data-driven computational modeling to liver pathophysiology and transplantation based on data obtained from ex vivo organ perfusion.


This paper is devoted to modelling tissue growth with a deformable cell model. Each cell is represented by a polygon with particles located at its vertices. Stretching, bending and pressure forces act on the particles and determine their displacement. Pressure-dependent cell proliferation is considered. Various patterns of growing tissue are observed. An application of the model to tissue regeneration is illustrated. Approximate analytical models of tissue growth are developed.
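The stretching contribution to the vertex forces can be sketched with Hookean springs along the polygon edges. This is a minimal illustration only; the bending and pressure forces, and the model's actual constants, are omitted:

```python
import numpy as np

def stretching_forces(verts, rest_len, k=1.0):
    """Hookean edge-spring forces on the vertices of a closed polygon.

    verts: (n, 2) vertex positions; rest_len: unstretched edge length.
    """
    n = len(verts)
    F = np.zeros_like(verts)
    for i in range(n):
        j = (i + 1) % n                       # next vertex, wrapping around
        d = verts[j] - verts[i]
        L = np.linalg.norm(d)
        f = k * (L - rest_len) * d / L        # spring force along the edge
        F[i] += f                             # equal and opposite on the
        F[j] -= f                             # two endpoints of the edge
    return F

# A square stretched to twice its rest edge length pulls its corners inward
square = np.array([[0., 0.], [2., 0.], [2., 2.], [0., 2.]])
F = stretching_forces(square, rest_len=1.0)
print(np.allclose(F.sum(axis=0), 0))   # → True (internal forces cancel)
print(F[0])                            # → [1. 1.] (corner pulled toward centre)
```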

Deformable Cell Model of Tissue Growth. Nikolai Bessonov, Vitaly Volpert. doi: 10.3390/computation5040045. Computation, 2017-10-30, Volume 5, Issue 4, Article, Pages 45. http://www.mdpi.com/2079-3197/5/4/45

Computation, Vol. 5, Pages 44: Multiresolution Modeling of Semidilute Polymer Solutions: Coarse-Graining Using Wavelet-Accelerated Monte Carlo. http://www.mdpi.com/2079-3197/5/4/44
We present a hierarchical coarse-graining framework for modeling semidilute polymer solutions, based on the wavelet-accelerated Monte Carlo (WAMC) method. This framework forms a hierarchy of resolutions to model polymers at length scales that cannot be reached via atomistic or even standard coarse-grained simulations. Previously, it was applied to simulations examining the structure of individual polymer chains in solution using up to four levels of coarse-graining (Ismail et al., J. Chem. Phys., 2005, 122, 234901 and Ismail et al., J. Chem. Phys., 2005, 122, 234902), recovering the correct scaling behavior in the coarse-grained representation. In the present work, we extend this method to the study of polymer solutions, deriving the bonded and non-bonded potentials between coarse-grained superatoms from the single-chain statistics. A universal scaling function is obtained, which does not require recalculation of the potentials as the scale of the system is changed. To model semidilute polymer solutions, we assume the intermolecular potential between the coarse-grained beads to be equal to the non-bonded potential, which is a reasonable approximation in the case of semidilute systems. Thus, a minimal input of microscopic data is required for simulating the systems at the mesoscopic scale. We show that coarse-grained polymer solutions can reproduce results obtained from the more detailed atomistic system without a significant loss of accuracy.

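The decimation step underlying such coarse-graining can be illustrated by mapping blocks of monomers onto superatoms. A plain block average is used here as a stand-in for the wavelet transform of the WAMC method; the chain length and block size are illustrative:

```python
import numpy as np

def coarse_grain(chain, block):
    """Map a polymer chain onto superatoms by block-averaging monomer positions.

    A simple stand-in for one level of the wavelet decimation used in WAMC.
    """
    n = (len(chain) // block) * block          # drop a ragged tail, if any
    return chain[:n].reshape(-1, block, chain.shape[1]).mean(axis=1)

# Random-walk chain of 64 monomers in 3D, coarse-grained by a factor of 4
rng = np.random.default_rng(1)
chain = np.cumsum(rng.standard_normal((64, 3)), axis=0)
cg = coarse_grain(chain, block=4)
print(cg.shape)                                # → (16, 3): 4 monomers per superatom
print(np.allclose(cg.mean(axis=0), chain.mean(axis=0)))  # → True (centre of mass kept)
```

Repeating this step on the superatom chain yields the hierarchy of resolutions the abstract describes; at each level, effective bonded and non-bonded potentials between superatoms must be rederived from the finer-level statistics.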

Multiresolution Modeling of Semidilute Polymer Solutions: Coarse-Graining Using Wavelet-Accelerated Monte Carlo. Animesh Agarwal, Brooks Rabideau, Ahmed Ismail. doi: 10.3390/computation5040044. Computation, 2017-09-28, Volume 5, Issue 4, Article, Pages 44. http://www.mdpi.com/2079-3197/5/4/44

Computation, Vol. 5, Pages 42: A Diagonally Updated Limited-Memory Quasi-Newton Method for the Weighted Density Approximation. http://www.mdpi.com/2079-3197/5/4/42
We propose a limited-memory quasi-Newton method using the bad Broyden update and apply it to the nonlinear equations that must be solved to determine the effective Fermi momentum in the weighted density approximation for the exchange energy density functional. This algorithm has advantages for nonlinear systems of equations with diagonally dominant Jacobians, because it is easy to generalize the method to allow for periodic updates of the diagonal of the Jacobian. Systematic tests of the method for atoms show that one can determine the effective Fermi momentum at thousands of points in less than fifteen iterations.

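The core iteration can be sketched as follows. This is a generic "bad" (second) Broyden solver applied to a small toy system, not the paper's weighted-density-approximation equations, and it omits the periodic diagonal updates and limited-memory storage that the paper adds:

```python
import numpy as np

def bad_broyden(F, x0, tol=1e-10, max_iter=50):
    """Solve F(x) = 0 with Broyden's 'bad' (second) method.

    H approximates the inverse Jacobian and is updated directly:
        H <- H + (s - H y) y^T / (y^T y),   s = dx, y = dF.
    """
    x = x0.astype(float)
    H = np.eye(len(x))                       # initial inverse-Jacobian guess
    f = F(x)
    for k in range(max_iter):
        if np.linalg.norm(f) < tol:
            return x, k
        s = -H @ f                           # quasi-Newton step
        x_new = x + s
        f_new = F(x_new)
        y = f_new - f
        H = H + np.outer(s - H @ y, y) / (y @ y)
        x, f = x_new, f_new
    return x, max_iter

# A small, mildly nonlinear system with a dominant diagonal: 2x + 0.1 sin(x) = b
def F(x):
    return 2*x + 0.1*np.sin(x) - np.arange(len(x))

x, iters = bad_broyden(F, np.zeros(5))
print(np.linalg.norm(F(x)) < 1e-10)          # → True
```

Because the update acts on the inverse Jacobian directly, no linear solve is needed per iteration, which is what makes limited-memory variants of this scheme cheap at thousands of grid points.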

A Diagonally Updated Limited-Memory Quasi-Newton Method for the Weighted Density Approximation. Matthew Chan, Rogelio Cuevas-Saavedra, Debajit Chakraborty, Paul Ayers. doi: 10.3390/computation5040042. Computation, 2017-09-26, Volume 5, Issue 4, Article, Pages 42. http://www.mdpi.com/2079-3197/5/4/42

Computation, Vol. 5, Pages 43: Self-Organizing Map for Characterizing Heterogeneous Nucleotide and Amino Acid Sequence Motifs. http://www.mdpi.com/2079-3197/5/4/43
A self-organizing map (SOM) is an artificial neural network algorithm that learns from training data consisting of objects expressed as vectors and performs non-hierarchical clustering, grouping input vectors into discretized clusters such that vectors assigned to the same cluster share similar numeric or alphanumeric features. SOM has been used widely in transcriptomics to identify co-expressed genes as candidates for co-regulated genes. I envision SOM to have great potential in characterizing heterogeneous sequence motifs, and aim to illustrate this potential by a parallel presentation of SOM with a set of numerical vectors and a set of equal-length sequence motifs. While there are numerous biological applications of SOM involving numerical vectors, few studies have used SOM for heterogeneous sequence motif characterization. This paper is intended to encourage (1) researchers to study SOM in this new domain and (2) computer programmers to develop user-friendly motif-characterization SOM tools for biologists.

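The SOM training loop for numeric vectors can be sketched in a few lines. Sequence motifs would first need a numeric encoding (e.g., one-hot vectors per position); the map size and the learning-rate and neighbourhood schedules below are illustrative choices:

```python
import numpy as np

def train_som(data, n_units=10, epochs=200, lr0=0.5, sigma0=3.0, seed=0):
    """Train a 1-D self-organizing map on numeric vectors.

    Each unit holds a weight vector; the best-matching unit (BMU) and its
    grid neighbours are pulled toward each presented input.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_units, data.shape[1]))
    grid = np.arange(n_units)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                  # decaying learning rate
        sigma = max(sigma0 * (1 - t / epochs), 0.5)  # shrinking neighbourhood
        for x in rng.permutation(data):
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))
            h = np.exp(-((grid - bmu) ** 2) / (2 * sigma ** 2))
            W += lr * h[:, None] * (x - W)
    return W

# Two well-separated clusters; trained units should sit close to the data
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(5, 0.1, (50, 2))])
W = train_som(data)
q = np.array([np.linalg.norm(W - x, axis=1).min() for x in data])
print(q.mean() < 0.5)   # → True (small quantization error)
```

Vectors mapped to the same BMU form one cluster; because neighbouring units receive correlated updates, nearby units on the grid end up representing similar inputs, which is what makes the map useful for visualizing heterogeneous motif families.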

Self-Organizing Map for Characterizing Heterogeneous Nucleotide and Amino Acid Sequence Motifs. Xuhua Xia. doi: 10.3390/computation5040043. Computation, 2017-09-26, Volume 5, Issue 4, Review, Pages 43. http://www.mdpi.com/2079-3197/5/4/43

Computation, Vol. 5, Pages 41: Modified Equation of Shock Wave Parameters. http://www.mdpi.com/2079-3197/5/3/41
Among the various blast load equations, the Kingery-Bulmash equation is applicable to both a free-air burst and a surface burst, and enables calculation of the parameters of a pressure-time history curve. On the other hand, this equation is quite complicated. This paper proposes a modified equation that may replace the conventional Kingery-Bulmash equation. The modified equation, constructed by curve-fitting the original equation, requires a briefer calculation process and a simpler expression than the original. It is also applicable to both types of bursts and has the same calculable scaled distance range as the conventional equation. The results obtained using the modified equation differ from those of the original equation by less than 1%.

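The curve-fitting idea can be illustrated by fitting a low-order polynomial in log10 of the scaled distance to tabulated data and checking the error. The values below are synthetic, not Kingery-Bulmash data; they only demonstrate the fit-and-verify workflow with a 1% target:

```python
import numpy as np

# Synthetic peak-overpressure table over a range of scaled distances
# (illustrative power law, NOT the Kingery-Bulmash polynomial)
Z = np.array([0.5, 1, 2, 4, 8, 16, 32])        # scaled distance, m/kg^(1/3)
P = 1000.0 / Z**1.5                            # synthetic overpressure, kPa

# Fit log10(P) as a low-order polynomial in log10(Z), as in the modified equation
coef = np.polyfit(np.log10(Z), np.log10(P), deg=2)
fit = 10 ** np.polyval(coef, np.log10(Z))

# Verify the fit reproduces the table within the paper's 1% target
rel_err = np.abs(fit / P - 1).max()
print(rel_err < 0.01)                          # → True
```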

Modified Equation of Shock Wave Parameters. DooJin Jeon, KiTae Kim, SangEul Han. doi: 10.3390/computation5030041. Computation, 2017-09-18, Volume 5, Issue 3, Article, Pages 41. http://www.mdpi.com/2079-3197/5/3/41

Computation, Vol. 5, Pages 40: Performance Comparison of Feed-Forward Neural Networks Trained with Different Learning Algorithms for Recommender Systems. http://www.mdpi.com/2079-3197/5/3/40
Accuracy improvement is among the primary research focuses in the area of recommender systems. Traditionally, recommender systems work on two sets of entities, Users and Items, to estimate a single rating that represents a user’s acceptance of an item. This technique was later extended to multi-criteria recommender systems that use an overall rating from multi-criteria ratings to estimate the degree of acceptance by users for items. The primary concern that is still open to the recommender systems community is to find suitable optimization algorithms that can explore the relationships between multiple ratings to compute an overall rating. One approach is to treat the overall rating as an aggregation of multiple criteria ratings. Given this assumption, this paper proposes using feed-forward neural networks to predict the overall rating. Five powerful training algorithms have been tested, and the results of their performance are analyzed and presented in this paper.

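The aggregation assumption can be sketched with a small feed-forward network trained by plain gradient descent (one of many possible training algorithms; the paper compares five). The network size, synthetic data, and fixed criteria weights below are illustrative:

```python
import numpy as np

def train_mlp(X, y, hidden=8, lr=0.1, epochs=3000, seed=0):
    """One-hidden-layer feed-forward network, full-batch gradient descent, MSE loss."""
    rng = np.random.default_rng(seed)
    W1 = rng.standard_normal((X.shape[1], hidden)) * 0.5
    b1 = np.zeros(hidden)
    W2 = rng.standard_normal((hidden, 1)) * 0.5
    b2 = 0.0
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)              # forward pass
        pred = h @ W2 + b2
        err = pred - y[:, None]               # backward pass (MSE gradients)
        gW2 = h.T @ err / len(X)
        gb2 = err.mean()
        dh = (err @ W2.T) * (1 - h**2)
        gW1 = X.T @ dh / len(X)
        gb1 = dh.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Xn: (np.tanh(Xn @ W1 + b1) @ W2 + b2).ravel()

# Synthetic data: overall rating as a fixed weighted average of 4 criteria ratings
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (500, 4))               # 4 criteria ratings per item
y = X @ np.array([0.4, 0.3, 0.2, 0.1])        # overall rating (the aggregation)
model = train_mlp(X, y)
rmse = np.sqrt(((model(X) - y) ** 2).mean())
print(rmse < 0.1)                             # the network recovers the aggregation
```

Swapping the inner update for other optimizers (momentum, conjugate gradient, Levenberg-Marquardt, etc.) is exactly the comparison axis the paper studies.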

Performance Comparison of Feed-Forward Neural Networks Trained with Different Learning Algorithms for Recommender Systems. Mohammed Hassan, Mohamed Hamada. doi: 10.3390/computation5030040. Computation, 2017-09-13, Volume 5, Issue 3, Article, Pages 40. http://www.mdpi.com/2079-3197/5/3/40

Computation, Vol. 5, Pages 39: Time-Dependent Density-Functional Theory and Excitons in Bulk and Two-Dimensional Semiconductors. http://www.mdpi.com/2079-3197/5/3/39
In this work, we summarize the recent progress made in constructing time-dependent density-functional theory (TDDFT) exchange-correlation (XC) kernels capable of describing excitonic effects in semiconductors and apply these kernels in two important cases: a “classic” bulk semiconductor, GaAs, with weakly-bound excitons and a novel two-dimensional material, MoS2, with very strongly-bound excitonic states. After a brief review of the standard many-body semiconductor Bloch and Bethe-Salpeter equations (SBE and BSE) and a combined TDDFT+BSE approach, we proceed with details of the proposed pure TDDFT XC kernels for excitons. We analyze the reasons for the successes and failures of these kernels in describing the excitons in bulk GaAs and monolayer MoS2, and conclude with a discussion of possible alternative kernels capable of accurately describing the bound electron-hole states in both bulk and two-dimensional materials.


Time-Dependent Density-Functional Theory and Excitons in Bulk and Two-Dimensional Semiconductors. Volodymyr Turkowski, Naseem Din, Talat Rahman. doi: 10.3390/computation5030039. Computation, 2017-08-25, Volume 5, Issue 3, Review, Pages 39. http://www.mdpi.com/2079-3197/5/3/39

Computation, Vol. 5, Pages 38: CFD-PBM Approach with Different Inlet Locations for the Gas-Liquid Flow in a Laboratory-Scale Bubble Column with Activated Sludge/Water. http://www.mdpi.com/2079-3197/5/3/38
A novel computational fluid dynamics-population balance model (CFD-PBM) for the simulation of gas mixing in activated sludge (i.e., an opaque non-Newtonian liquid) in a bubble column is developed and described to solve the problem of measuring the hydrodynamic behavior of opaque non-Newtonian liquid-gas two-phase flow. We study the effects of the inlet position and liquid-phase properties (water/activated sludge) on various characteristics, such as liquid flow field, gas hold-up, liquid dynamic viscosity, and volume-averaged bubble diameter. As the inlet position changed, two symmetric vortices gradually became a single main vortex in the flow field in the bubble column. In the simulations, when water was the liquid phase, the global gas hold-up was higher than when activated sludge was the liquid phase, and a flow field that was dynamic with time was observed in the bubble column. Additionally, when activated sludge was used as the liquid phase, no periodic velocity changes were found. When the inlet position was varied, the non-Newtonian liquid phase had different peak values and distributions of (dynamic) liquid viscosity in the bubble column, which were related to the gas hold-up. The high gas hold-up zone corresponded to the low dynamic viscosity zone. Finally, when activated sludge was the liquid phase, the volume-averaged bubble diameter was much larger than when water was the liquid phase.


CFD-PBM Approach with Different Inlet Locations for the Gas-Liquid Flow in a Laboratory-Scale Bubble Column with Activated Sludge/Water. Le Wang, Qiang Pan, Jie Chen, Shunsheng Yang. doi: 10.3390/computation5030038. Computation, 2017-08-14, Volume 5, Issue 3, Article, Pages 38. http://www.mdpi.com/2079-3197/5/3/38

Computation, Vol. 5, Pages 37: A Non-Isothermal Chemical Lattice Boltzmann Model Incorporating Thermal Reaction Kinetics and Enthalpy Changes. http://www.mdpi.com/2079-3197/5/3/37
The lattice Boltzmann method is an efficient computational fluid dynamics technique that can accurately model a broad range of complex systems. As well as single-phase fluids, it can simulate thermohydrodynamic systems and passive scalar advection. In recent years, it has also gained attention as a means of simulating chemical phenomena, as interest in self-organization processes has increased. This paper presents a widely-used and versatile lattice Boltzmann model that can simultaneously incorporate fluid dynamics, heat transfer, buoyancy-driven convection, passive scalar advection, chemical reactions and enthalpy changes. All of these effects interact in a physically accurate framework that is simple to code and readily parallelizable. As well as a complete description of the model equations, several example systems are presented in order to demonstrate the accuracy and versatility of the method. New simulations, which analyzed the effect of a reversible reaction on the transport properties of a convecting fluid, are also described in detail. This extra chemical degree of freedom was utilized by the system to augment its net heat flux. The numerical method outlined in this paper can be readily deployed for a vast range of complex flow problems, spanning a variety of scientific disciplines.


Stuart Bartlett. A Non-Isothermal Chemical Lattice Boltzmann Model Incorporating Thermal Reaction Kinetics and Enthalpy Changes. Computation 2017, 5(3), Article 37; doi: 10.3390/computation5030037. Published 2017-08-09. http://www.mdpi.com/2079-3197/5/3/37

Computation, Vol. 5, Pages 36: TFF (v.4.1): A Mathematica Notebook for the Calculation of One- and Two-Neutron Stripping and Pick-Up Nuclear Reactions
http://www.mdpi.com/2079-3197/5/3/36
The program TFF calculates stripping single-particle form factors for one-neutron transfer in prior representation with appropriate perturbative treatment of recoil. Coupled equations are then integrated along a semiclassical trajectory to obtain one- and two-neutron transfer amplitudes and probabilities within first- and second-order perturbation theory. Total and differential cross-sections are then calculated by folding with a transmission function (obtained from a phenomenological imaginary absorption potential). The program description, user instructions and examples are discussed.

Lorenzo Fortunato, Ilyas Inci, José-Antonio Lay, Andrea Vitturi. TFF (v.4.1): A Mathematica Notebook for the Calculation of One- and Two-Neutron Stripping and Pick-Up Nuclear Reactions. Computation 2017, 5(3), Article 36; doi: 10.3390/computation5030036. Published 2017-08-03. http://www.mdpi.com/2079-3197/5/3/36

Computation, Vol. 5, Pages 35: Using an Interactive Lattice Boltzmann Solver in Fluid Mechanics Instruction
http://www.mdpi.com/2079-3197/5/3/35
This article gives an overview of the diverse range of teaching applications that can be realized using an interactive lattice Boltzmann simulation tool in fluid mechanics instruction and outreach. In an inquiry-based learning framework, examples are given of learning scenarios that address instruction on scientific results, scientific methods or the scientific process at varying levels of student activity, from consuming to applying to researching. Interactive live demonstrations on portable hardware enable new and innovative teaching concepts for fluid mechanics, even for large audiences and in the early stages of university education. Moreover, selected examples successfully demonstrate that the integration of high-fidelity CFD methods into fluid mechanics teaching facilitates high-quality student research work within reach of the current state of the art in the respective field of research.

Mirjam Glessmer, Christian Janßen. Using an Interactive Lattice Boltzmann Solver in Fluid Mechanics Instruction. Computation 2017, 5(3), Article 35; doi: 10.3390/computation5030035. Published 2017-07-28. http://www.mdpi.com/2079-3197/5/3/35

Computation, Vol. 5, Pages 34: Tensor-Based Semantically-Aware Topic Clustering of Biomedical Documents
http://www.mdpi.com/2079-3197/5/3/34
Biomedicine is a pillar of the collective, scientific effort of human self-discovery, as well as a major source of humanistic data codified primarily in biomedical documents. Despite their rigid structure, maintaining and updating a considerably-sized collection of such documents is a task of overwhelming complexity that mandates efficient information retrieval through the integration of clustering schemes. The latter should work natively with inherently multidimensional data and higher-order interdependencies. Additionally, past experience indicates that clustering should be semantically enhanced. Tensor algebra is the key to extending the current term-document model to more dimensions. In this article, an alternative keyword-term-document strategy is proposed, whose algorithmic cornerstones are third-order tensors and MeSH ontological functions, based on the scientometric observation that keywords typically possess more expressive power than ordinary text terms. This strategy has been compared against a baseline using two different biomedical datasets: the TREC (Text REtrieval Conference) genomics benchmark and a large custom set of cognitive science articles from PubMed.

Georgios Drakopoulos, Andreas Kanavos, Ioannis Karydis, Spyros Sioutas, Aristidis G. Vrahatis. Tensor-Based Semantically-Aware Topic Clustering of Biomedical Documents. Computation 2017, 5(3), Article 34; doi: 10.3390/computation5030034. Published 2017-07-18. http://www.mdpi.com/2079-3197/5/3/34

Computation, Vol. 5, Pages 33: A Discrete Approach to Meshless Lagrangian Solid Modeling
http://www.mdpi.com/2079-3197/5/3/33
The author demonstrates a stable Lagrangian solid modeling method, tracking the interactions of solid mass particles rather than using a meshed grid. This numerical method avoids the problem of tensile instability often seen with smooth particle applied mechanics by having the solid particles apply stresses expected with Hooke’s law, as opposed to using a smoothing function for neighboring solid particles. This method has been tested successfully with a bar in tension, compression, and shear, as well as a disk compressed into a flat plate, and the numerical model consistently matched the analytical Hooke’s law as well as Hertz contact theory for all examples. The solid modeling numerical method was then built into a 2-D model of a pressure vessel, which was tested with liquid water particles under pressure and simulated with smoothed particle hydrodynamics. This simulation was stable, and demonstrated the feasibility of Lagrangian specification modeling for fluid–solid interactions.
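The core idea, neighboring solid particles exerting Hooke's-law restoring forces about an equilibrium spacing instead of SPH-style smoothed stresses, can be sketched with a one-dimensional particle chain. The stiffness, spacing, damping factor and time step below are arbitrary illustrative values, not the paper's parameters.

```python
import numpy as np

# Sketch of Hooke's-law particle interactions: a 1-D chain of solid particles
# connected by linear springs relaxes toward its natural spacing r0.
# k_spring, r0, dt, m and the damping factor are invented for illustration.
k_spring, r0, dt, m = 100.0, 1.0, 0.01, 1.0
pos = np.array([[0.0], [1.2], [2.1]])      # slightly stretched/compressed chain
vel = np.zeros_like(pos)

for step in range(1000):
    force = np.zeros_like(pos)
    for i in range(len(pos) - 1):          # nearest-neighbor spring forces
        stretch = (pos[i + 1] - pos[i]) - r0
        force[i] += k_spring * stretch     # equal and opposite pair forces
        force[i + 1] -= k_spring * stretch
    vel += dt * force / m                  # semi-implicit Euler update
    vel *= 0.98                            # light damping so the chain settles
    pos += dt * vel
```

After relaxation the inter-particle gaps return to `r0`, the discrete analogue of the stress-free state Hooke's law predicts.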

Matthew Marko. A Discrete Approach to Meshless Lagrangian Solid Modeling. Computation 2017, 5(3), Article 33; doi: 10.3390/computation5030033. Published 2017-07-17. http://www.mdpi.com/2079-3197/5/3/33

Computation, Vol. 5, Pages 32: Anomalous Diffusion within the Transcriptome as a Bio-Inspired Computing Framework for Resilience
http://www.mdpi.com/2079-3197/5/3/32
Much of biology-inspired computer science is based on the Central Dogma, as implemented with genetic algorithms or evolutionary computation. That 60-year-old biological principle based on the genome, transcriptome and proteome is becoming overshadowed by a new paradigm of complex ordered associations and connections between layers of biological entities, such as interactomes, metabolomes, etc. We define a new hierarchical concept, the “Connectosome”, and propose new avenues for computational data structures based on a conceptual framework called the “Grand Ensemble”, which contains the Central Dogma as a subset. Connectedness and communication within and between living or biology-inspired systems comprise ensembles from which a physical computing system can be conceived. In this framework, the delivery of messages is filtered by size and by a simple and rapid semantic analysis of their content. This work aims to initiate discussion on the Grand Ensemble in network biology as a representation of a Persistent Turing Machine. This framework, which adds interaction and persistency to the classic Turing-machine model, uses metrics based on resilience that have application to dynamic optimization problem solving in Genetic Programming.

William Seffens. Anomalous Diffusion within the Transcriptome as a Bio-Inspired Computing Framework for Resilience. Computation 2017, 5(3), Article 32; doi: 10.3390/computation5030032. Published 2017-07-04. http://www.mdpi.com/2079-3197/5/3/32

Computation, Vol. 5, Pages 31: Artificial Immune Classifier Based on ELLipsoidal Regions (AICELL) †
http://www.mdpi.com/2079-3197/5/2/31
Pattern classification is a central problem in machine learning, with a wide array of applications, and rule-based classifiers are one of the most prominent approaches. Among these classifiers, Incremental Rule Learning algorithms combine the advantages of classic Pittsburgh and Michigan approaches, while, on the other hand, classifiers using fuzzy membership functions often result in systems with fewer rules and better generalization ability. To discover an optimal set of rules, learning classifier systems have always relied on bio-inspired models, mainly genetic algorithms. In this paper we propose a classification algorithm based on an efficient bio-inspired approach, Artificial Immune Networks. The proposed algorithm encodes the patterns as antigens, and evolves a set of antibodies, representing fuzzy classification rules with ellipsoidal surfaces, to cover the problem space. The innate immune mechanisms of affinity maturation and diversity preservation are modified and adapted to the classification context, resulting in a classifier that combines the advantages of both incremental rule learning and fuzzy classifier systems. The algorithm is compared to a number of state-of-the-art rule-based classifiers, as well as Support Vector Machines (SVM), producing very satisfying results, particularly in problems with a large number of attributes and classes.
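A fuzzy rule with an ellipsoidal surface can be made concrete with a Mahalanobis-style distance. The center, shape matrix and fuzziness parameter below are invented for illustration; the paper evolves such rules with immune-network operators rather than fixing them by hand.

```python
import numpy as np

# Sketch of one fuzzy classification rule with an ellipsoidal surface:
# membership decays with a Mahalanobis-like distance from the rule's center.
# center, A and beta are illustrative values, not learned parameters.
center = np.array([1.0, 2.0])
A = np.array([[2.0, 0.3], [0.3, 1.0]])   # positive-definite shape matrix
A_inv = np.linalg.inv(A)

def membership(x, beta=1.0):
    """Fuzzy membership in (0, 1]: 1 at the center, decaying outside."""
    d2 = (x - center) @ A_inv @ (x - center)
    return np.exp(-beta * d2)

mu_center = membership(center)                        # pattern at the center
mu_far = membership(center + np.array([5.0, 5.0]))    # distant pattern
```

A classifier built from such rules assigns a pattern to the class of the rule with the highest membership, and an evolutionary loop adjusts `center` and `A` to cover the data.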

Computation, Vol. 5, Pages 29: Theoretical Prediction of Electronic Structures and Phonon Dispersion of Ce2XN2 (X = S, Se, and Te) Ternary

A systematic study of the structural, electronic, and vibrational properties of the new ternary dicerium selenide dinitride Ce2SeN2 and the predicted compounds Ce2SN2 and Ce2TeN2 is performed using first-principles calculations within the Perdew–Burke–Ernzerhof functional with Hubbard correction. Our calculated structural parameters agree well with the experimental measurements. We predict that all ternary dicerium chalcogenide nitrides are thermodynamically stable. The predicted elastic constants and related mechanical properties demonstrate their mechanical stability as well. Moreover, our results show that the Ce2XN2 compounds are insulating materials. Trends in the structural parameters, electronic structures, and phonon dispersion are discussed in terms of the characteristics of the Ce (4f) states.

Mohammed Benali Kanoun, Souraya Goumri-Said. Theoretical Prediction of Electronic Structures and Phonon Dispersion of Ce2XN2 (X = S, Se, and Te) Ternary. Computation 2017, 5(2), Article 29; doi: 10.3390/computation5020029. Published 2017-06-13. http://www.mdpi.com/2079-3197/5/2/29

Computation, Vol. 5, Pages 30: Levy-Lieb-Based Monte Carlo Study of the Dimensionality Behaviour of the Electronic Kinetic Functional
http://www.mdpi.com/2079-3197/5/2/30
We consider a gas of interacting electrons in the limit of nearly uniform density and treat the one dimensional (1D), two dimensional (2D) and three dimensional (3D) cases. We focus on the determination of the correlation part of the kinetic functional by employing a Monte Carlo sampling technique of electrons in space based on an analytic derivation via the Levy-Lieb constrained search principle. Of particular interest is the question of the behaviour of the functional as one passes from 1D to 3D; according to the basic principles of Density Functional Theory (DFT) the form of the universal functional should be independent of the dimensionality. However, in practice the straightforward use of current approximate functionals in different dimensions is problematic. Here, we show that going from the 3D to the 2D case the functional form is consistent (concave function) but in 1D becomes convex; such a drastic difference is peculiar to 1D electron systems, as it is for other quantities. Given the interesting behaviour of the functional, this study represents a basic first-principles approach to the problem and suggests further investigations using highly accurate (though expensive) many-electron computational techniques, such as Quantum Monte Carlo.

Seshaditya A., Luca Ghiringhelli, Luigi Delle Site. Levy-Lieb-Based Monte Carlo Study of the Dimensionality Behaviour of the Electronic Kinetic Functional. Computation 2017, 5(2), Article 30; doi: 10.3390/computation5020030. Published 2017-06-10. http://www.mdpi.com/2079-3197/5/2/30

Computation, Vol. 5, Pages 28: Geometric Derivation of the Stress Tensor of the Homogeneous Electron Gas
http://www.mdpi.com/2079-3197/5/2/28
The foundation of many approximations in time-dependent density functional theory (TDDFT) lies in the theory of the homogeneous electron gas. However, unlike in ground-state DFT, where the exchange-correlation potential of the homogeneous electron gas is known exactly via quantum Monte Carlo calculations, the time-dependent or frequency-dependent dynamical potential of the homogeneous electron gas has not been known exactly, due to the absence of a similar variational principle for excited states. In this work, we present a simple geometric derivation of the time-dependent dynamical exchange-correlation potential for the homogeneous system. With this derivation, the dynamical potential can be expressed in terms of the stress tensor, offering an alternative way to calculate the bulk and shear moduli, two key input quantities in TDDFT.

Jianmin Tao, Giovanni Vignale, Jian-Xin Zhu. Geometric Derivation of the Stress Tensor of the Homogeneous Electron Gas. Computation 2017, 5(2), Article 28; doi: 10.3390/computation5020028. Published 2017-06-08. http://www.mdpi.com/2079-3197/5/2/28

Computation, Vol. 5, Pages 27: Energetic Study of Clusters and Reaction Barrier Heights from Efficient Semilocal Density Functionals
http://www.mdpi.com/2079-3197/5/2/27
The accurate first-principles prediction of the energetic properties of molecules and clusters from efficient semilocal density functionals is of broad interest. Here we study the performance of a non-empirical Tao-Mo (TM) density functional on binding energies and excitation energies of titanium dioxide and water clusters, as well as reaction barrier heights. To make a comparison, a combination of the TM exchange part with the TPSS (Tao–Perdew–Staroverov–Scuseria) correlation functional—called TMTPSS—is also included in this study. Our calculations show that the best binding energies of titanium dioxide are predicted by PBE0 (Perdew–Burke–Ernzerhof hybrid functional), TM, and TMTPSS with nearly the same accuracy, while B3LYP (Becke’s three-parameter exchange part with Lee-Yang-Parr correlation), TPSS, and PBE (Perdew–Burke–Ernzerhof) yield larger mean absolute errors. For excitation energies of titanium and water clusters, PBE0 and B3LYP are the most accurate functionals, outperforming the semilocal functionals due to the nonlocality problem suffered by the latter. Nevertheless, TMTPSS and TM are still accurate semilocal methods, improving upon the commonly-used TPSS and PBE functionals. We also find that the best reaction barrier heights are predicted by PBE0 and B3LYP, thanks to the nonlocality incorporated into these two hybrid functionals, but TMTPSS and TM are obviously more accurate than SCAN (Strongly Constrained and Appropriately Normed), TPSS, and PBE, suggesting the good performance of TM and TMTPSS for physically different systems and properties.
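The functional comparison described here reduces to mean absolute errors (MAE) against reference energies. The sketch below shows the metric itself; the energy values and the functional names' attached numbers are invented placeholders, not data from the paper.

```python
import numpy as np

# MAE comparison sketch. The reference and predicted energies below are
# invented placeholder values, used only to illustrate the metric.
reference = np.array([1.10, 2.35, 0.87, 3.02])          # "exact" energies (eV)
predicted = {
    "TM":  np.array([1.05, 2.40, 0.90, 2.95]),          # hypothetical outputs
    "PBE": np.array([1.30, 2.10, 0.70, 3.30]),
}

# Mean absolute error per functional; smaller is better on this toy set.
mae = {name: float(np.mean(np.abs(vals - reference)))
       for name, vals in predicted.items()}
```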

Guocai Tian, Yuxiang Mo, Jianmin Tao. Energetic Study of Clusters and Reaction Barrier Heights from Efficient Semilocal Density Functionals. Computation 2017, 5(2), Article 27; doi: 10.3390/computation5020027. Published 2017-06-03. http://www.mdpi.com/2079-3197/5/2/27

Computation, Vol. 5, Pages 26: Deep Visual Attributes vs. Hand-Crafted Audio Features on Multidomain Speech Emotion Recognition
http://www.mdpi.com/2079-3197/5/2/26
Emotion recognition from speech may play a crucial role in many applications related to human–computer interaction or understanding the affective state of users in certain tasks, where other modalities such as video or physiological parameters are unavailable. In general, a human’s emotions may be recognized using several modalities such as analyzing facial expressions, speech, physiological parameters (e.g., electroencephalograms, electrocardiograms) etc. However, measuring these modalities may be difficult, obtrusive or require expensive hardware. In that context, speech may be the best alternative modality in many practical applications. In this work we present an approach that uses a Convolutional Neural Network (CNN) functioning as a visual feature extractor and trained using raw speech information. In contrast to traditional machine learning approaches, CNNs are responsible for identifying the important features of the input, thus making hand-crafted feature engineering optional in many tasks. In this paper, no extra features are required other than the spectrogram representations, and hand-crafted features were only extracted for validation purposes of our method. Moreover, it does not require any linguistic model and is not specific to any particular language. We compare the proposed approach using cross-language datasets and demonstrate that it is able to provide superior results vs. traditional ones that use hand-crafted features.
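The front end of such a pipeline, turning raw audio into a spectrogram "image" and convolving it with a filter, can be sketched without any deep learning framework. The frame length, hop size, synthetic signal and the hand-written 3x3 kernel below are all illustrative stand-ins; the paper trains a full CNN on real speech.

```python
import numpy as np

# Sketch of the visual front end: a log-magnitude spectrogram treated as an
# image, plus one hand-written 2-D convolution standing in for a learned
# CNN filter. n_fft, hop and the kernel are invented illustrative choices.
def log_spectrogram(signal, n_fft=256, hop=128):
    frames = [signal[i:i + n_fft] * np.hanning(n_fft)
              for i in range(0, len(signal) - n_fft, hop)]
    mag = np.abs(np.fft.rfft(frames, axis=1))
    return np.log1p(mag)                     # time x frequency "image"

sr = 8000
t = np.arange(sr) / sr
speech_like = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.randn(sr)
spec = log_spectrogram(speech_like)

kernel = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], float)  # toy edge filter
H, W = spec.shape
fmap = np.zeros((H - 2, W - 2))
for i in range(H - 2):                       # "valid" 2-D convolution
    for j in range(W - 2):
        fmap[i, j] = np.sum(spec[i:i + 3, j:j + 3] * kernel)
```

In the actual approach, stacks of learned filters like `kernel` replace hand-crafted audio descriptors, which is what makes the method language- and feature-agnostic.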

Michalis Papakostas, Evaggelos Spyrou, Theodoros Giannakopoulos, Giorgos Siantikos, Dimitrios Sgouropoulos, Phivos Mylonas, Fillia Makedon. Deep Visual Attributes vs. Hand-Crafted Audio Features on Multidomain Speech Emotion Recognition. Computation 2017, 5(2), Article 26; doi: 10.3390/computation5020026. Published 2017-06-01. http://www.mdpi.com/2079-3197/5/2/26

Computation, Vol. 5, Pages 25: Numerical Simulation of the Laminar Forced Convective Heat Transfer between Two Concentric Cylinders
http://www.mdpi.com/2079-3197/5/2/25
The dual reciprocity method (DRM) is a highly efficient numerical method of transforming domain integrals arising from the non-homogeneous term of the Poisson equation into equivalent boundary integrals. In this paper, the velocity and temperature fields of laminar forced heat convection in a concentric annular tube, with constant heat flux boundary conditions, have been studied using numerical simulations. The DRM has been used to solve the governing equation, which is expressed in the form of a Poisson equation. A test problem is employed to verify the DRM solutions with different boundary element discretizations and numbers of internal points. The results of the numerical simulations are discussed and compared with exact analytical solutions. Good agreement between the numerical results and exact solutions is evident, as the maximum relative errors are below 5–6%, and the R2-values are greater than 0.999 in all cases. These results confirm the effectiveness and accuracy of the proposed numerical model, which is based on the DRM.
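The validation metrics quoted here, maximum relative error and R2 against an exact solution, are easy to state precisely. The "exact" and "numerical" vectors below are synthetic stand-ins, not the paper's annular-tube data.

```python
import numpy as np

# Validation-metric sketch: maximum relative error and coefficient of
# determination (R^2) between a numerical and an exact solution.
# Both vectors are invented stand-ins with up to ~2% deviation.
exact = np.linspace(1.0, 2.0, 50)
numerical = exact * (1 + 0.02 * np.sin(np.arange(50)))

rel_err = np.abs(numerical - exact) / np.abs(exact)   # pointwise relative error
ss_res = np.sum((numerical - exact) ** 2)
ss_tot = np.sum((exact - exact.mean()) ** 2)
r2 = 1 - ss_res / ss_tot                              # R^2 goodness of fit
```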

Numerical Simulation of the Laminar Forced Convective Heat Transfer between Two Concentric Cylinders
Ioan Sarbu, Anton Iosif. doi: 10.3390/computation5020025. Computation 2017, 5(2), Article 25. Published 2017-05-13. http://www.mdpi.com/2079-3197/5/2/25

Computation, Vol. 5, Pages 24: Analyzing the Effect and Performance of Lossy Compression on Aeroacoustic Simulation of Gas Injector
http://www.mdpi.com/2079-3197/5/2/24

Computational fluid dynamics simulations involve large state data, leading to performance degradation due to data transfer times, while requiring large disk space. To alleviate the situation, an adaptive lossy compression algorithm has been developed, which is based on regions of interest. This algorithm uses prediction-based compression and exploits the temporal coherence between subsequent simulation frames. The difference between the actual value and the predicted value is adaptively quantized and encoded. The adaptation is in line with user requirements, which consist of the acceptable inaccuracy, the regions of interest, and the required compression throughput. The data compression algorithm was evaluated with simulation data obtained by the discontinuous Galerkin spectral element method. We analyzed the performance, compression ratio, and inaccuracy introduced by the lossy compression algorithm. The post-processing analysis shows high compression ratios with reasonable quantization errors.
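A minimal sketch of the prediction-plus-quantization idea described above (not the authors' actual codec): each value is predicted from the previous frame, exploiting temporal coherence, and the residual is uniformly quantized with a step derived from a user-supplied error bound, so the reconstruction error stays within that bound.

```python
def compress_frame(frame, prev_frame, max_abs_error):
    # Residual against the temporal prediction (previous frame),
    # uniformly quantized so the reconstruction error stays bounded.
    step = 2.0 * max_abs_error
    return [round((v - p) / step) for v, p in zip(frame, prev_frame)]

def decompress_frame(quantized, prev_frame, max_abs_error):
    step = 2.0 * max_abs_error
    return [p + q * step for q, p in zip(quantized, prev_frame)]

prev = [0.0, 1.0, 2.0, 3.0]   # previous frame (illustrative values)
cur = [0.05, 1.12, 1.97, 3.30]  # current frame
tol = 0.1                     # user-supplied acceptable inaccuracy
q = compress_frame(cur, prev, tol)
rec = decompress_frame(q, prev, tol)
```

The small integer residuals `q` are what an entropy coder would then encode. A production codec would predict from the *reconstructed* previous frame so that encoder and decoder stay in sync; the toy above predicts from the raw previous frame for brevity.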

Analyzing the Effect and Performance of Lossy Compression on Aeroacoustic Simulation of Gas Injector
Seyyed Najmabadi, Philipp Offenhäuser, Moritz Hamann, Guhathakurta Jajnabalkya, Fabian Hempert, Colin Glass, Sven Simon. doi: 10.3390/computation5020024. Computation 2017, 5(2), Article 24. Published 2017-05-12. http://www.mdpi.com/2079-3197/5/2/24

Computation, Vol. 5, Pages 23: Implicit Large Eddy Simulation of Flow in a Micro-Orifice with the Cumulant Lattice Boltzmann Method
http://www.mdpi.com/2079-3197/5/2/23

A detailed numerical study of turbulent flow through a micro-orifice is presented in this work. The flow becomes turbulent due to the orifice at the considered Reynolds numbers (∼10^4). The obtained flow rates are in good agreement with the experimental measurements. The discharge coefficient and the pressure loss are presented for two input pressures. The laminar stress and the generated turbulent stresses are investigated in detail, and the location of the vena contracta is quantitatively reproduced.
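The discharge coefficient reported above is, in the usual definition, the ratio of the measured flow rate to the ideal loss-free (Bernoulli) flow rate for the same pressure drop. The numbers below are illustrative assumptions, not values from the paper.

```python
import math

def discharge_coefficient(mass_flow, area, rho, dp):
    # Cd = actual mass flow / ideal mass flow, with the ideal value from
    # Bernoulli's equation: m_ideal = A * sqrt(2 * rho * dp).
    ideal = area * math.sqrt(2.0 * rho * dp)
    return mass_flow / ideal

# Illustrative micro-orifice numbers (assumed): 1 mm^2 orifice, water,
# 1 bar pressure drop, measured mass flow of 9.7 g/s.
cd = discharge_coefficient(mass_flow=9.7e-3, area=1.0e-6, rho=998.0, dp=1.0e5)
```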

Implicit Large Eddy Simulation of Flow in a Micro-Orifice with the Cumulant Lattice Boltzmann Method
Ehsan Kian Far, Martin Geier, Konstantin Kutscher, Manfred Krafczyk. doi: 10.3390/computation5020023. Computation 2017, 5(2), Article 23. Published 2017-05-05. http://www.mdpi.com/2079-3197/5/2/23

Computation, Vol. 5, Pages 22: Scatter Search Applied to the Inference of a Development Gene Network
http://www.mdpi.com/2079-3197/5/2/22

Efficient network inference is one of the challenges of current-day biology. Its application to the study of development has seen noteworthy success, yet a multicellular context, tissue growth, and cellular rearrangements impose additional computational costs and prohibit a wide application of current methods. Therefore, reducing computational cost and providing quick feedback at intermediate stages are desirable features for network inference. Here we propose a hybrid approach composed of two stages: exploration with scatter search and exploitation of intermediate solutions with low temperature simulated annealing. We test the approach on the well-understood process of early body plan development in flies, focusing on the gap gene network. We compare the hybrid approach to simulated annealing, a method of network inference with a proven track record. We find that scatter search performs well at exploring parameter space and that low temperature simulated annealing refines the intermediate results into excellent model fits. From this we conclude that for poorly-studied developmental systems, scatter search is a valuable tool for exploration and accelerates the elucidation of gene regulatory networks.
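The two-stage strategy can be sketched on a toy objective: scatter search maintains a reference set and combines pairs of solutions to explore the parameter space, after which low-temperature simulated annealing refines the best candidate. Everything below (the test function, set sizes, temperatures) is an illustrative assumption, not the paper's actual setup for the gap gene network.

```python
import math
import random

def sphere(x):
    # Toy objective standing in for the model-fitting cost.
    return sum(v * v for v in x)

def scatter_search(f, dim, n_iter=200, pop=20, seed=1):
    rng = random.Random(seed)
    refset = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    for _ in range(n_iter):
        a, b = rng.sample(refset, 2)
        w = rng.random()
        # Combine two reference solutions along the line between them.
        child = [w * x + (1 - w) * y for x, y in zip(a, b)]
        refset.sort(key=f)
        if f(child) < f(refset[-1]):
            refset[-1] = child  # replace the worst member
    return min(refset, key=f)

def low_temp_sa(f, x, temp=1e-3, step=0.1, n_iter=2000, seed=2):
    # Low-temperature SA: mostly greedy refinement with rare uphill moves.
    rng = random.Random(seed)
    best, fbest = list(x), f(x)
    cur, fcur = list(x), fbest
    for _ in range(n_iter):
        cand = [v + rng.gauss(0.0, step) for v in cur]
        fc = f(cand)
        if fc < fcur or rng.random() < math.exp(-(fc - fcur) / temp):
            cur, fcur = cand, fc
            if fc < fbest:
                best, fbest = cand, fc
    return best

rough = scatter_search(sphere, dim=3)      # stage 1: exploration
refined = low_temp_sa(sphere, rough)       # stage 2: exploitation
```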

Scatter Search Applied to the Inference of a Development Gene Network
Amir Abdol, Damjan Cicin-Sain, Jaap Kaandorp, Anton Crombach. doi: 10.3390/computation5020022. Computation 2017, 5(2), Article 22. Published 2017-05-04. http://www.mdpi.com/2079-3197/5/2/22

Computation, Vol. 5, Pages 21: An Information Technology Framework for the Development of an Embedded Computer System for the Remote and Non-Destructive Study of Sensitive Archaeology Sites
http://www.mdpi.com/2079-3197/5/2/21

The paper proposes an information technology framework for the development of an embedded remote system for non-destructive observation and study of sensitive archaeological sites. The overall concept and motivation are described. The general hardware layout and software configuration are presented. The paper concentrates on the implementation of the following information technology components: (a) a geographically unique identification scheme supporting a global key space for a key-value store; (b) a common method of octree modeling for spatial geometrical models of the archaeological artifacts, and abstract object representation in the global key space; (c) a broadcast of the archaeological information as an Extensible Markup Language (XML) stream over the Web for worldwide availability; and (d) a set of testing methods increasing the fault tolerance of the system. This framework can serve as a foundation for the development of a complete system for remote archaeological exploration of enclosed archaeological sites such as buried churches, tombs, and caves. An archaeological site is opened once upon discovery, the embedded computer system is installed inside on a robotic platform equipped with sensors, cameras, and actuators, and the intact site is sealed again. Archaeological research is then conducted on a multimedia data stream sent remotely from the system, conforming to the necessary standards for digital archaeology.
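Component (b), octree spatial models keyed into a global key space, can be illustrated with a Morton (Z-order) key that interleaves octree cell coordinates into a single integer. The site identifier and payload below are hypothetical placeholders for the paper's geographically unique identification scheme.

```python
def octree_key(x, y, z, depth):
    # Interleave the bits of integer cell coordinates into a single
    # Morton (Z-order) key, usable as part of a key-value store key.
    key = 0
    for level in range(depth):
        key |= ((x >> level) & 1) << (3 * level)
        key |= ((y >> level) & 1) << (3 * level + 1)
        key |= ((z >> level) & 1) << (3 * level + 2)
    return key

# Hypothetical global key space: (site identifier, octree cell key) -> value.
store = {}
store[("site-042", octree_key(3, 5, 1, depth=4))] = {"voxel": "occupied"}
```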

An Information Technology Framework for the Development of an Embedded Computer System for the Remote and Non-Destructive Study of Sensitive Archaeology Sites
Iliya Georgiev, Ivo Georgiev. doi: 10.3390/computation5020021. Computation 2017, 5(2), Article 21. Published 2017-04-05. http://www.mdpi.com/2079-3197/5/2/21

Computation, Vol. 5, Pages 20: Detecting Perturbed Subpathways towards Mouse Lung Regeneration Following H1N1 Influenza Infection
http://www.mdpi.com/2079-3197/5/2/20

Systems-level approaches have established that the future of predictive disease biomarkers will not be sketched by plain lists of genes, proteins, or other biological entities, but rather by integrated entities that consider all underlying component relationships. In this direction, early pathway-based approaches coupled expression data with whole-pathway interaction topologies, but it was the more recent approaches zooming into subpathways (local areas of the entire biological pathway) that provided more targeted and context-specific candidate disease biomarkers. Here, we explore the application potential of PerSubs, a graph-based algorithm that identifies differentially activated disease-specific subpathways. PerSubs is applicable to both microarray and RNA-Seq data and uses the Kyoto Encyclopedia of Genes and Genomes (KEGG) database as the reference for biological pathways. PerSubs operates in two stages: first, it identifies differentially expressed genes (or uses any list of disease-related genes); second, treating each gene in the list as a start point, it scans the surrounding pathway topology to build meaningful subpathway topologies. We apply PerSubs to investigate which pathways are perturbed during mouse lung regeneration following H1N1 influenza infection.
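The second stage, scanning the pathway topology around each seed gene, is in spirit a bounded graph traversal. The sketch below (an illustration, not the actual PerSubs scoring) collects every gene within a fixed number of interaction steps from a differentially expressed seed; the gene names are invented for the example.

```python
from collections import deque

def subpathway(adjacency, seed, radius=2):
    # Collect the local pathway topology around a seed gene by
    # breadth-first search up to a fixed radius.
    seen = {seed: 0}
    queue = deque([seed])
    while queue:
        node = queue.popleft()
        if seen[node] == radius:
            continue
        for neigh in adjacency.get(node, ()):
            if neigh not in seen:
                seen[neigh] = seen[node] + 1
                queue.append(neigh)
    return set(seen)

# Toy pathway graph; gene names are illustrative only.
pathway = {
    "geneA": ["geneB", "geneC"],
    "geneB": ["geneD"],
    "geneC": [],
    "geneD": ["geneE"],
    "geneE": [],
}
local = subpathway(pathway, "geneA", radius=2)
```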

We present and analyze the Esoteric Twist algorithm for the Lattice Boltzmann Method. Esoteric Twist is a thread safe in-place streaming method that combines streaming and collision and requires only a single data set. Compared to other in-place streaming techniques, Esoteric Twist minimizes the memory footprint and the memory traffic when indirect addressing is used. Esoteric Twist is particularly suitable for the implementation of the Lattice Boltzmann Method on Graphic Processing Units.
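The flavor of single-data-set streaming can be shown on a 1D two-direction toy lattice using the simpler "swap" technique, a relative of Esoteric Twist (not the algorithm itself): collision writes each post-collision value into the slot of the opposite direction, and one pass of neighbor swaps then completes the streaming of both directions in place, with no second array.

```python
def collide_and_swap(f, omega=1.0):
    # BGK-like relaxation toward an equal split, then write each
    # post-collision value into the slot of the OPPOSITE direction.
    for node in f:  # node = [right-moving slot, left-moving slot]
        right, left = node
        eq = 0.5 * (right + left)
        post_right = right + omega * (eq - right)
        post_left = left + omega * (eq - left)
        node[0], node[1] = post_left, post_right  # local swap

def stream_in_place(f):
    # One pass of neighbor swaps finishes the streaming of both
    # directions; every slot is touched exactly once (periodic domain).
    n = len(f)
    for i in range(n):
        j = (i + 1) % n
        f[i][1], f[j][0] = f[j][0], f[i][1]

# A unit mass pulse on an 8-node periodic lattice (illustrative setup).
f = [[1.0, 0.0] if i == 0 else [0.0, 0.0] for i in range(8)]
mass0 = sum(a + b for a, b in f)
for _ in range(10):
    collide_and_swap(f)
    stream_in_place(f)
mass1 = sum(a + b for a, b in f)
```

Setting `omega=0.0` makes the pair of passes a pure in-place streaming step, which is a quick way to convince oneself the index bookkeeping is right.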

Esoteric Twist: An Efficient in-Place Streaming Algorithmus for the Lattice Boltzmann Method on Massively Parallel Hardware
Martin Geier, Martin Schönherr. doi: 10.3390/computation5020019. Computation 2017, 5(2), Article 19. Published 2017-03-23. http://www.mdpi.com/2079-3197/5/2/19

Computation, Vol. 5, Pages 18: An Accurate Computational Tool for Performance Estimation of FSO Communication Links over Weak to Strong Atmospheric Turbulent Channels
http://www.mdpi.com/2079-3197/5/1/18

Terrestrial optical wireless communication links have attracted significant research and commercial interest worldwide over the last few years, because they offer very high and secure data-rate transmission with relatively low installation and operational costs, and without the need for licensing. However, since the propagation path of the information signal, i.e., the laser beam, is the atmosphere, their effectiveness is strongly affected by the atmospheric conditions in the specific area. Thus, system performance depends significantly on rain, fog, hail, atmospheric turbulence, etc. Due to the influence of these effects, such a communication system must be studied carefully, both theoretically and numerically, before installation. In this work, we present exact and accurate approximate mathematical expressions for the estimation of the average capacity and the outage probability performance metrics, as functions of the link’s parameters, the transmitted power, the attenuation due to fog, the ambient noise, and the atmospheric turbulence phenomenon. The latter causes the scintillation effect, which results in random and fast fluctuations of the irradiance at the receiver’s end. These fluctuations can be studied accurately with statistical methods. Thus, in this work, we use either the lognormal or the gamma–gamma distribution, for weak or moderate-to-strong turbulence conditions, respectively. Moreover, using the derived mathematical expressions, we design and present a computational tool for the estimation of these systems’ performance, taking into account the parameters of the link and the atmospheric conditions. Furthermore, in order to increase the accuracy of the presented tool, for the cases where the obtained analytical expressions are complex, the performance results are verified by numerical estimation of the appropriate integrals. Finally, using the derived mathematical expressions and the presented computational tool, we present the corresponding numerical results, using common parameter values for realistic terrestrial free-space optical communication systems.
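For the weak-turbulence case, the outage probability under the lognormal model is simply the lognormal CDF evaluated at the threshold irradiance. The sketch below assumes the common unit-mean normalization of the irradiance; the numeric values are illustrative, not the paper's link parameters.

```python
import math

def outage_probability_lognormal(i_th, mu, sigma):
    # P(I < i_th) when ln I ~ N(mu, sigma^2): the lognormal CDF.
    z = (math.log(i_th) - mu) / (sigma * math.sqrt(2.0))
    return 0.5 * (1.0 + math.erf(z))

# Unit-mean normalization often used for scintillation: mu = -sigma_l^2 / 2.
sigma_l = 0.3                 # weak-turbulence log-irradiance std (assumed)
mu = -0.5 * sigma_l ** 2
p = outage_probability_lognormal(0.5, mu, sigma_l)  # threshold at half mean
```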

An Accurate Computational Tool for Performance Estimation of FSO Communication Links over Weak to Strong Atmospheric Turbulent Channels
Theodore D. Katsilieris, George P. Latsas, Hector E. Nistazakis, George S. Tombras. doi: 10.3390/computation5010018. Computation 2017, 5(1), Article 18. Published 2017-03-18. http://www.mdpi.com/2079-3197/5/1/18

Computation, Vol. 5, Pages 17: Evaluation of Soil-Structure Interaction on the Seismic Response of Liquid Storage Tanks under Earthquake Ground Motions
http://www.mdpi.com/2079-3197/5/1/17

Soil-structure interaction (SSI) can affect the seismic response of structures. Since liquid storage tanks are vital structures that must continue operating under severe earthquakes, their seismic behavior should be studied. Accordingly, the seismic response of two types of steel liquid storage tanks (namely, broad and slender, with height-to-radius aspect ratios of 0.6 and 1.85) founded on half-space soil is scrutinized under different earthquake ground motions. For a better comparison, the six considered ground motions are classified, based on their pulse-like characteristics, into two groups: far-fault and near-fault ground motions. To model the liquid storage tanks, the simplified mass-spring model is used: the liquid is modeled as two lumped masses, known as sloshing and impulsive, and the interaction of fluid and structure is considered using two coupled springs and dashpots. The SSI effect is also considered using a coupled spring and dashpot. Additionally, four types of soil are used to cover a wide variety of soil properties. To this end, after deriving the equations of motion, MATLAB programming is employed to obtain the time-history responses. Results show that although the SSI effect leads to a decrease in the impulsive displacement, overturning moment, and normalized base shear, the sloshing (or convective) displacement is not affected by such effects due to its long period.
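The closing observation, that the convective (sloshing) response is insensitive to SSI because of its long period, follows directly from the mass-spring analogue: the sloshing mass hangs on a much softer spring than the impulsive mass. The masses and stiffnesses below are illustrative assumptions, not the studied tanks' properties.

```python
import math

def natural_period(mass, stiffness):
    # Undamped natural period of a single-degree-of-freedom oscillator.
    return 2.0 * math.pi * math.sqrt(mass / stiffness)

# Illustrative (assumed) values for the two lumped masses of the analogue:
# the convective spring is orders of magnitude softer than the impulsive one.
t_convective = natural_period(mass=2.0e5, stiffness=8.0e4)  # long period
t_impulsive = natural_period(mass=5.0e5, stiffness=2.0e8)   # short period
```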

Evaluation of Soil-Structure Interaction on the Seismic Response of Liquid Storage Tanks under Earthquake Ground Motions
Mostafa Farajian, Mohammad Khodakarami, Denise-Penelope Kontoni. doi: 10.3390/computation5010017. Computation 2017, 5(1), Article 17. Published 2017-03-12. http://www.mdpi.com/2079-3197/5/1/17

Computation, Vol. 5, Pages 16: A Hybrid Computation Model to Describe the Progression of Multiple Myeloma and Its Intra-Clonal Heterogeneity
http://www.mdpi.com/2079-3197/5/1/16

Multiple myeloma (MM) is a genetically complex hematological cancer that is characterized by proliferation of malignant plasma cells in the bone marrow. MM evolves from the clonal premalignant disorder monoclonal gammopathy of unknown significance (MGUS) by sequential genetic changes involving many different genes, resulting in dysregulated growth of multiple clones of plasma cells. The migration, survival, and proliferation of these clones require direct and indirect interactions with the non-hematopoietic cells of the bone marrow. We develop a hybrid discrete-continuous model of MM development from the MGUS stage. The discrete aspect of the model is observed at the cellular level: cells are represented as individual objects which move, interact, divide, and die by apoptosis. Each of these actions is regulated by intracellular and extracellular processes as described by continuous models. The hybrid model consists of the following submodels that have been simplified from the much more complex state of evolving MM: cell motion due to chemotaxis, intracellular regulation of plasma cells, extracellular regulation in the bone marrow, and acquisition of mutations upon cell division. By extending a previous, simpler model in which the extracellular matrix was considered to be uniformly distributed, the new hybrid model provides a more accurate description in which cytokines are produced by the marrow microenvironment and consumed by the myeloma cells. The complex multiple genetic changes in MM cells and the numerous cell-cell and cytokine-mediated interactions between myeloma cells and their marrow microenvironment are simplified in the model such that four related but evolving MM clones can be studied as they compete for dominance in the setting of intraclonal heterogeneity.
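A toy version of the discrete-continuous coupling described above (a sketch, far simpler than the authors' model): cells are discrete agents on a 1D grid, the cytokine is a continuous field produced by the microenvironment and consumed by the cells, and each step every cell moves one site toward higher cytokine, a crude chemotaxis rule. All parameter values are invented for illustration.

```python
def hybrid_step(cells, cytokine, production=0.05, consumption=0.1, dt=1.0):
    # Continuous part: cytokine produced everywhere by the
    # microenvironment, consumed at sites occupied by cells.
    n = len(cytokine)
    for i in range(n):
        consumers = cells.count(i)
        cytokine[i] += dt * (production - consumption * consumers * cytokine[i])
        cytokine[i] = max(cytokine[i], 0.0)
    # Discrete part: each cell moves one site toward higher cytokine.
    moved = []
    for pos in cells:
        left = cytokine[pos - 1] if pos > 0 else -1.0
        right = cytokine[pos + 1] if pos < n - 1 else -1.0
        best = max((left, pos - 1), (cytokine[pos], pos), (right, pos + 1))
        moved.append(best[1])
    return moved, cytokine

cells = [2, 2, 7]          # agent positions (illustrative)
field = [0.0] * 10         # cytokine field, initially empty
for _ in range(20):
    cells, field = hybrid_step(cells, field)
```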

A Hybrid Computation Model to Describe the Progression of Multiple Myeloma and Its Intra-Clonal Heterogeneity
Anass Bouchnita, Fatima-Ezzahra Belmaati, Rajae Aboulaich, Mark Koury, Vitaly Volpert. doi: 10.3390/computation5010016. Computation 2017, 5(1), Article 16. Published 2017-03-10. http://www.mdpi.com/2079-3197/5/1/16

Computation, Vol. 5, Pages 14: Simplification of Reaction Networks, Confluence and Elementary Modes
http://www.mdpi.com/2079-3197/5/1/14

Reaction networks can be simplified by eliminating linear intermediate species in partial steady states. In this paper, we study the question of whether this rewrite procedure is confluent, so that for any given reaction network with kinetic constraints, a unique normal form will be obtained independently of the elimination order. We first show that confluence fails for the elimination of intermediates even without kinetics, if “dependent reactions” introduced by the simplification are not removed. This leads us to revising the simplification algorithm into a variant of the double description method for computing elementary modes, so that it keeps track of kinetic information. Folklore results on elementary modes imply the confluence of the revised simplification algorithm with respect to the network structure, i.e., the structure of fully simplified networks is unique. We show, however, that the kinetic rates assigned to the reactions may not be unique, and provide a biological example where two different simplified networks can be obtained. Finally, we give a criterion on the structure of the initial network that is sufficient to guarantee the confluence of both the structure and the kinetic rates.
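The elimination step can be checked numerically on the simplest case: for A → I → B with mass-action rates k1 and k2, putting the fast intermediate I at quasi-steady state yields the reduced network A → B with rate k1. The sketch below (an illustration, not the paper's double description construction) integrates both networks with forward Euler and compares the product concentration.

```python
def simulate_full(k1, k2, a0=1.0, dt=1e-4, t_end=5.0):
    # Full network A -> I -> B with mass-action kinetics.
    a, i, b = a0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        da = -k1 * a
        di = k1 * a - k2 * i
        db = k2 * i
        a, i, b = a + dt * da, i + dt * di, b + dt * db
    return b

def simulate_reduced(k1, a0=1.0, dt=1e-4, t_end=5.0):
    # Reduced network A -> B after eliminating the fast intermediate I
    # at quasi-steady state (I ~ k1*A/k2, so B is produced at rate k1*A).
    a, b = a0, 0.0
    for _ in range(int(t_end / dt)):
        db = k1 * a
        a -= dt * db
        b += dt * db
    return b

# With k2 >> k1 the intermediate is fast and the reduction is accurate.
b_full = simulate_full(k1=1.0, k2=100.0)
b_red = simulate_reduced(k1=1.0)
```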

Simplification of Reaction Networks, Confluence and Elementary Modes
Guillaume Madelaine, Elisa Tonello, Cédric Lhoussaine, Joachim Niehren. doi: 10.3390/computation5010014. Computation 2017, 5(1), Article 14. Published 2017-03-10. http://www.mdpi.com/2079-3197/5/1/14

Computation, Vol. 5, Pages 15: Schrödinger Theory of Electrons in Electromagnetic Fields: New Perspectives
http://www.mdpi.com/2079-3197/5/1/15
The Schrödinger theory of electrons in an external electromagnetic field is described from the new perspective of the individual electron. The perspective is arrived at via the time-dependent “Quantal Newtonian” law (or differential virial theorem). (The time-independent law, a special case, provides a similar description of stationary-state theory). These laws are in terms of “classical” fields whose sources are quantal expectations of Hermitian operators taken with respect to the wave function. The laws reveal the following physics: (a) in addition to the external field, each electron experiences an internal field whose components are representative of a specific property of the system such as the correlations due to the Pauli exclusion principle and Coulomb repulsion, the electron density, kinetic effects, and an internal magnetic field component. The response of the electron is described by the current density field; (b) the scalar potential energy of an electron is the work done in a conservative field. It is thus path-independent. The conservative field is the sum of the internal and Lorentz fields. Hence, the potential is inherently related to the properties of the system, and its constituent property-related components known. As the sources of the fields are functionals of the wave function, so are the respective fields, and, therefore, the scalar potential is a known functional of the wave function; (c) as such, the system Hamiltonian is a known functional of the wave function. This reveals the intrinsic self-consistent nature of the Schrödinger equation, thereby providing a path for the determination of the exact wave functions and energies of the system; (d) with the Schrödinger equation written in self-consistent form, the Hamiltonian now admits via the Lorentz field a new term that explicitly involves the external magnetic field. 
The new understandings are explicated for the stationary state case by application to two quantum dots in a magnetostatic field, one in a ground state and the other in an excited state. For the time-dependent case, the evolution of the same states of the quantum dots in both a magnetostatic and a time-dependent electric field is described. In each case, the satisfaction of the corresponding “Quantal Newtonian” law is demonstrated.Computation, Vol. 5, Pages 15: Schrödinger Theory of Electrons in Electromagnetic Fields: New Perspectives

]]>Schrödinger Theory of Electrons in Electromagnetic Fields: New PerspectivesViraht SahniXiao-Yin Pandoi: 10.3390/computation5010015Computation2017-03-09Computation2017-03-0951Article1510.3390/computation5010015http://www.mdpi.com/2079-3197/5/1/15Computation, Vol. 5, Pages 13: Aerodynamic Performance of a NREL S809 Airfoil in an Air-Sand Particle Two-Phase Flowhttp://www.mdpi.com/2079-3197/5/1/13
This paper opens up a new perspective on the aerodynamic performance of a wind turbine airfoil. More specifically, the paper deals with a steady, incompressible two-phase flow, consisting of air and two different concentrations of sand particles, over an airfoil from the National Renewable Energy Laboratory, NREL S809. The numerical simulations were performed using turbulence models for aerodynamic operations with a commercial computational fluid dynamics (CFD) code. The computational results obtained for the aerodynamic performance of an S809 airfoil at various angles of attack operating at Reynolds numbers of Re = 1 × 10⁶ and Re = 2 × 10⁶ in a dry, dusty environment were compared with existing experimental data on air flow over an S809 airfoil from reliable sources. Notably, a structured mesh consisting of 80,000 cells had already been identified as the most appropriate for numerical simulations. Finally, it was concluded that sand concentration significantly affected the aerodynamic performance of the airfoil; there was an increase in the values of the predicted drag coefficients, as well as a decrease in the values of the predicted lift coefficients, caused by increasing concentrations of sand particles. The region around the airfoil was studied by using contours of static pressure and discrete phase model (DPM) concentration.Computation, Vol. 5, Pages 13: Aerodynamic Performance of a NREL S809 Airfoil in an Air-Sand Particle Two-Phase Flow

]]>Aerodynamic Performance of a NREL S809 Airfoil in an Air-Sand Particle Two-Phase FlowDimitra DouviDionissios MargarisAristeidis Davarisdoi: 10.3390/computation5010013Computation2017-02-28Computation2017-02-2851Article1310.3390/computation5010013http://www.mdpi.com/2079-3197/5/1/13Computation, Vol. 5, Pages 12: Numerical Modelling of Double-Steel Plate Composite Shear Wallshttp://www.mdpi.com/2079-3197/5/1/12
Double-steel plate concrete composite shear walls are being used for nuclear plants and high-rise buildings. They consist of thick concrete walls, exterior steel faceplates serving as reinforcement, and shear connectors, which guarantee the composite action between the two different materials. Several researchers have used the Finite Element Method to investigate the behaviour of double-steel plate concrete walls. The majority of them model every element explicitly, leading to a rather time-consuming solution, which cannot be easily used for design purposes. In the present paper, the main objective is the introduction of a three-dimensional finite element model, which can efficiently predict the overall performance of a double-steel plate concrete wall in terms of accuracy and time saving. At first, empirical formulations and design relations established in current design codes for shear connectors are evaluated. Then, a simplified finite element model is used to investigate the nonlinear response of composite walls. The developed model is validated using results from tests reported in the literature in terms of axial compression and monotonic, cyclic in-plane shear loading. Several finite element modelling issues related to potential convergence problems, loading strategies and computer efficiency are also discussed. The accuracy and simplicity of the proposed model make it suitable for further numerical studies on the shear connection behaviour at the steel-concrete interface.Computation, Vol. 5, Pages 12: Numerical Modelling of Double-Steel Plate Composite Shear Walls

]]>Numerical Modelling of Double-Steel Plate Composite Shear WallsMichaela ElmatzoglouAris Avdelasdoi: 10.3390/computation5010012Computation2017-02-22Computation2017-02-2251Article1210.3390/computation5010012http://www.mdpi.com/2079-3197/5/1/12Computation, Vol. 5, Pages 11: Multiscale CT-Based Computational Modeling of Alveolar Gas Exchange during Artificial Lung Ventilation, Cluster (Biot) and Periodic (Cheyne-Stokes) Breathings and Bronchial Asthma Attackhttp://www.mdpi.com/2079-3197/5/1/11
Airflow in the first four generations of the tracheobronchial tree was simulated by a 1D model of incompressible fluid flow through a network of elastic tubes, coupled with 0D models of lumped alveolar components, which aggregate parts of the alveolar volume and smaller airways. This model was extended with a convective transport model throughout the lung and alveolar components, combined with a model of oxygen and carbon dioxide transport between the alveolar volume and the averaged blood compartment during pathological respiratory conditions. The novel features of this work are: 1D reconstruction of the tracheobronchial tree structure on the basis of 3D segmentation of computed tomography (CT) data; 1D−0D coupling of the models of the 1D bronchial tubes and 0D alveolar components; and the alveolar gas exchange model. The results of our simulations include mechanical ventilation, breathing patterns of severely ill patients with cluster (Biot) and periodic (Cheyne-Stokes) respirations, and a bronchial asthma attack. The suitability of the proposed mathematical model was validated. Carbon dioxide elimination efficiency was analyzed in all these cases. In the future, these results might be integrated into research and practical studies aimed at designing cyberbiological systems for remote real-time monitoring, classification and prediction of breathing patterns and alveolar gas exchange for patients with breathing problems.Computation, Vol. 5, Pages 11: Multiscale CT-Based Computational Modeling of Alveolar Gas Exchange during Artificial Lung Ventilation, Cluster (Biot) and Periodic (Cheyne-Stokes) Breathings and Bronchial Asthma Attack

]]>Multiscale CT-Based Computational Modeling of Alveolar Gas Exchange during Artificial Lung Ventilation, Cluster (Biot) and Periodic (Cheyne-Stokes) Breathings and Bronchial Asthma AttackAndrey GolovSergey SimakovYan SoeRoman PryamonosovOspan MynbaevAlexander Kholodovdoi: 10.3390/computation5010011Computation2017-02-18Computation2017-02-1851Article1110.3390/computation5010011http://www.mdpi.com/2079-3197/5/1/11Computation, Vol. 5, Pages 10: Virtual Prototyping and Validation of Cpps within a New Software Frameworkhttp://www.mdpi.com/2079-3197/5/1/10
As a result of the growing demand for highly customized and individual products, companies need to enable flexible and intelligent manufacturing. Cyber-physical production systems (CPPS) will act autonomously in the future in an interlinked production and enable such flexibility. However, German mid-sized plant manufacturers rarely use virtual technologies for design and validation in order to design CPPS. The research project Virtual Commissioning with Smart Hybrid Prototyping (VIB-SHP) investigated the usage of virtual technologies for manufacturing systems and CPPS design. Aspects of asynchronously communicating, intelligent and autonomously acting production equipment in an immersive validation environment have been investigated. To enable manufacturing system designers to validate CPPS, a software framework for virtual prototyping has been developed. A mechatronic construction kit for production system design integrates discipline-specific models and manages them in a product lifecycle management (PLM) solution. With this construction kit, manufacturing designers are able to apply virtual technologies and to validate communication processes with the help of behavior models. The presented approach resolves the sequential design process for the development of mechanical, electrical, and software elements and ensures the consistency of these models. With the help of a bill of material (BOM)- and signal-based alignment of the discipline-specific models in an integrated mechatronic product model, the communication of the design status and changes is improved. The re-use of already-specified and -designed modules enables quick behavior modeling, code evaluation, as well as interaction with the virtualized assembly system in an immersive environment.Computation, Vol. 5, Pages 10: Virtual Prototyping and Validation of Cpps within a New Software Framework

]]>Virtual Prototyping and Validation of Cpps within a New Software FrameworkSebastian NeumeyerKonrad ExnerSimon KindHaygazun HaykaRainer Starkdoi: 10.3390/computation5010010Computation2017-02-18Computation2017-02-1851Article1010.3390/computation5010010http://www.mdpi.com/2079-3197/5/1/10Computation, Vol. 5, Pages 9: Excitons in Solids from Time-Dependent Density-Functional Theory: Assessing the Tamm-Dancoff Approximationhttp://www.mdpi.com/2079-3197/5/1/9
Excitonic effects in solids can be calculated using the Bethe-Salpeter equation (BSE) or the Casida equation of time-dependent density-functional theory (TDDFT). In both methods, the Tamm-Dancoff approximation (TDA), which decouples excitations and de-excitations, is widely used to reduce computational cost. Here, we study the effect of the TDA on exciton binding energies of solids obtained from the Casida equation using long-range-corrected (LRC) exchange-correlation kernels. We find that the TDA underestimates TDDFT-LRC exciton binding energies of semiconductors slightly, but those of insulators significantly (i.e., by more than 100%), and thus it is essential to solve the full Casida equation to describe strongly bound excitons. These findings are relevant in the ongoing search for accurate and efficient TDDFT approaches for excitons.Computation, Vol. 5, Pages 9: Excitons in Solids from Time-Dependent Density-Functional Theory: Assessing the Tamm-Dancoff Approximation

]]>Excitons in Solids from Time-Dependent Density-Functional Theory: Assessing the Tamm-Dancoff ApproximationYoung-Moo ByunCarsten Ullrichdoi: 10.3390/computation5010009Computation2017-01-29Computation2017-01-2951Article910.3390/computation5010009http://www.mdpi.com/2079-3197/5/1/9Computation, Vol. 5, Pages 8: Numerical and Computational Analysis of a New Vertical Axis Wind Turbine, Named KIONAShttp://www.mdpi.com/2079-3197/5/1/8
This paper concentrates on a new configuration for a wind turbine, named KIONAS. The main purpose is to determine the performance and aerodynamic behavior of KIONAS, a vertical axis wind turbine with a stator over the rotor, whose special feature is that it can consist of several stages. Notably, the stator is shaped in such a way that it increases the velocity of the air impacting the rotor blades. Moreover, the performance of each stage increases with the total number of stages. The effects of wind velocity, the number of inclined rotor blades, the rotor diameter, the stator’s shape and the number of stages on the performance of KIONAS were studied. A FORTRAN code was developed in order to predict the power in several cases by solving the equations of continuity and momentum. Subsequently, further knowledge of the flow field was obtained by using a commercial Computational Fluid Dynamics code. Based on the results, it can be concluded that higher wind velocities and a greater number of blades produce more power. Furthermore, higher performance was found for a stator with curved guide vanes and for a KIONAS configuration with more stages.Computation, Vol. 5, Pages 8: Numerical and Computational Analysis of a New Vertical Axis Wind Turbine, Named KIONAS

]]>Numerical and Computational Analysis of a New Vertical Axis Wind Turbine, Named KIONASEleni DouviDimitra DouviDionissios MargarisIoannis Drosisdoi: 10.3390/computation5010008Computation2017-01-11Computation2017-01-1151Article810.3390/computation5010008http://www.mdpi.com/2079-3197/5/1/8Computation, Vol. 5, Pages 7: Acknowledgement to Reviewers of Computation in 2016http://www.mdpi.com/2079-3197/5/1/7
The editors of Computation would like to express their sincere gratitude to the following reviewers for assessing manuscripts in 2016.[...]Computation, Vol. 5, Pages 7: Acknowledgement to Reviewers of Computation in 2016

]]>Acknowledgement to Reviewers of Computation in 2016 Computation Editorial Officedoi: 10.3390/computation5010007Computation2017-01-10Computation2017-01-1051Editorial710.3390/computation5010007http://www.mdpi.com/2079-3197/5/1/7Computation, Vol. 5, Pages 6: Towards a Multiscale Model of Acute HIV Infectionhttp://www.mdpi.com/2079-3197/5/1/6
Human Immunodeficiency Virus (HIV) infection of humans represents a complex biological system and a great challenge to public health. Novel approaches for the analysis and prediction of the infection dynamics based on a multi-scale integration of virus ontogeny and immune reactions are needed to deal with the system’s complexity. The aims of our study are: (1) to formulate a multi-scale mathematical model of HIV infection; (2) to implement the model computationally following a hybrid approach; and (3) to calibrate the model by estimating the parameter values enabling one to reproduce the “standard” observed dynamics of HIV infection in blood during the acute phase of primary infection. The modeling approach integrates the processes of infection spread and immune responses in Lymph Nodes (LN) with those observed in blood. The spatio-temporal population dynamics of T lymphocytes in LN in response to HIV infection is governed by equations linking an intracellular regulation of the lymphocyte fate by intercellular cytokine fields. We describe the balance of proliferation, differentiation and death at the single-cell level as a consequence of gene activation via multiple signaling pathways activated by IL-2, IFNa and FasL. Distinct activation thresholds are used in the model to relate different modes of cellular responses to the hierarchy of the relative levels of the cytokines. We specify a reference set of model parameter values for the fundamental processes in lymph nodes that ensures a reasonable agreement with viral load and CD4+ T cell dynamics in blood.Computation, Vol. 5, Pages 6: Towards a Multiscale Model of Acute HIV Infection

]]>Towards a Multiscale Model of Acute HIV InfectionAnass BouchnitaGennady BocharovAndreas MeyerhansVitaly Volpertdoi: 10.3390/computation5010006Computation2017-01-10Computation2017-01-1051Article610.3390/computation5010006http://www.mdpi.com/2079-3197/5/1/6Computation, Vol. 5, Pages 4: An SVM Framework for Malignant Melanoma Detection Based on Optimized HOG Featureshttp://www.mdpi.com/2079-3197/5/1/4
Early detection of skin cancer through improved techniques and innovative technologies has the greatest potential for significantly reducing both morbidity and mortality associated with this disease. In this paper, an effective framework of a CAD (Computer-Aided Diagnosis) system for melanoma skin cancer is developed, mainly by applying an SVM (Support Vector Machine) model to an optimized set of HOG (Histogram of Oriented Gradient)-based descriptors of skin lesions. Experimental results obtained by applying the presented methodology to a large, publicly accessible dataset of dermoscopy images demonstrate that the proposed framework is a strong contender among state-of-the-art alternatives, achieving high levels of sensitivity, specificity, and accuracy (98.21%, 96.43% and 97.32%, respectively) without sacrificing computational soundness.Computation, Vol. 5, Pages 4: An SVM Framework for Malignant Melanoma Detection Based on Optimized HOG Features

]]>An SVM Framework for Malignant Melanoma Detection Based on Optimized HOG FeaturesSamy Bakheetdoi: 10.3390/computation5010004Computation2017-01-01Computation2017-01-0151Article410.3390/computation5010004http://www.mdpi.com/2079-3197/5/1/4Computation, Vol. 5, Pages 5: First Principle Modelling of Materials and Processes in Dye-Sensitized Photoanodes for Solar Energy and Solar Fuelshttp://www.mdpi.com/2079-3197/5/1/5
In the context of solar energy exploitation, dye-sensitized solar cells and dye-sensitized photoelectrosynthetic cells offer the promise of low-cost sunlight conversion and storage, respectively. In this perspective, we discuss the main successes and limitations of modern computational methodologies, ranging from hybrid and long-range corrected density functionals to GW approaches and multi-reference perturbation theories, in describing the electronic and optical properties of isolated components and complex interfaces relevant to these devices. While computational modelling has had a crucial role in the development of dye-sensitized solar cell technology, the theoretical characterization of the interface structure and interfacial processes in water splitting devices is still in its infancy, especially concerning the electron and hole transfer phenomena. Quantitative analysis of interfacial charge separation and recombination reactions in multiple metal-oxide/dye/catalyst heterointerfaces thus undoubtedly represents the compelling challenge in the field of modern computational materials science.Computation, Vol. 5, Pages 5: First Principle Modelling of Materials and Processes in Dye-Sensitized Photoanodes for Solar Energy and Solar Fuels

]]>First Principle Modelling of Materials and Processes in Dye-Sensitized Photoanodes for Solar Energy and Solar FuelsMariachiara Pastoredoi: 10.3390/computation5010005Computation2017-01-01Computation2017-01-0151Review510.3390/computation5010005http://www.mdpi.com/2079-3197/5/1/5Computation, Vol. 5, Pages 3: Critical Issues in Modelling Lymph Node Physiologyhttp://www.mdpi.com/2079-3197/5/1/3
In this study, we discuss critical issues in modelling the structure and function of lymph nodes (LNs), with emphasis on how LN physiology is related to its multi-scale structural organization. In addition to macroscopic domains such as B-cell follicles and the T-cell zone, there are vascular networks which play a key role in the delivery of information to the inner parts of the LN, i.e., the conduit and blood microvascular networks. We propose object-oriented computational algorithms to model the 3D geometry of the fibroblastic reticular cell (FRC) network and the microvasculature. Assuming that a conduit cylinder is densely packed with collagen fibers, the computational flow study predicted that diffusion, rather than convective flow, should be the dominant mass-transport process. The geometry models are used to analyze the lymph flow properties through the conduit network in unperturbed and damaged states of the LN. The analysis predicts that elimination of 60%–90% of edges is required to stop the lymph flux. This result suggests a high degree of functional robustness of the network.
Computation, Vol. 5, Pages 3: Critical Issues in Modelling Lymph Node Physiology
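
The diffusion-versus-convection conclusion can be illustrated with a Péclet-number estimate, Pe = vL/D, which compares convective to diffusive transport. The sketch below uses made-up, order-of-magnitude values (not the paper's data) for a fiber-packed conduit:

```python
# Order-of-magnitude check: the Peclet number Pe = v*L/D compares convective
# to diffusive transport; Pe << 1 means diffusion dominates mass transport.
def peclet(velocity_m_s, length_m, diffusivity_m2_s):
    """Return the dimensionless Peclet number for the given scales."""
    return velocity_m_s * length_m / diffusivity_m2_s

# Illustrative values (hypothetical, not from the study): very slow flow in a
# ~1 um conduit, with a small-molecule diffusivity of ~1e-10 m^2/s.
pe = peclet(1e-6, 1e-6, 1e-10)
print(pe)  # Pe well below 1: a diffusion-dominated regime
```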

]]>Critical Issues in Modelling Lymph Node PhysiologyDmitry GrebennikovRaoul van LoonMario NovkovicLucas OnderRostislav SavinkovIgor SazonovRufina TretyakovaDaniel WatsonGennady Bocharovdoi: 10.3390/computation5010003Computation2016-12-24Computation2016-12-2451Article310.3390/computation5010003http://www.mdpi.com/2079-3197/5/1/3Computation, Vol. 5, Pages 2: Power Conversion Efficiency of Arylamine Organic Dyes for Dye-Sensitized Solar Cells (DSSCs) Explicit to Cobalt Electrolyte: Understanding the Structural Attributes Using a Direct QSPR Approachhttp://www.mdpi.com/2079-3197/5/1/2
The post-silicon solar cell era involves light-absorbing dyes for dye-sensitized solar cells (DSSCs). There is therefore great interest in the design of efficient organic dyes for DSSCs with high power conversion efficiency (PCE) to bypass some of the disadvantages of silicon-based solar cell technologies, such as high cost, heavy weight, limited silicon resources, and production methods that cause high environmental pollution. The DSSC has the unique feature of a distance-dependent electron transfer step, which depends on the relative position of the sensitized organic dye in the metal oxide composite system. In the present work, we developed quantitative structure-property relationship (QSPR) models to establish the quantitative relationship between the overall PCE and quantum chemical molecular descriptors. The descriptors were calculated with density functional theory (DFT) and time-dependent DFT (TD-DFT) methods as well as with the DRAGON software. This allows for understanding the basic electron transfer mechanism along with the structural attributes of arylamine organic dye sensitizers for DSSCs explicit to cobalt electrolyte. The identified properties and structural fragments are particularly valuable for guiding time-saving synthetic efforts toward efficient arylamine organic dyes with improved power conversion efficiency.
Computation, Vol. 5, Pages 2: Power Conversion Efficiency of Arylamine Organic Dyes for Dye-Sensitized Solar Cells (DSSCs) Explicit to Cobalt Electrolyte: Understanding the Structural Attributes Using a Direct QSPR Approach
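
In its simplest form, a QSPR model of this kind is a regression of the property (here PCE) on computed descriptors. The sketch below is a minimal least-squares illustration of that idea, not the authors' model; the descriptor names and all numerical values are invented for demonstration:

```python
import numpy as np

# Minimal QSPR-style regression: fit PCE as a linear combination of
# quantum-chemical descriptors via ordinary least squares.
# Columns: HOMO (eV), LUMO (eV), dipole moment (D) -- all values hypothetical.
X = np.array([
    [-5.1, -3.0, 7.2],
    [-5.3, -3.1, 6.8],
    [-5.0, -2.9, 8.0],
    [-5.4, -3.2, 6.5],
])
y = np.array([8.1, 7.4, 8.9, 6.9])  # hypothetical PCE values (%)

# Augment with an intercept column and solve the least-squares problem.
A = np.hstack([X, np.ones((X.shape[0], 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
print(np.round(pred, 2))
```

A real workflow would of course use many more dyes than descriptors and validate the model on held-out compounds.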

]]>Power Conversion Efficiency of Arylamine Organic Dyes for Dye-Sensitized Solar Cells (DSSCs) Explicit to Cobalt Electrolyte: Understanding the Structural Attributes Using a Direct QSPR ApproachSupratik KarJuganta RoyDanuta LeszczynskaJerzy Leszczynskidoi: 10.3390/computation5010002Computation2016-12-23Computation2016-12-2351Article210.3390/computation5010002http://www.mdpi.com/2079-3197/5/1/2Computation, Vol. 5, Pages 1: Application of the Recursive Finite Element Approach on 2D Periodic Structures under Harmonic Vibrationshttp://www.mdpi.com/2079-3197/5/1/1
The frequency response function is a quantitative measure used in structural analysis and engineering design; hence, it is targeted for accuracy. For a large structure, a high number of substructures, also called cells, must be considered, leading to long computation times. In this paper, the recursive method, a finite element method, is used for computing the frequency response function independently of the number of cells and at much lower time cost. The fundamental principle is eliminating the internal degrees of freedom that lie at the interface between a cell and its successor. The method is applied solely to free (no-load) nodes. Based on the boundary and interior degrees of freedom, the global dynamic stiffness matrix is computed by means of products and inverses, resulting in a matrix of the same dimension as that of a single cell. The recursive method is demonstrated on periodic structures (cranes and buildings) under harmonic vibrations. The method yielded a substantial time decrease, with a maximum time ratio of 1/18 and a percentage difference of 19%, in comparison with the conventional finite element method. Close values were attained at low and very high frequencies; the analysis is supported for two types of materials (steel and plastic). The method maintained its efficiency with a high number of forces, except when all of the nodes are under loads.
Computation, Vol. 5, Pages 1: Application of the Recursive Finite Element Approach on 2D Periodic Structures under Harmonic Vibrations
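
The elimination of internal degrees of freedom can be sketched as a Schur-complement condensation of the dynamic stiffness matrix D = K − ω²M: partition D into boundary (b) and internal (i) blocks and keep only the boundary block. This is an assumed, textbook formulation of the idea (not the authors' code), with a toy 4-DOF cell:

```python
import numpy as np

def condense(D, boundary, internal):
    """Schur-complement condensation: eliminate internal DOFs so that
    D_cond = D_bb - D_bi @ inv(D_ii) @ D_ib, keeping the matrix size equal
    to the number of boundary DOFs while cells are chained recursively."""
    Dbb = D[np.ix_(boundary, boundary)]
    Dbi = D[np.ix_(boundary, internal)]
    Dib = D[np.ix_(internal, boundary)]
    Dii = D[np.ix_(internal, internal)]
    return Dbb - Dbi @ np.linalg.solve(Dii, Dib)

# Toy cell (illustrative values): DOFs 0,1 are boundary, DOFs 2,3 internal.
K = np.array([[ 2., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  2.]])
M = np.eye(4)
omega = 0.5
D = K - omega**2 * M          # dynamic stiffness at frequency omega
Dc = condense(D, [0, 1], [2, 3])
print(Dc.shape)               # same size as the boundary DOF set
```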

]]>Application of the Recursive Finite Element Approach on 2D Periodic Structures under Harmonic VibrationsReem YassineFaten SalmanAli Al ShaerMohammad HammoudDenis Duhameldoi: 10.3390/computation5010001Computation2016-12-22Computation2016-12-2251Article110.3390/computation5010001http://www.mdpi.com/2079-3197/5/1/1Computation, Vol. 4, Pages 46: Effect of Pore Structure on Soot Deposition in Diesel Particulate Filterhttp://www.mdpi.com/2079-3197/4/4/46
In the after-treatment of diesel exhaust gas, a diesel particulate filter (DPF) is used to trap nano-particles of diesel soot. However, as particles accumulate inside the filter, the pressure drop, which corresponds to the filter backpressure, increases; this worsens the fuel consumption rate and reduces the available torque. Thus, a filter with lower backpressure is needed. To achieve this, it is necessary to utilize information on the phenomena of both soot transport and soot removal inside the DPF, and to optimize the filter substrate structure. In this paper, to obtain useful information for optimization of the filter structure, we tested seven filters whose porosities and pore sizes were varied systematically. To consider the soot filtration, the particle-laden flow was simulated by a lattice Boltzmann method (LBM). The flow field and the pressure change during the filtration process are then discussed.
Computation, Vol. 4, Pages 46: Effect of Pore Structure on Soot Deposition in Diesel Particulate Filter
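
For readers unfamiliar with the LBM, a generic single-phase D2Q9 BGK time step (a textbook sketch, not the paper's particle-laden filtration solver) consists of a collision toward the local equilibrium followed by streaming along the nine lattice directions:

```python
import numpy as np

w = np.array([4/9] + [1/9]*4 + [1/36]*4)            # D2Q9 lattice weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])  # lattice velocities

def equilibrium(rho, u):
    """Second-order Maxwell-Boltzmann equilibrium on the D2Q9 lattice."""
    cu = np.einsum('qd,xyd->qxy', c, u)
    usq = np.einsum('xyd,xyd->xy', u, u)
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f, tau=0.8):
    """One BGK collision + periodic streaming step for populations f[q,x,y]."""
    rho = f.sum(axis=0)
    u = np.einsum('qd,qxy->xyd', c, f) / rho[..., None]
    f = f - (f - equilibrium(rho, u)) / tau          # BGK collision
    for q in range(9):                               # streaming (periodic)
        f[q] = np.roll(f[q], shift=c[q], axis=(0, 1))
    return f

nx = ny = 8
f = equilibrium(np.ones((nx, ny)), np.zeros((nx, ny, 2)))
f = lbm_step(f)
print(f.sum())  # total mass is conserved by collision and streaming
```

A filtration simulation would add solid/porous boundary conditions and a particle deposition model on top of this core loop.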

]]>Effect of Pore Structure on Soot Deposition in Diesel Particulate FilterKazuhiro YamamotoTatsuya Sakaidoi: 10.3390/computation4040046Computation2016-12-02Computation2016-12-0244Article4610.3390/computation4040046http://www.mdpi.com/2079-3197/4/4/46Computation, Vol. 4, Pages 45: Special Issue “50th Anniversary of the Kohn–Sham Theory—Advances in Density Functional Theory”http://www.mdpi.com/2079-3197/4/4/45
The properties of many materials at the atomic scale depend on the electronic structure, which requires a quantum mechanical treatment. The most widely used approach to make such a treatment feasible is density functional theory (DFT), the advances in which were presented and discussed during the DFT conference in Debrecen. Some of these issues are presented in this Special Issue.
Computation, Vol. 4, Pages 45: Special Issue “50th Anniversary of the Kohn–Sham Theory—Advances in Density Functional Theory”

]]>Special Issue “50th Anniversary of the Kohn–Sham Theory—Advances in Density Functional Theory”Ágnes NagyKarlheinz Schwarzdoi: 10.3390/computation4040045Computation2016-11-22Computation2016-11-2244Editorial4510.3390/computation4040045http://www.mdpi.com/2079-3197/4/4/45Computation, Vol. 4, Pages 43: A Theoretical Study of One- and Two-Photon Activity of D-Luciferinhttp://www.mdpi.com/2079-3197/4/4/43
In the present work, we have theoretically studied the one- and two-photon absorption (OPA and TPA) probabilities of the native D-luciferin molecule and attempted to find the origin of its larger TPA cross-sections in polar solvents than in non-polar ones. The calculations, using state-of-the-art linear and quadratic response theory in the framework of time-dependent density functional theory with the hybrid B3LYP functional and the cc-pVDZ basis set, suggest that the two-photon transition probability of this molecule increases with increasing solvent polarity. To explicate our findings, we employed the generalized few-state model and inspected the role of different optical channels related to the TPA process. We found that the two-photon transition probability is always guided by a destructive interference term, the magnitude of which decreases with increasing solvent polarity. Furthermore, we evaluated the OPA parameters of D-luciferin and noticed that the excitation energy is in very good agreement with the available experimental results.
Computation, Vol. 4, Pages 43: A Theoretical Study of One- and Two-Photon Activity of D-Luciferin

]]>A Theoretical Study of One- and Two-Photon Activity of D-LuciferinMausumi ChattopadhyayaMd. Alamdoi: 10.3390/computation4040043Computation2016-11-17Computation2016-11-1744Article4310.3390/computation4040043http://www.mdpi.com/2079-3197/4/4/43Computation, Vol. 4, Pages 44: Mathematical Model of a Lithium-Bromide/Water Absorption Refrigeration System Equipped with an Adiabatic Absorberhttp://www.mdpi.com/2079-3197/4/4/44
The objective of this paper is to develop a mathematical model for thermodynamic analysis of an absorption refrigeration system equipped with an adiabatic absorber using a lithium-bromide/water (LiBr/water) pair as the working fluid. The working temperatures of the generator, adiabatic absorber, condenser, and evaporator, the cooling capacity of the system, and the ratio of the solution mass flow rate at the circulation pump to that at the solution pump are used as input data. The model evaluates the thermodynamic properties of all state points, the heat transfer in each component, the various mass flow rates, and the coefficient of performance (COP) of the cycle. The results are used to investigate the effect of key parameters on the overall performance of the system. For instance, increasing the generator temperature and decreasing the adiabatic absorber temperature can increase the COP of the cycle. The results of this mathematical model can be used for designing and sizing new LiBr/water absorption refrigeration systems equipped with an adiabatic absorber, or for optimizing such existing systems.
Computation, Vol. 4, Pages 44: Mathematical Model of a Lithium-Bromide/Water Absorption Refrigeration System Equipped with an Adiabatic Absorber
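
As a quick illustration of the COP figure of merit for an absorption cycle, the standard energy-balance definition is COP = Q_evaporator / (Q_generator + W_pump), where the pump work is usually small compared with the generator heat input. The numbers below are illustrative, not the paper's results:

```python
# Standard absorption-cycle coefficient of performance (energy balance):
#   COP = Q_evap / (Q_gen + W_pump)
# Pump work is often negligible relative to the generator heat input.
def cop(q_evap_kw, q_gen_kw, w_pump_kw=0.0):
    """Return the coefficient of performance from component energy rates (kW)."""
    return q_evap_kw / (q_gen_kw + w_pump_kw)

# Hypothetical duty values for a small LiBr/water chiller.
print(round(cop(10.0, 13.0, 0.1), 3))
```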

]]>Mathematical Model of a Lithium-Bromide/Water Absorption Refrigeration System Equipped with an Adiabatic AbsorberSalem Osta-OmarChristopher Micallefdoi: 10.3390/computation4040044Computation2016-11-17Computation2016-11-1744Article4410.3390/computation4040044http://www.mdpi.com/2079-3197/4/4/44Computation, Vol. 4, Pages 42: A Mathematical Spline-Based Model of Cardiac Left Ventricle Anatomy and Morphologyhttp://www.mdpi.com/2079-3197/4/4/42
Computer simulation of normal and diseased human heart activity requires a 3D anatomical model of the myocardium, including myofibers. For clinical applications, such a model has to be constructed from routine methods of cardiac visualization, such as sonography. Symmetrical models are shown to be too rigid, so an analytical non-symmetrical model with sufficient flexibility is necessary. Based on previously developed anatomical models of the left ventricle, we propose a new, much more flexible spline-based analytical model. The model is fully described and verified against DT-MRI data. We show a way to construct it on the basis of sonography data. To use this model in further physiological simulations, we propose a finite-difference numerical method for solving the reaction–diffusion problem, together with an example simulation of scroll wave dynamics.
Computation, Vol. 4, Pages 42: A Mathematical Spline-Based Model of Cardiac Left Ventricle Anatomy and Morphology
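
The reaction–diffusion building block can be illustrated with a generic explicit finite-difference step for a 1D equation u_t = D u_xx + f(u). This is a textbook sketch under assumed parameters, not the paper's 3D ventricular solver; the cubic reaction term is a common excitable-media choice:

```python
import numpy as np

def step(u, D=1.0, dx=0.1, dt=0.001, f=lambda u: u * (1 - u) * (u - 0.1)):
    """One explicit Euler step of u_t = D*u_xx + f(u) with no-flux boundaries.
    Stable here since dt*D/dx**2 = 0.1 < 0.5."""
    up = np.pad(u, 1, mode='edge')                 # no-flux (Neumann) boundaries
    lap = (up[:-2] - 2 * u + up[2:]) / dx**2       # second spatial derivative
    return u + dt * (D * lap + f(u))

u = np.zeros(50)
u[:5] = 1.0                                        # excite one end of the fiber
for _ in range(200):
    u = step(u)
print(u.max())                                     # solution stays bounded
```

In the paper's setting the same idea is applied on the curvilinear 3D ventricle geometry, where scroll waves can form.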

]]>A Mathematical Spline-Based Model of Cardiac Left Ventricle Anatomy and MorphologySergei Pravdindoi: 10.3390/computation4040042Computation2016-10-27Computation2016-10-2744Article4210.3390/computation4040042http://www.mdpi.com/2079-3197/4/4/42Computation, Vol. 4, Pages 41: Evaluation of External Memory Access Performance on a High-End FPGA Hybrid Computerhttp://www.mdpi.com/2079-3197/4/4/41
The motivation of this research was to evaluate the main memory performance of a hybrid supercomputer such as the Convey HC-x, and to ascertain how the memory controller performs in several access scenarios, vis-à-vis hand-coded memory prefetches. Such memory patterns are very useful in stencil computations. The theoretical memory bandwidth of the Convey is compared with the results of our measurements. An accurate study of the memory subsystem is particularly useful for users developing their application-specific personality. Experiments were performed to measure the bandwidth between the coprocessor and the memory subsystem, aimed mainly at measuring the read access speed of the memory from the Application Engines (FPGAs). Different ways of accessing data were used in order to find the most efficient way to access memory, which is proposed for future work on the Convey HC-x. When performing a series of accesses to memory, non-uniform latencies occur, which the memory controller of the Convey HC-x coprocessor attempts to hide. We measure memory efficiency as the ratio of the number of memory accesses to the number of execution cycles; this ratio converges to one in most cases. In addition, we performed experiments with hand-coded memory accesses. The analysis of the experimental results shows how the memory subsystem and memory controllers work. From this work we conclude that the memory controllers do an excellent job, largely because (transparently to the user) they appear to cache large amounts of data, and hence hand-coding is not needed in most situations.
Computation, Vol. 4, Pages 41: Evaluation of External Memory Access Performance on a High-End FPGA Hybrid Computer

]]>Evaluation of External Memory Access Performance on a High-End FPGA Hybrid ComputerKonstantinos KalaitzisEvripidis SotiriadisIoannis PapaefstathiouApostolos Dollasdoi: 10.3390/computation4040041Computation2016-10-25Computation2016-10-2544Article4110.3390/computation4040041http://www.mdpi.com/2079-3197/4/4/41Computation, Vol. 4, Pages 39: A Multi-Compartment Hybrid Computational Model Predicts Key Roles for Dendritic Cells in Tuberculosis Infectionhttp://www.mdpi.com/2079-3197/4/4/39
Tuberculosis (TB) is a world-wide health problem, with approximately 2 billion people infected with Mycobacterium tuberculosis (Mtb, the causative bacterium of TB). The pathologic hallmark of Mtb infection in humans and non-human primates (NHPs) is the formation of spherical structures, primarily in the lungs, called granulomas. Infection occurs after inhalation of bacteria into the lungs, where resident antigen-presenting cells (APCs) take up bacteria and initiate the immune response to Mtb infection. APCs traffic from the site of infection (lung) to lung-draining lymph nodes (LNs), where they prime T cells to recognize Mtb. These T cells, circulating back through the blood, migrate to the lungs to perform their immune effector functions. We previously developed a hybrid agent-based model (ABM, labeled GranSim) describing in silico immune cell, bacterial (Mtb), and molecular behaviors during tuberculosis infection, and recently linked that model to operate across three physiological compartments: lung (infection site where granulomas form), lung-draining lymph node (LN, site of generation of adaptive immunity), and blood (a measurable compartment). Granuloma formation and function is captured by a spatio-temporal model (i.e., the ABM), while the LN and blood compartments represent temporal dynamics of the whole body in response to infection and are captured with ordinary differential equations (ODEs). In order to represent APC trafficking from the lung to the lymph node more mechanistically, and to better capture antigen presentation in a draining LN, the current study incorporates the role of dendritic cells (DCs) into GranSim computationally. Results: The model was calibrated using experimental data from the lungs and blood of NHPs. The addition of DCs allowed us to investigate in greater detail the mechanisms of recruitment, trafficking, and antigen presentation, and their role in tuberculosis infection.
Conclusion: The main conclusion of this study is that early events after Mtb infection are critical to establishing a timely and effective response. Manipulating CD8+ and CD4+ T cell proliferation rates, as well as DC migration, early on during infection can determine the difference between bacterial clearance and uncontrolled bacterial growth and dissemination.
Computation, Vol. 4, Pages 39: A Multi-Compartment Hybrid Computational Model Predicts Key Roles for Dendritic Cells in Tuberculosis Infection

Walter Kohn (Figure 1) is one of the most cited scientists of our time, who died on 19 April 2016 in Santa Barbara, CA, USA. [...]

]]>Obituary for Walter Kohn (1923–2016)Karlheinz SchwarzLu ShamAnn MattssonMatthias Schefflerdoi: 10.3390/computation4040040Computation2016-10-20Computation2016-10-2044Editorial4010.3390/computation4040040http://www.mdpi.com/2079-3197/4/4/40Computation, Vol. 4, Pages 38: Steady-State Anderson Accelerated Coupling of Lattice Boltzmann and Navier–Stokes Solvershttp://www.mdpi.com/2079-3197/4/4/38
We present an Anderson acceleration-based approach to spatially couple three-dimensional Lattice Boltzmann and Navier–Stokes (LBNS) flow simulations. This allows us to locally exploit the computational features of both fluid flow solvers to the fullest extent and yields enhanced control to match the LB and NS degrees of freedom within the LBNS overlap layer. Designed for parallel Schwarz coupling, the Anderson acceleration allows for the simultaneous execution of both the Lattice Boltzmann and the Navier–Stokes solver. We detail our coupling methodology, validate it, and study the convergence and accuracy of the Anderson accelerated coupling, considering three steady-state scenarios: plane channel flow, flow around a sphere, and channel flow across a porous structure. We find that the Anderson accelerated coupling yields a speed-up (in terms of iteration steps) of up to 40% in the considered scenarios, compared to strictly sequential Schwarz coupling.
Computation, Vol. 4, Pages 38: Steady-State Anderson Accelerated Coupling of Lattice Boltzmann and Navier–Stokes Solvers
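
For context, plain Anderson acceleration speeds up a fixed-point iteration x = g(x) by combining the last few iterates. The sketch below is the textbook scheme (with a simple contractive map standing in for the coupled flow solvers, which are far more involved):

```python
import numpy as np

def anderson(g, x0, m=3, tol=1e-10, maxit=100):
    """Anderson acceleration with memory m for the fixed point of g.
    Each step minimizes ||F @ gamma|| over combination weights gamma with
    sum(gamma) == 1, where F holds recent residuals g(x) - x, and takes the
    same combination of the g(x) history as the next iterate."""
    x = x0.copy()
    X, G = [], []                       # histories of iterates and g-values
    for k in range(maxit):
        gx = g(x)
        X.append(x.copy()); G.append(gx.copy())
        if np.linalg.norm(gx - x) < tol:
            return gx, k
        mk = min(m, len(X) - 1)
        if mk == 0:                     # first step: plain fixed-point update
            x = gx
            continue
        F = np.column_stack([G[-j-1] - X[-j-1] for j in range(mk + 1)])
        # Eliminate the sum-to-one constraint: gamma = [alpha, 1 - sum(alpha)].
        dF = F[:, :-1] - F[:, -1:]
        alpha, *_ = np.linalg.lstsq(dF, -F[:, -1], rcond=None)
        gamma = np.append(alpha, 1 - alpha.sum())
        Gmat = np.column_stack([G[-j-1] for j in range(mk + 1)])
        x = Gmat @ gamma
    return x, maxit

# Example: accelerate the classic contraction x = cos(x), componentwise.
x, iters = anderson(np.cos, np.ones(4))
print(iters, np.linalg.norm(x - np.cos(x)))
```

Plain iteration of x = cos(x) needs roughly 50–60 steps to reach this tolerance; the accelerated version converges in far fewer, mirroring the iteration-count savings reported for the Schwarz coupling.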

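The Anderson acceleration underlying the coupling can be illustrated on a toy fixed-point problem. The sketch below implements the depth-one variant for x = g(x); the paper couples full LB and NS solver states, not a scalar map, so the test function and tolerances here are purely illustrative:

```python
import math

def anderson_fixed_point(g, x0, tol=1e-10, max_iter=100):
    """Depth-1 Anderson acceleration for the fixed-point problem x = g(x).

    x0 and g(x) are lists of floats. The new iterate is the combination of
    the two most recent g-evaluations whose residual norm is minimal."""
    def sub(a, b): return [ai - bi for ai, bi in zip(a, b)]
    def dot(a, b): return sum(ai * bi for ai, bi in zip(a, b))

    x_prev, gx_prev = x0, g(x0)
    f_prev = sub(gx_prev, x_prev)       # residual f = g(x) - x
    x = gx_prev                         # first step: plain fixed-point update
    for it in range(max_iter):
        gx = g(x)
        f = sub(gx, x)
        if math.sqrt(dot(f, f)) < tol:
            return x, it
        df = sub(f, f_prev)
        denom = dot(df, df)
        # Mixing weight minimizing ||(1 - a) f + a f_prev||^2.
        alpha = dot(f, df) / denom if denom > 0.0 else 0.0
        x_next = [(1.0 - alpha) * gi + alpha * gpi
                  for gi, gpi in zip(gx, gx_prev)]
        x_prev, gx_prev, f_prev = x, gx, f
        x = x_next
    return x, max_iter

# Toy example: the fixed point of cos(x), approx. 0.739085.
root, iters = anderson_fixed_point(lambda v: [math.cos(v[0])], [1.0])
```

For a scalar map this reduces to the secant method on g(x) − x, which is why it converges in far fewer steps than plain fixed-point iteration.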

Steady-State Anderson Accelerated Coupling of Lattice Boltzmann and Navier–Stokes Solvers. Atanas Atanasov, Benjamin Uekermann, Carlos Pachajoa Mejía, Hans-Joachim Bungartz and Philipp Neumann. Computation 2016, 4(4), 38 (Article); doi: 10.3390/computation4040038. Published 2016-10-17. http://www.mdpi.com/2079-3197/4/4/38

Computation, Vol. 4, Pages 37: Computational Streetscapes
http://www.mdpi.com/2079-3197/4/3/37
Streetscapes have long been of interest in many fields. Recently, there has been a resurgence of attention to streetscape issues, catalyzed in large part by computing. Because of computing, there are more vantages on, data about, and analyses of streetscape phenomena than ever before. This diversity of lenses trained on streetscapes permits us to address long-standing questions, such as how people use information while mobile, how interactions with people and things occur on streets, how we might safeguard crowds, how we can design services to assist pedestrians, and how we could better support special populations as they traverse cities. Amid each of these avenues of inquiry, computing is facilitating new ways of posing these questions, particularly by expanding the scope of what-if exploration that is possible. With assistance from computing, consideration of streetscapes now reaches across scales, from the neurological interactions that form among place cells in the brain up to informatics that afford real-time views of activity over whole urban spaces. For some streetscape phenomena, computing allows us to build realistic but synthetic facsimiles in computation, which can function as artificial laboratories for testing ideas. In this paper, I review the domain science for studying streetscapes from vantages in physics, urban studies, animation and the visual arts, psychology, biology, and behavioral geography. I also review the computational developments shaping streetscape science, with particular emphasis on modeling and simulation as informed by data acquisition and generation, data models, path-planning heuristics, artificial intelligence for navigation and way-finding, timing, synthetic vision, steering routines, kinematics, and geometrical treatment of collision detection and avoidance. Finally, I discuss the implications that these advances in computing streetscapes might have for emerging developments in cyber-physical systems and for new work in urban computing and mobile computing.

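As a small illustration of the steering routines and collision-avoidance treatment the review surveys, the following sketch moves point agents toward a goal while repelling close neighbours; all gains, radii, and positions are invented for illustration, and real pedestrian models add velocity state, vision cones, and richer geometry:

```python
import math

def steer(agents, goal, dt=0.1, sep_radius=1.0, sep_gain=2.0, goal_gain=1.0):
    """One step of goal-seeking plus separation steering for point agents.

    agents: list of [x, y] positions. Goal attraction is a unit vector toward
    the goal; separation repels neighbours closer than sep_radius."""
    new_positions = []
    for i, (x, y) in enumerate(agents):
        gx, gy = goal[0] - x, goal[1] - y
        norm = math.hypot(gx, gy) or 1.0
        vx, vy = goal_gain * gx / norm, goal_gain * gy / norm
        for j, (ox, oy) in enumerate(agents):
            if i == j:
                continue
            dx, dy = x - ox, y - oy
            d = math.hypot(dx, dy)
            if 0.0 < d < sep_radius:          # collision avoidance
                vx += sep_gain * dx / (d * d)
                vy += sep_gain * dy / (d * d)
        new_positions.append([x + dt * vx, y + dt * vy])
    return new_positions

# Two agents start uncomfortably close; a third is off on its own.
agents = [[0.0, 0.0], [0.2, 0.0], [5.0, 5.0]]
for _ in range(100):
    agents = steer(agents, goal=(10.0, 10.0))
```

All agents drift toward the goal while the close pair is pushed apart, which is the essence of the separation/seek decomposition used in steering-based crowd models.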

Computational Streetscapes. Paul Torrens. Computation 2016, 4(3), 37 (Review); doi: 10.3390/computation4030037. Published 2016-09-20. http://www.mdpi.com/2079-3197/4/3/37

Computation, Vol. 4, Pages 36: An Extremely Efficient Boundary Element Method for Wave Interaction with Long Cylindrical Structures Based on Free-Surface Green’s Function
http://www.mdpi.com/2079-3197/4/3/36
The present study aims to develop an efficient numerical method for computing the diffraction and radiation of water waves by long horizontal cylindrical structures, such as floating breakwaters in coastal regions. A higher-order scheme is used to discretize the geometry of the structure as well as the physical wave potentials. As the kernel of this method, Wehausen’s free-surface Green function is calculated by a newly developed Gauss–Kronrod adaptive quadrature algorithm after elimination of its Cauchy-type singularities. To improve computational efficiency, an analytical solution is derived for fast evaluation of the Green function, which needs to be evaluated thousands of times. In addition, the OpenMP parallelization technique is applied to the formation of the influence-coefficient matrix, significantly reducing the CPU time. Computations are performed of wave-exciting forces and hydrodynamic coefficients for long cylindrical structures, either floating or submerged. Comparison with other numerical and analytical methods demonstrates the good performance of the present method.

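The singularity-elimination idea can be sketched generically: subtract the singular part of a Cauchy principal-value integral analytically, and integrate the regular remainder with an adaptive rule. The snippet below uses adaptive Simpson quadrature as a simple stand-in for the paper's Gauss–Kronrod scheme, and the test integrands are illustrative polynomials, not Wehausen's Green function:

```python
import math

def adaptive_simpson(f, a, b, tol=1e-10, depth=50):
    """Adaptive Simpson quadrature with Richardson correction (a simple
    stand-in for an adaptive Gauss-Kronrod scheme)."""
    def simpson(fa, fm, fb, a, b):
        return (b - a) / 6.0 * (fa + 4.0 * fm + fb)
    def rec(a, b, fa, fm, fb, whole, tol, depth):
        m = 0.5 * (a + b)
        lm, rm = 0.5 * (a + m), 0.5 * (m + b)
        flm, frm = f(lm), f(rm)
        left = simpson(fa, flm, fm, a, m)
        right = simpson(fm, frm, fb, m, b)
        if depth <= 0 or abs(left + right - whole) < 15.0 * tol:
            return left + right + (left + right - whole) / 15.0
        return (rec(a, m, fa, flm, fm, left, tol / 2.0, depth - 1)
                + rec(m, b, fm, frm, fb, right, tol / 2.0, depth - 1))
    fa, fb, fm = f(a), f(b), f(0.5 * (a + b))
    return rec(a, b, fa, fm, fb, simpson(fa, fm, fb, a, b), tol, depth)

def cauchy_pv(f, a, b, c, tol=1e-10):
    """Principal value of  int_a^b f(x)/(x-c) dx  for a < c < b.

    The Cauchy singularity is removed analytically before quadrature:
    PV int f/(x-c) = int (f(x)-f(c))/(x-c) dx + f(c)*log((b-c)/(c-a))."""
    fc = f(c)
    h = 1e-6
    def regular(x):
        if x == c:                      # the limit is f'(c); use a central difference
            return (f(c + h) - f(c - h)) / (2.0 * h)
        return (f(x) - fc) / (x - c)
    return adaptive_simpson(regular, a, b, tol) + fc * math.log((b - c) / (c - a))

val = cauchy_pv(lambda x: x, 0.0, 1.0, 0.3)         # exact: 1 + 0.3*ln(7/3)
val2 = cauchy_pv(lambda x: x * x, 0.0, 1.0, 0.5)    # exact: 1
```

Once the singular part is subtracted, the remaining integrand is smooth, so any adaptive rule converges rapidly; that is the property the paper's quadrature exploits.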

Computation, Vol. 4, Pages 35: Image Segmentation for Cardiovascular Biomedical Applications at Different Scales
http://www.mdpi.com/2079-3197/4/3/35

In this study, we present several image segmentation techniques for various image scales and modalities. We consider cellular, organ, and whole-organism levels of biological structures in cardiovascular applications. Several automatic segmentation techniques are presented and discussed in this work. The overall pipeline for the reconstruction of biological structures consists of the following steps: image pre-processing, feature detection, initial mask generation, mask processing, and segmentation post-processing. Several examples of image segmentation are presented, including patient-specific abdominal tissue segmentation, vascular network identification, and myocyte lipid droplet micro-structure reconstruction.
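The pipeline steps listed above (pre-processing, initial mask generation, mask processing) can be sketched on a toy image; the mean filter, threshold, and connected-component cleanup below are generic illustrative choices, not the paper's actual algorithms:

```python
def preprocess(img):
    """Image pre-processing: 3x3 mean smoothing with edge clamping."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = sum(vals) / 9.0
    return out

def initial_mask(img, threshold):
    """Initial mask generation by intensity thresholding."""
    return [[1 if v >= threshold else 0 for v in row] for row in img]

def largest_component(mask):
    """Mask processing: keep only the largest 4-connected component."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    best = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                comp, stack = [], [(y, x)]
                seen[y][x] = True
                while stack:                       # flood fill
                    cy, cx = stack.pop()
                    comp.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    out = [[0] * w for _ in range(h)]
    for y, x in best:
        out[y][x] = 1
    return out

# A bright 3x3 "tissue" region plus a single-pixel artifact on a dark background.
image = [[0.0] * 8 for _ in range(8)]
for y in range(2, 5):
    for x in range(2, 5):
        image[y][x] = 1.0
image[6][6] = 1.0
segmented = largest_component(initial_mask(preprocess(image), 0.3))
```

Smoothing suppresses the one-pixel artifact before thresholding, and the component filter would remove any that survived; the real pipeline applies the same divide-and-clean structure with far more sophisticated operators.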

Image Segmentation for Cardiovascular Biomedical Applications at Different Scales. Alexander Danilov, Roman Pryamonosov and Alexandra Yurova. Computation 2016, 4(3), 35 (Article); doi: 10.3390/computation4030035. Published 2016-09-15. http://www.mdpi.com/2079-3197/4/3/35

Computation, Vol. 4, Pages 34: Towards TDDFT for Strongly Correlated Materials
http://www.mdpi.com/2079-3197/4/3/34
We present some details of our recently-proposed Time-Dependent Density-Functional Theory (TDDFT) for strongly-correlated materials in which the exchange-correlation (XC) kernel is derived from the charge susceptibility obtained using Dynamical Mean-Field Theory (the TDDFT + DMFT approach). We proceed with deriving the expression for the XC kernel for the one-band Hubbard model by solving DMFT equations via two approaches, the Hirsch–Fye Quantum Monte Carlo (HF-QMC) and an approximate low-cost perturbation theory approach, and demonstrate that the latter gives results that are comparable to the exact HF-QMC solution. Furthermore, through a variety of applications, we propose a simple analytical formula for the XC kernel. Additionally, we use the exact and approximate kernels to examine the nonhomogeneous ultrafast response of two systems: a one-band Hubbard model and a Mott insulator YTiO3. We show that the frequency dependence of the kernel, i.e., memory effects, is important for dynamics at the femtosecond timescale. We also conclude that strong correlations lead to the presence of beats in the time-dependent electric conductivity in YTiO3, a feature that could be tested experimentally and that could help validate the few approximations used in our formulation. We conclude by proposing an algorithm for the generalization of the theory to non-linear response.


Towards TDDFT for Strongly Correlated Materials. Shree Acharya, Volodymyr Turkowski and Talat Rahman. Computation 2016, 4(3), 34 (Article); doi: 10.3390/computation4030034. Published 2016-09-10. http://www.mdpi.com/2079-3197/4/3/34

Computation, Vol. 4, Pages 33: The Influence of One-Electron Self-Interaction on d-Electrons
http://www.mdpi.com/2079-3197/4/3/33
We investigate four diatomic molecules containing transition metals using two variants of hybrid functionals. We compare global hybrid functionals that only partially counteract self-interaction to local hybrid functionals that are designed to be formally free from one-electron self-interaction. As d-orbitals are prone to be particularly strongly influenced by self-interaction errors, one may have expected that self-interaction-free local hybrid functionals lead to a qualitatively different Kohn–Sham density of states than global hybrid functionals. Yet, we find that both types of hybrids lead to a very similar density of states. For both global and local hybrids alike, the intrinsic amount of exact exchange plays the dominant role in counteracting electronic self-interaction, whereas being formally free from one-electron self-interaction seems to be of lesser importance.


The Influence of One-Electron Self-Interaction on d-Electrons. Tobias Schmidt and Stephan Kümmel. Computation 2016, 4(3), 33 (Article); doi: 10.3390/computation4030033. Published 2016-09-06. http://www.mdpi.com/2079-3197/4/3/33

Computation, Vol. 4, Pages 32: Calculation of the Acoustic Spectrum of a Cylindrical Vortex in Viscous Heat-Conducting Gas Based on the Navier–Stokes Equations
http://www.mdpi.com/2079-3197/4/3/32
An extremely interesting problem in aero- and hydrodynamics is the sound radiation of a single vortical structure. Currently, this type of problem is mainly considered for an incompressible medium. In this paper, a method is developed to take into account the viscosity and thermal conductivity of the gas. The acoustic radiation frequency of a cylindrical vortex on a flat wall in a viscous, heat-conducting gas (air) has been investigated. The problem is solved on the basis of the Navier–Stokes equations using the small-initial-vorticity approach: the unknown functions are expanded in a power series in a small parameter (the vorticity). It is shown that there are high-frequency oscillations modulated by a low-frequency signal. The value of the high frequency remains constant for a long period of time; thus, the high frequency can be considered a natural frequency of the vortex radiation. The natural frequency depends only on the initial radius of the cylindrical vortex and does not depend on the intensity of the initial vorticity. As expected from physical considerations, the natural frequency decreases exponentially as the initial radius of the cylinder increases. Furthermore, the natural frequency differs from that of the oscillations inside the initial cylinder and in the outer domain. The results may be of interest for aeroacoustics and tornado modeling.


Calculation of the Acoustic Spectrum of a Cylindrical Vortex in Viscous Heat-Conducting Gas Based on the Navier–Stokes Equations. Tatiana Petrova and Fedor Shugaev. Computation 2016, 4(3), 32 (Article); doi: 10.3390/computation4030032. Published 2016-08-20. http://www.mdpi.com/2079-3197/4/3/32

Computation, Vol. 4, Pages 31: Computational Analysis of Natural Ventilation Flows in Geodesic Dome Building in Hot Climates
http://www.mdpi.com/2079-3197/4/3/31
For centuries, dome roofs have been used in traditional houses in hot regions such as the Middle East and the Mediterranean basin, due to their thermal advantages, structural benefits, and the availability of construction materials. This article presents the computational modelling of wind- and buoyancy-induced ventilation in a geodesic dome building in a hot climate. The airflow and temperature distributions and ventilation flow rates were predicted using Computational Fluid Dynamics (CFD). The three-dimensional Reynolds-Averaged Navier–Stokes (RANS) equations were solved using the CFD tool ANSYS FLUENT 15, with the standard k-epsilon turbulence model. The modelling was verified using grid sensitivity and flux balance analyses. To validate the modelling method used in the current study, an additional simulation of a similar domed-roof building was conducted for comparison. For wind-induced ventilation, the dome building was modelled with upper roof vents. For buoyancy-induced ventilation, the geometry was modelled with roof vents and also with two windows open at the lower level. The results showed that using the upper roof openings as a natural ventilation strategy during winter periods is advantageous and could reduce the indoor temperature and also introduce fresh air. The results also revealed that natural ventilation using roof vents alone cannot satisfy thermal requirements during hot summer periods, and complementary cooling solutions should be considered. The analysis showed that the buoyancy-induced ventilation model can still generate air movement inside the building during periods with no or very low wind.

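For orientation on the buoyancy-induced mechanism studied above, stack-driven flow through an opening is often estimated with the single-zone textbook formula Q = Cd·A·sqrt(2·g·H·|Ti − To|/T_avg). The sketch below applies it with invented numbers; it is a rough hand estimate of the driving effect, not the paper's CFD model:

```python
import math

def stack_flow_rate(area_m2, height_m, t_in_c, t_out_c, cd=0.6):
    """Buoyancy-driven ("stack") volumetric flow through an opening [m^3/s]:

        Q = Cd * A * sqrt(2 * g * H * |Ti - To| / T_avg)

    A single-zone textbook estimate; Cd is a typical discharge coefficient."""
    g = 9.81
    t_avg = 0.5 * (t_in_c + t_out_c) + 273.15   # mean absolute temperature [K]
    dt = abs(t_in_c - t_out_c)
    return cd * area_m2 * math.sqrt(2.0 * g * height_m * dt / t_avg)

# Illustrative numbers: 0.5 m^2 vent, 4 m stack height, 32 C inside / 26 C outside.
q = stack_flow_rate(area_m2=0.5, height_m=4.0, t_in_c=32.0, t_out_c=26.0)
```

The formula makes the paper's qualitative finding plausible: the flow scales with the square root of the indoor-outdoor temperature difference, so some air movement persists even with no wind, but it vanishes as the temperatures equalize in hot periods.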

Computational Analysis of Natural Ventilation Flows in Geodesic Dome Building in Hot Climates. Zohreh Soleimani, John Calautit and Ben Hughes. Computation 2016, 4(3), 31 (Article); doi: 10.3390/computation4030031. Published 2016-08-17. http://www.mdpi.com/2079-3197/4/3/31

Computation, Vol. 4, Pages 30: Electron Correlations in Local Effective Potential Theory
http://www.mdpi.com/2079-3197/4/3/30
Local effective potential theory, both stationary-state and time-dependent, constitutes the mapping from a system of electrons in an external field to one of the noninteracting fermions possessing the same basic variable such as the density, thereby enabling the determination of the energy and other properties of the electronic system. This paper is a description via Quantal Density Functional Theory (QDFT) of the electron correlations that must be accounted for in such a mapping. It is proved through QDFT that independent of the form of external field, (a) it is possible to map to a model system possessing all the basic variables; and that (b) with the requirement that the model fermions are subject to the same external fields, the only correlations that must be considered are those due to the Pauli exclusion principle, Coulomb repulsion, and Correlation–Kinetic effects. The cases of both a static and time-dependent electromagnetic field, for which the basic variables are the density and physical current density, are considered. The examples of solely an external electrostatic or time-dependent electric field constitute special cases. An efficacious unification in terms of electron correlations, independent of the type of external field, is thereby achieved. The mapping is explicated for the example of a quantum dot in a magnetostatic field, and for a quantum dot in a magnetostatic and time-dependent electric field.


Electron Correlations in Local Effective Potential Theory. Viraht Sahni, Xiao-Yin Pan and Tao Yang. Computation 2016, 4(3), 30 (Article); doi: 10.3390/computation4030030. Published 2016-08-16. http://www.mdpi.com/2079-3197/4/3/30

Computation, Vol. 4, Pages 29: DiamondTorre Algorithm for High-Performance Wave Modeling
http://www.mdpi.com/2079-3197/4/3/29
Effective algorithms for the numerical modeling of physical media are discussed. The computation rate of such problems is limited by memory bandwidth if implemented with traditional algorithms. The numerical solution of the wave equation is considered. A finite difference scheme with a cross stencil and a high order of approximation is used. The DiamondTorre algorithm is constructed with regard to the specifics of the GPGPU (general-purpose graphics processing unit) memory hierarchy and parallelism. The advantages of this algorithm are a high level of data localization and the property of asynchrony, which allows one to effectively utilize all levels of GPGPU parallelism. The computational intensity of the algorithm is greater than that of the best traditional algorithms with stepwise synchronization. As a consequence, it becomes possible to overcome the above-mentioned limitation. The algorithm is implemented with CUDA. For the scheme with a second order of approximation, a calculation performance of 50 billion cells per second is achieved, exceeding the result of the best traditional algorithm by a factor of five.

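The cross-stencil update that DiamondTorre reorders for GPU locality can be written down directly. The sketch below is a plain, unoptimized 1D leapfrog scheme (second order, fixed ends, zero initial velocity), run at Courant number 1 so a standing wave returns to its initial state after one period; the paper's contribution is precisely the traversal order and GPU mapping, which this sketch omits:

```python
import math

def simulate_wave(u0, steps, courant=1.0):
    """Leapfrog time stepping with the classic 3-point cross stencil:

        u_next[i] = 2*u[i] - u_prev[i] + C^2 * (u[i+1] - 2*u[i] + u[i-1])

    Fixed (u = 0) boundaries; zero initial velocity."""
    n = len(u0)
    c2 = courant * courant
    u_prev = u0[:]
    # First step from a Taylor expansion (zero initial velocity).
    u = [0.0] * n
    for i in range(1, n - 1):
        u[i] = u0[i] + 0.5 * c2 * (u0[i + 1] - 2.0 * u0[i] + u0[i - 1])
    for _ in range(steps - 1):
        u_next = [0.0] * n
        for i in range(1, n - 1):
            u_next[i] = (2.0 * u[i] - u_prev[i]
                         + c2 * (u[i + 1] - 2.0 * u[i] + u[i - 1]))
        u_prev, u = u, u_next
    return u

N = 32
u0 = [math.sin(math.pi * i / N) for i in range(N + 1)]
u_final = simulate_wave(u0, steps=2 * N)   # one full period at Courant number 1
```

Every interior cell reads three neighbours per step, which is why naive traversals are memory-bandwidth bound: the arithmetic per byte loaded is tiny, and locality-aware reorderings such as DiamondTorre exist to raise it.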

DiamondTorre Algorithm for High-Performance Wave Modeling. Vadim Levchenko, Anastasia Perepelkina and Andrey Zakirov. Computation 2016, 4(3), 29 (Article); doi: 10.3390/computation4030029. Published 2016-08-12. http://www.mdpi.com/2079-3197/4/3/29

Computation, Vol. 4, Pages 28: Highly Excited States from a Time Independent Density Functional Method
http://www.mdpi.com/2079-3197/4/3/28
A constrained optimized effective potential (COEP) methodology, proposed earlier by us for low-lying singly excited states, is extended to highly excited states having the same spatial and spin symmetry. Basic tenets of time-independent density functional theory and its COEP implementation for excited states are briefly reviewed. The amended Kohn–Sham-like equations for excited-state orbitals and their specific features for highly excited states are discussed. The accuracy of the method is demonstrated using exchange-only calculations for highly excited states of the He and Li atoms.


Highly Excited States from a Time Independent Density Functional Method. Vitaly Glushkov and Mel Levy. Computation 2016, 4(3), 28 (Article); doi: 10.3390/computation4030028. Published 2016-08-05. http://www.mdpi.com/2079-3197/4/3/28

Computation, Vol. 4, Pages 27: Automatic Generation of Massively Parallel Codes from ExaSlang
http://www.mdpi.com/2079-3197/4/3/27
Domain-specific languages (DSLs) have the potential to provide domain experts with an intuitive interface for specifying problems and solutions. Based on such specifications, code generation frameworks can produce compilable source code. However, apart from optimizing execution performance, parallelization is key for pushing the limits in problem size and an essential ingredient for exascale performance. We discuss the concepts necessary for introducing such capabilities in code generators. In particular, those for partitioning the problem to be solved and accessing the partitioned data are elaborated. Furthermore, possible approaches to exposing parallelism to users through a given DSL are discussed. Moreover, we present the implementation of these concepts in the ExaStencils framework, in whose scope a code generation framework for highly optimized and massively parallel geometric multigrid solvers is developed. It uses specifications from its multi-layered external DSL ExaSlang as input. Based on a general version for generating parallel code, we develop and implement widely applicable extensions and optimizations. Finally, a performance study of generated applications is conducted on the JuQueen supercomputer.

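The generate-then-compile workflow can be illustrated with a deliberately tiny stencil "DSL": a dict of offsets and coefficients, invented here and unrelated to ExaSlang's actual multi-layered syntax, from which Python source for a smoother sweep is emitted and executed:

```python
def generate_smoother(name, stencil):
    """Emit and compile Python source for a 1D stencil sweep from a tiny spec.

    `stencil` maps relative offsets to coefficients (a toy stand-in for a DSL
    specification). Returns the generated source and the compiled function."""
    terms = " + ".join("%r * u[i + %d]" % (coef, off)
                       for off, coef in sorted(stencil.items()))
    src = (
        "def %s(u):\n"
        "    out = u[:]\n"                      # boundary values pass through
        "    for i in range(1, len(u) - 1):\n"
        "        out[i] = %s\n"
        "    return out\n" % (name, terms)
    )
    namespace = {}
    exec(src, namespace)                        # "compile" the generated kernel
    return src, namespace[name]

# A Jacobi-style three-point averaging sweep, generated from its spec.
src, smooth = generate_smoother("jacobi_sweep", {-1: 0.25, 0: 0.5, 1: 0.25})
result = smooth([0.0, 0.0, 4.0, 0.0, 0.0])
```

A real framework would emit C++/CUDA with partitioning and communication code around the kernel, but the separation of concerns is the same: the spec states *what* the sweep computes, the generator decides *how* it is laid out and executed.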

]]>Automatic Generation of Massively Parallel Codes from ExaSlangSebastian KuckukHarald Köstlerdoi: 10.3390/computation4030027Computation2016-08-04Computation2016-08-0443Article2710.3390/computation4030027http://www.mdpi.com/2079-3197/4/3/27Computation, Vol. 4, Pages 26: Interaction of Hydrogen with Au Modified by Pd and Rh in View of Electrochemical Applicationshttp://www.mdpi.com/2079-3197/4/3/26
Hydrogen interaction with bimetallic Au(Pd) and Au(Rh) systems is studied with a density functional theory (DFT)-based periodic approach. Several bimetallic configurations with varying concentrations of Pd and Rh atoms in the underlayer of the Au(111) surface were considered. The reactivity of the doped Au(111) toward hydrogen adsorption and absorption was related to the property modifications induced by the presence of the metal dopants. DFT-computed quantities, such as the energy stability, the inter-atomic and inter-slab binding energies between gold and dopants, and the charge density, were used to infer the similarities and differences between the Pd and Rh dopants in these model alloys. Hydrogen penetration into the surface is favored in the bimetallic slab configurations. The underlayer dopants affect the reactivity of the surface gold toward hydrogen adsorption in the systems with a dopant underlayer covered by adsorbed hydrogen up to a monolayer. This indicates a possibility to tune the surface properties of bimetallic gold electrodes by modulating the degree of hydrogen coverage of the inner dopant layer(s).

]]>Interaction of Hydrogen with Au Modified by Pd and Rh in View of Electrochemical ApplicationsFernanda JuarezGerman SoldanoElizabeth SantosHazar GuesmiFrederik TielensTzonka Minevadoi: 10.3390/computation4030026Computation2016-07-20Computation2016-07-2043Article2610.3390/computation4030026http://www.mdpi.com/2079-3197/4/3/26Computation, Vol. 4, Pages 25: Predictions of Physicochemical Properties of Ionic Liquids with DFThttp://www.mdpi.com/2079-3197/4/3/25
Nowadays, density functional theory (DFT)-based high-throughput computational approaches are becoming more efficient and, thus, attractive for finding advanced materials for electrochemical applications. In this work, we illustrate how theoretical models, computational methods, and informatics techniques can be put together to form a simple DFT-based high-throughput computational workflow for predicting the physicochemical properties of room-temperature ionic liquids. The developed workflow has been used for screening a set of 48 ionic pairs and for analyzing the gathered data. The predicted relative electrochemical stabilities, ionic charges, and dynamic properties of the investigated ionic liquids are discussed in the light of their potential practical applications.

]]>Predictions of Physicochemical Properties of Ionic Liquids with DFTKarl KaruAnton RuzanovHeigo ErsVladislav IvaništševIsabel Lage-EstebanezJosé García de la Vegadoi: 10.3390/computation4030025Computation2016-07-19Computation2016-07-1943Article2510.3390/computation4030025http://www.mdpi.com/2079-3197/4/3/25Computation, Vol. 4, Pages 23: Orbital Energy-Based Reaction Analysis of SN2 Reactionshttp://www.mdpi.com/2079-3197/4/3/23
An orbital energy-based reaction analysis theory is presented as an extension of the orbital-based conceptual density functional theory. In the orbital energy-based theory, the orbitals contributing to reactions are interpreted to be the valence orbitals giving the largest orbital energy variation from reactants to products. Reactions are taken to be electron-transfer-driven when they provide small variations of the gaps between the contributing occupied and unoccupied orbital energies along the intrinsic reaction coordinates in the initial processes. The orbital energy-based theory is then applied to calculations of several SN2 reactions. Using a reaction path search method, the Cl− + CH3I → ClCH3 + I− reaction, for which an alternative reaction path called the “roundabout path” has been proposed, is found to have a precursor process similar to the roundabout path just before the SN2 reaction process. The orbital energy-based theory indicates that this precursor process is clearly driven by structural change, while the subsequent SN2 reaction proceeds through electron transfer between the contributing orbitals. Comparing the calculated results of the SN2 reactions in the gas phase and in aqueous solution shows that the contributing orbitals depend significantly on solvent effects and that these orbitals can be correctly determined by this theory.

]]>Orbital Energy-Based Reaction Analysis of SN2 ReactionsTakao TsunedaSatoshi MaedaYu HarabuchiRaman Singhdoi: 10.3390/computation4030023Computation2016-07-08Computation2016-07-0843Article2310.3390/computation4030023http://www.mdpi.com/2079-3197/4/3/23Computation, Vol. 4, Pages 24: On the v-Representabilty Problem in Density Functional Theory: Application to Non-Interacting Systemshttp://www.mdpi.com/2079-3197/4/3/24
Based on a computational procedure for determining the functional derivative with respect to the density of any antisymmetric N-particle wave function for a non-interacting system that leads to the density, we devise a test of whether or not a wave function known to lead to a given density corresponds to a solution of a Schrödinger equation for some potential. We examine explicitly the case of non-interacting systems described by Slater determinants. Numerical examples for the cases of a one-dimensional square-well potential with infinite walls and the harmonic oscillator potential illustrate the formalism.

]]>On the v-Representabilty Problem in Density Functional Theory: Application to Non-Interacting SystemsMarkus DäneAntonios Gonisdoi: 10.3390/computation4030024Computation2016-07-05Computation2016-07-0543Article2410.3390/computation4030024http://www.mdpi.com/2079-3197/4/3/24Computation, Vol. 4, Pages 22: Online Adaptive Local-Global Model Reduction for Flows in Heterogeneous Porous Mediahttp://www.mdpi.com/2079-3197/4/2/22
We propose an online adaptive local-global POD-DEIM model reduction method for flows in heterogeneous porous media. The main idea of the proposed method is to use local online indicators to decide on the global update, which is performed via reduced-cost local multiscale basis functions. This local-global online combination allows (1) developing local indicators that are used for both local and global updates, and (2) computing global online modes via local multiscale basis functions. The multiscale basis functions consist of offline and some online local basis functions. The approach used for constructing the global reduced system is based on Proper Orthogonal Decomposition (POD) Galerkin projection. The nonlinearities are approximated by the Discrete Empirical Interpolation Method (DEIM). The online adaptation is performed by incorporating new data that become available at the online stage. Once the criterion for updates is satisfied, we adapt the reduced system online by changing the POD subspace and the DEIM approximation of the nonlinear functions. The main contribution of the paper is that the criterion for adaptation and the construction of the global online modes are based on local error indicators and local multiscale basis functions, which can be computed cheaply. Since the adaptation is performed infrequently, the new methodology does not add significant computational overhead associated with deciding when and how to adapt the reduced basis. Our approach is particularly useful when it is desired to solve the reduced system for inputs or controls that result in a solution outside the span of the snapshots generated in the offline stage. Our method also offers an alternative for constructing a robust reduced system even if a potentially poor initial choice of snapshots is used. Applications to single-phase and two-phase flow problems demonstrate the efficiency of our method.
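To make the offline ingredients concrete, the following sketch (an illustration of standard POD-Galerkin machinery, not the authors' code) builds a POD basis from a snapshot matrix via the SVD and projects a linear operator onto the reduced subspace:

```python
import numpy as np

def pod_basis(snapshots, energy=0.9999):
    """Proper Orthogonal Decomposition of a snapshot matrix (n_dof x n_snap).

    Returns the smallest orthonormal basis capturing the requested fraction
    of the snapshot 'energy' (sum of squared singular values).
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cumulative = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cumulative, energy)) + 1
    return U[:, :r]

def reduce_operator(A, basis):
    """Galerkin projection of a full-order operator onto the POD subspace."""
    return basis.T @ A @ basis
```

In the online-adaptive setting of the paper, such a basis would additionally be enriched with online modes built from local multiscale basis functions whenever the local error indicators trigger an update.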

]]>Online Adaptive Local-Global Model Reduction for Flows in Heterogeneous Porous MediaYalchin EfendievEduardo GildinYanfang Yangdoi: 10.3390/computation4020022Computation2016-06-07Computation2016-06-0742Article2210.3390/computation4020022http://www.mdpi.com/2079-3197/4/2/22Computation, Vol. 4, Pages 21: Pore-Network Modeling of Water and Vapor Transport in the Micro Porous Layer and Gas Diffusion Layer of a Polymer Electrolyte Fuel Cellhttp://www.mdpi.com/2079-3197/4/2/21
On the cathode side of a polymer electrolyte fuel cell (PEFC), a micro-porous layer (MPL) added between the catalyst layer (CL) and the gas diffusion layer (GDL) plays an important role in water management. In this work, water and vapor transport in the MPL and GDL has been investigated using both quasi-static and dynamic pore-network models. We illustrate how the MPL improves water management in the cathode. Furthermore, it was found that dynamic liquid water transport in the GDL is very sensitive to the thermal gradient built up along the through-plane direction. Thus, water vapor condensation may be confined to the GDL-land interfaces by properly adjusting the GDL thermal conductivity. Our numerical results can provide guidelines for optimizing GDL pore structures for good water management.

]]>Pore-Network Modeling of Water and Vapor Transport in the Micro Porous Layer and Gas Diffusion Layer of a Polymer Electrolyte Fuel CellChao-Zhong QinS. HassanizadehLucas Van Oosterhoutdoi: 10.3390/computation4020021Computation2016-05-30Computation2016-05-3042Article2110.3390/computation4020021http://www.mdpi.com/2079-3197/4/2/21Computation, Vol. 4, Pages 20: On the Use of Benchmarks for Multiple Propertieshttp://www.mdpi.com/2079-3197/4/2/20
Benchmark calculations provide a large amount of information that can be very useful in assessing the performance of density functional approximations and in choosing which one to use. In order to condense this information, summary indicators are provided. However, these indicators might be insufficient, and a more careful analysis may be needed, as shown by some examples from an existing data set for cubic crystals.
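The point about condensed indicators can be made concrete with a toy illustration (hypothetical error values, not taken from the cited data set): two approximations can share the same mean absolute error while behaving very differently in the worst case.

```python
import numpy as np

def indicators(errors):
    """Condense a vector of signed errors into common summary indicators."""
    e = np.asarray(errors, dtype=float)
    return {"ME": e.mean(), "MAE": np.abs(e).mean(), "MAX": np.abs(e).max()}

# Hypothetical errors of two approximations on the same four properties:
# A is usually accurate but has one large outlier; B is uniformly mediocre.
A = indicators([0.10, -0.10, 0.05, 2.00])
B = indicators([0.60, -0.50, 0.55, -0.60])
# Both have MAE = 0.5625, yet their maximum absolute errors differ by more
# than a factor of three -- a single indicator cannot distinguish them.
```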

]]>On the Use of Benchmarks for Multiple PropertiesBartolomeo CivalleriRoberto DovesiPascal PernotDavide PrestiAndreas Savindoi: 10.3390/computation4020020Computation2016-04-30Computation2016-04-3042Article2010.3390/computation4020020http://www.mdpi.com/2079-3197/4/2/20Computation, Vol. 4, Pages 19: Kinetic and Exchange Energy Densities near the Nucleushttp://www.mdpi.com/2079-3197/4/2/19
We investigate the behavior of the kinetic and exchange energy densities near the nuclear cusp of atomic systems. Considering hydrogenic orbitals, we derive analytical expressions near the nucleus for single shells, as well as in the semiclassical limit of large non-relativistic neutral atoms. We show that a model based on the helium isoelectronic series is very accurate, as also confirmed by numerical calculations on real atoms with up to two thousand electrons. Based on this model, we propose non-local density-dependent ingredients that are suitable for describing the kinetic and exchange energy densities in the region close to the nucleus. These non-local ingredients are invariant under uniform scaling of the density, and they can be used in the construction of non-local exchange-correlation and kinetic functionals.

]]>Kinetic and Exchange Energy Densities near the NucleusLucian ConstantinEduardo FabianoFabio Della Saladoi: 10.3390/computation4020019Computation2016-04-02Computation2016-04-0242Article1910.3390/computation4020019http://www.mdpi.com/2079-3197/4/2/19Computation, Vol. 4, Pages 18: Grand Canonical Monte Carlo Simulation of Nitrogen Adsorption in a Silica Aerogel Modelhttp://www.mdpi.com/2079-3197/4/2/18
In this paper, the Diffusion-Limited Cluster Aggregation (DLCA) method is employed to reconstruct the three-dimensional network of a silica aerogel. Simulation of nitrogen adsorption at 77 K in the silica aerogel is then conducted with the Grand Canonical Monte Carlo (GCMC) method. To reduce the computational cost while preserving accuracy, a continuous-discrete hybrid potential model, as well as an adsorbed-layer thickness estimation method, is employed. Four different structures are generated to investigate the impacts of specific surface area and porosity on adsorptive capacity. Good agreement with experimental results is found over a wide range of relative pressures, which supports the validity of the model. Specific surface area and porosity mainly affect nitrogen uptake at low pressure and high pressure, respectively.
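For readers unfamiliar with the method, the core of a GCMC simulation is the pair of insertion/deletion moves with their acceptance probabilities. The sketch below is a generic ideal-gas illustration (zero interaction energy, so ΔU = 0), not the hybrid-potential aerogel model of the paper; with activity z = exp(βμ)/Λ³, the sampled average particle number should approach zV.

```python
import math, random

def gcmc_ideal_gas(mu, beta=1.0, volume=100.0, lam=1.0, steps=200000, seed=1):
    """Minimal grand canonical Monte Carlo for an ideal gas (Delta U = 0).

    Each step attempts an insertion or a deletion with equal probability,
    using the standard GCMC acceptance rules.
    """
    rng = random.Random(seed)
    z = math.exp(beta * mu) / lam**3   # activity
    N = 0
    total, samples = 0, 0
    for step in range(steps):
        if rng.random() < 0.5:         # attempt insertion
            if rng.random() < min(1.0, z * volume / (N + 1)):
                N += 1
        elif N > 0:                    # attempt deletion
            if rng.random() < min(1.0, N / (z * volume)):
                N -= 1
        if step > steps // 2:          # sample after equilibration
            total += N
            samples += 1
    return total / samples

# With mu = 0 and lam = 1, z = 1 and the exact result is <N> = z*V = 100.
```

A production code such as the one in the paper replaces the trivial ΔU = 0 by the fluid-solid and fluid-fluid interaction energies of the reconstructed aerogel structure.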

]]>Grand Canonical Monte Carlo Simulation of Nitrogen Adsorption in a Silica Aerogel ModelWen-Li XieZheng-Ji ChenZeng LiWen-Quan Taodoi: 10.3390/computation4020018Computation2016-04-01Computation2016-04-0142Article1810.3390/computation4020018http://www.mdpi.com/2079-3197/4/2/18Computation, Vol. 4, Pages 17: Analytical Results on the Behavior of a Liquid Junction across a Porous Diaphragm or a Charged Porous Membrane between Two Solutions According to the Nernst–Planck Equationhttp://www.mdpi.com/2079-3197/4/2/17
We model the behavior of an ideal liquid junction, across a porous and possibly charged medium between two ion-containing solutions, by means of the Nernst–Planck equation for the stationary state, under conditions of local electroneutrality. An analytical solution of the equation was found long ago by Planck for the uncharged junction with only ions of valences +1 and −1. Other analytical results, which have since been obtained for more general situations, seem impractical for performing calculations. In this paper, we obtain analytical solutions for systems with up to three valence classes, which can be applied in numerical calculations in a straightforward way. Our method provides much more information on the behavior of the system than the well-known Henderson approximation. At the same time, it is simpler, more reliable, and much less demanding in terms of computational effort than the numerical methods commonly employed nowadays, which are typically based on discrete integration and trial-and-error numerical inversions. We present some examples of practical applications of our results. In particular, we study the uphill transport (i.e., transport from the lower-concentration to the higher-concentration region) of a divalent cation in a liquid junction that also contains other univalent anions and cations.
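For context, the Henderson approximation that the authors compare against assumes linear concentration profiles across the junction and yields a closed-form potential. A minimal sketch of the textbook formula (generic, with our own variable names, not the analytical solution of the paper):

```python
import math

R, F = 8.314, 96485.0  # gas constant (J/mol/K), Faraday constant (C/mol)

def henderson_potential(ions, T=298.15):
    """Henderson approximation for the junction potential phi(beta) - phi(alpha).

    `ions` is a list of (z, u, c_alpha, c_beta) tuples: valence, mobility
    (any consistent units), and concentrations on the two sides.
    """
    s1 = sum(abs(z) * u / z * (cb - ca) for z, u, ca, cb in ions)
    s2 = sum(abs(z) * u * (cb - ca) for z, u, ca, cb in ions)
    sa = sum(abs(z) * u * ca for z, u, ca, cb in ions)
    sb = sum(abs(z) * u * cb for z, u, ca, cb in ions)
    return (s1 / s2) * (R * T / F) * math.log(sa / sb)

# Example: HCl junction, 0.1 M | 0.01 M, with limiting mobilities ~36.2 (H+)
# and ~7.9 (Cl-) in matching units: the dilute side comes out positive,
# E ~ +0.038 V, close to the textbook estimate for this junction.
```

For a single binary 1:1 electrolyte this reduces to E = (RT/F)·((u₊−u₋)/(u₊+u₋))·ln(c_α/c_β), the Planck/Lewis–Sargent limiting form mentioned in the abstract.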

]]>Analytical Results on the Behavior of a Liquid Junction across a Porous Diaphragm or a Charged Porous Membrane between Two Solutions According to the Nernst–Planck EquationMassimo MarinoDoriano Brogiolidoi: 10.3390/computation4020017Computation2016-03-30Computation2016-03-3042Article1710.3390/computation4020017http://www.mdpi.com/2079-3197/4/2/17Computation, Vol. 4, Pages 16: Current Issues in Finite-T Density-Functional Theory and Warm-Correlated Matter †http://www.mdpi.com/2079-3197/4/2/16
Finite-temperature density functional theory (DFT) has become of topical interest, partly due to the increasing ability to create novel states of warm-correlated matter (WCM). Warm-dense matter (WDM), ultra-fast matter (UFM), and high-energy-density matter (HEDM) may all be regarded as subclasses of WCM. Strong electron-electron, ion-ion, and electron-ion correlation effects and partial degeneracies are found in these systems, where the electron temperature Te is comparable to the electron Fermi energy EF. Thus, many electrons are in continuum states which are partially occupied. The ion subsystem may be solid, liquid, or plasma, with many states of ionization with ionic charge Zj. Quasi-equilibria with the ion temperature Ti ≠ Te are common. The ion subsystem in WCM can no longer be treated as a passive “external potential”, as is customary in T = 0 DFT dominated by solid-state theory or quantum chemistry. Many basic questions arise in trying to implement DFT for WCM. Hohenberg-Kohn-Mermin theory can be adapted for treating these systems if suitable finite-T exchange-correlation (XC) functionals can be constructed. These are functionals of both the one-body electron density ne and the one-body ion densities ρj, where j counts the species of nuclei or charge states. A method of approximately but accurately mapping the quantum electrons to a classical Coulomb gas enables one to treat electron-ion systems entirely classically at any temperature and arbitrary spin polarization, using exchange-correlation effects calculated in situ, directly from the pair-distribution functions. This eliminates the need for any XC functionals. This classical map has been used to calculate the equation of state of WDM systems and to construct a finite-T XC functional that is found to be in close agreement with recent quantum path-integral simulation data. In this review, current developments and concerns in finite-T DFT, especially in the context of non-relativistic warm-dense matter and ultra-fast matter, are presented.

]]>Current Issues in Finite-T Density-Functional Theory and Warm-Correlated Matter †M. Dharma-wardanadoi: 10.3390/computation4020016Computation2016-03-28Computation2016-03-2842Article1610.3390/computation4020016http://www.mdpi.com/2079-3197/4/2/16Computation, Vol. 4, Pages 15: Bonding Strength Effects in Hydro-Mechanical Coupling Transport in Granular Porous Media by Pore-Scale Modelinghttp://www.mdpi.com/2079-3197/4/1/15
The hydro-mechanical coupled transport process of sand production is numerically investigated, with special attention paid to the bonding effect between sand grains. By coupling the lattice Boltzmann method (LBM) and the discrete element method (DEM), we are able to capture particle movements and fluid flow simultaneously. In order to account for the bonding effects on sand production, a contact bond model is introduced into the LBM-DEM framework. Our simulations first examine the experimental observation that “initial sand production is evoked by localized failure” and then show that the bonding or cement plays an important role in sand production: lower bonding strength leads to more sand production than higher bonding strength. It is also found that the influence of flow rate on sand production depends on the bonding strength of the cemented granular media; for a low-bonding-strength sample, the higher the flow rate, the more severe the erosion in the localized failure zone becomes.
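The fluid side of such a framework is, at its core, a collide-and-stream update of particle distribution functions. Below is a self-contained single-phase D2Q9 BGK sketch (generic LBM on a periodic domain, not the authors' coupled LBM-DEM code with moving grain boundaries):

```python
import numpy as np

# D2Q9 lattice velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, ux, uy):
    """Second-order equilibrium distribution for D2Q9."""
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return rho * w[:, None, None] * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * usq)

def lbm_step(f, tau=0.8):
    """One BGK collide-and-stream step on a fully periodic domain."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f = f + (equilibrium(rho, ux, uy) - f) / tau   # BGK collision
    for i in range(9):                             # streaming
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
    return f
```

In an LBM-DEM coupling, the streaming step would additionally apply moving bounce-back conditions at grain surfaces and return the resulting hydrodynamic forces to the DEM solver.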

]]>Bonding Strength Effects in Hydro-Mechanical Coupling Transport in Granular Porous Media by Pore-Scale ModelingZhiqiang ChenChiyu XieYu ChenMoran Wangdoi: 10.3390/computation4010015Computation2016-03-07Computation2016-03-0741Article1510.3390/computation4010015http://www.mdpi.com/2079-3197/4/1/15Computation, Vol. 4, Pages 14: Influence of the Localization of Ge Atoms within the Si(001)(4 × 2) Surface Layer on Semicore One-Electron Stateshttp://www.mdpi.com/2079-3197/4/1/14
Adsorption complexes of germanium on the reconstructed Si(001)(4 × 2) surface have been simulated by the Si96Ge2H84 cluster. For Ge atoms located on the surface layer, DFT calculations (B3LYP/6-31G**) of their 3d semicore-level energies have shown a clear-cut correlation between the 3d5/2 chemical shifts and the mutual arrangement of Ge atoms. Such a shift is positive when only one Ge atom penetrates into the crystalline substrate, while being negative for both penetrating Ge atoms. We interpret these results in terms of the charge distribution in the clusters under consideration.Computation, Vol. 4, Pages 14: Influence of the Localization of Ge Atoms within the Si(001)(4 × 2) Surface Layer on Semicore One-Electron States

]]>Influence of the Localization of Ge Atoms within the Si(001)(4 × 2) Surface Layer on Semicore One-Electron StatesOlha TkachukMaria TerebinskayaVictor LobanovAlexei Arbuznikovdoi: 10.3390/computation4010014Computation2016-03-03Computation2016-03-0341Article1410.3390/computation4010014http://www.mdpi.com/2079-3197/4/1/14Computation, Vol. 4, Pages 13: Direct Numerical Simulation of Turbulent Channel Flow on High-Performance GPU Computing Systemhttp://www.mdpi.com/2079-3197/4/1/13
The flow of a viscous fluid in a plane channel is simulated numerically following the DNS approach, using a computational code for the numerical integration of the Navier-Stokes equations implemented on a hybrid CPU/GPU computing architecture (for the meaning of symbols and acronyms used, one can refer to the Nomenclature). Three turbulent-flow databases, each representing the turbulent statistically-steady state of the flow at a different value of the Reynolds number, are built up, and a number of statistical moments of the fluctuating velocity field are computed. For turbulent-flow-structure investigation, the vortex-detection technique of the imaginary part of the complex eigenvalue pair of the velocity-gradient tensor is applied to the fluctuating-velocity fields. As a result, and among other types, hairpin vortical structures are unveiled. The processes of evolution that characterize the hairpin vortices in the near-wall region of the turbulent channel are investigated, in particular at one of the three Reynolds numbers tested, with specific attention given to the relationship between the dynamics of the vortical structures and the occurrence of ejection and sweep quadrant events. Interestingly, it is found that the latter events play a preeminent role in the way in which the morphological evolution of a hairpin vortex develops over time, as related in particular to the establishment of symmetric and persistent hairpins. The present results have been obtained from a database that incorporates genuine DNS solutions of the Navier-Stokes equations, without superposition of any synthetic structures in the form of initial and/or boundary conditions for the simulations.Computation, Vol. 4, Pages 13: Direct Numerical Simulation of Turbulent Channel Flow on High-Performance GPU Computing System

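The vortex-detection criterion named in the abstract (the imaginary part of the complex eigenvalue pair of the velocity-gradient tensor, often called the swirling strength λci) can be sketched in a few lines. This is an illustrative reconstruction of the criterion, not the authors' code; the tensors below are hypothetical single-point examples:

```python
import numpy as np

def swirling_strength(grad_u):
    """Return lambda_ci: the imaginary part of the complex-conjugate
    eigenvalue pair of the velocity-gradient tensor (0 if all real)."""
    eigvals = np.linalg.eigvals(grad_u)
    return float(np.max(np.abs(eigvals.imag)))

# Solid-body rotation at angular rate omega: u = (-omega*y, omega*x, 0).
# Its gradient tensor has eigenvalues 0 and +/- i*omega, so lambda_ci = omega.
omega = 2.0
rotation = np.array([[0.0, -omega, 0.0],
                     [omega, 0.0, 0.0],
                     [0.0, 0.0, 0.0]])

# A pure shear has only real (here, all zero) eigenvalues, so lambda_ci = 0:
# the criterion correctly distinguishes swirl from shear.
shear = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])

print(swirling_strength(rotation))  # about 2.0
print(swirling_strength(shear))     # about 0.0
```

In a DNS post-processing pass, this function would be evaluated at every grid point of the fluctuating-velocity gradient field, and regions where λci exceeds a threshold are flagged as vortex cores.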
]]>Direct Numerical Simulation of Turbulent Channel Flow on High-Performance GPU Computing SystemGiancarlo AlfonsiStefania CilibertiMarco ManciniLeonardo Primaveradoi: 10.3390/computation4010013Computation2016-02-26Computation2016-02-2641Article1310.3390/computation4010013http://www.mdpi.com/2079-3197/4/1/13Computation, Vol. 4, Pages 12: Contact Angle Effects on Pore and Corner Arc Menisci in Polygonal Capillary Tubes Studied with the Pseudopotential Multiphase Lattice Boltzmann Modelhttp://www.mdpi.com/2079-3197/4/1/12
In porous media, pore geometry and wettability are determinant factors for capillary flow in drainage or imbibition. Pores are often considered as cylindrical tubes in analytical or computational studies. Such simplification prevents the capture of phenomena occurring in pore corners. Considering the corners of pores is crucial to realistically study capillary flow and to accurately estimate liquid distribution, degree of saturation and dynamic liquid behavior in pores and in porous media. In this study, capillary flow in polygonal tubes is studied with the Shan-Chen pseudopotential multiphase lattice Boltzmann model (LBM). The LB model is first validated through a contact angle test and a capillary intrusion test. Then capillary rise in square and triangular tubes is simulated, and the pore meniscus height is investigated as a function of contact angle θ. Also, the occurrence of fluid in the tube corners, referred to as corner arc menisci, is studied in terms of curvature versus degree of saturation. In polygonal capillary tubes, the number of sides leads to a critical contact angle θc, which is known as a key parameter for the existence of the two configurations. LBM succeeds in simulating the formation of a pore meniscus at θ > θc and the occurrence of corner arc menisci at θ < θc. The curvature of corner arc menisci is known to decrease with increasing saturation and decreasing contact angle, as described by the Mayer-Stowe-Princen (MS-P) theory. We obtain simulation results that are in good qualitative and quantitative agreement with the analytical solutions in terms of height of pore meniscus versus contact angle and curvature of corner arc menisci versus saturation degree. LBM is a suitable and promising tool for a better understanding of the complicated phenomena of multiphase flow in porous media.Computation, Vol. 4, Pages 12: Contact Angle Effects on Pore and Corner Arc Menisci in Polygonal Capillary Tubes Studied with the Pseudopotential Multiphase Lattice Boltzmann Model

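The critical contact angle θc for a regular polygonal tube follows from the corner geometry: by the Concus-Finn condition, liquid invades a corner when θ + α < 90°, where α is the corner half-angle. A minimal sketch of this geometric criterion, assuming regular n-gon cross-sections (an illustration, not the paper's code):

```python
# Critical contact angle (in degrees) below which corner arc menisci can
# exist in a capillary tube with a regular n-gon cross-section.
def critical_contact_angle(n_sides):
    interior = (n_sides - 2) * 180.0 / n_sides  # interior angle of a regular n-gon
    alpha = interior / 2.0                      # corner half-angle
    return 90.0 - alpha                         # Concus-Finn: theta_c = 90 - alpha = 180/n

print(critical_contact_angle(3))  # equilateral triangle: 60.0
print(critical_contact_angle(4))  # square:               45.0
```

This matches the two tube shapes simulated in the study: corner arc menisci in a square tube require θ < 45°, while the wider corners of a triangular tube admit them up to θ < 60°.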
]]>Contact Angle Effects on Pore and Corner Arc Menisci in Polygonal Capillary Tubes Studied with the Pseudopotential Multiphase Lattice Boltzmann ModelSoyoun SonLi ChenQinjun KangDominique DeromeJan Carmelietdoi: 10.3390/computation4010012Computation2016-02-20Computation2016-02-2041Article1210.3390/computation4010012http://www.mdpi.com/2079-3197/4/1/12Computation, Vol. 4, Pages 11: Enhancing Computational Precision for Lattice Boltzmann Schemes in Porous Media Flowshttp://www.mdpi.com/2079-3197/4/1/11
We reassess a method for increasing the computational accuracy of lattice Boltzmann schemes via a simple transformation of the distribution function, originally proposed by Skordos, which was found to give only a marginal increase in accuracy in the original paper. We restate the method, give further important implementation considerations that were missed in the original work, and show that this method can in fact enhance the precision of velocity-field calculations by orders of magnitude and, unlike the usual LB approach, does not lose accuracy when velocities are small. The analysis is framed within the multiple-relaxation-time method for porous media flows; however, the approach extends directly to other lattice Boltzmann schemes. First, we compute the flow between parallel plates and compare the error from the analytical profile for the traditional approach and the transformed scheme using single (4-byte) and double (8-byte) precision. Then we compute the flow inside a complex-structured porous medium and show that the traditional approach using single precision leads to large, systematic errors compared to double precision, whereas the transformed approach avoids this issue whilst maintaining all the computational efficiency benefits of using single precision.Computation, Vol. 4, Pages 11: Enhancing Computational Precision for Lattice Boltzmann Schemes in Porous Media Flows

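The transformation reassessed above amounts to storing the distribution function as a deviation from the constant lattice weights, so that a small velocity signal is not lost to rounding when the populations are held in single precision. A minimal single-site D2Q9 sketch of this idea (an illustration of the precision effect, not the paper's MRT implementation):

```python
import numpy as np

# D2Q9 lattice constants: weights and discrete velocities.
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
e = np.array([(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
              (1, 1), (-1, 1), (-1, -1), (1, -1)], dtype=float)

def feq(rho, u):
    """Standard D2Q9 equilibrium distribution."""
    eu = e @ u
    return w * rho * (1 + 3*eu + 4.5*eu**2 - 1.5*(u @ u))

rho, u = 1.0, np.array([1e-5, 2e-5])  # a slow flow, where cancellation bites
f = feq(rho, u)
p_ref = rho * u                       # exact momentum of the equilibrium

# Naive single precision: store the full distribution in float32. The small
# velocity-dependent part is swamped by the rounding of the large constant part.
p_naive = e.T @ f.astype(np.float32).astype(np.float64)

# Transformed scheme: store only the deviation from the lattice weights.
# Since sum(w_i * e_i) = 0, the momentum equals sum(g_i * e_i), and the tiny
# deviations g_i keep their full relative precision in float32.
g = (f - w).astype(np.float32).astype(np.float64)
p_dev = e.T @ g

err_naive = np.linalg.norm(p_naive - p_ref) / np.linalg.norm(p_ref)
err_dev = np.linalg.norm(p_dev - p_ref) / np.linalg.norm(p_ref)
print(f"relative momentum error: naive={err_naive:.2e}, deviation={err_dev:.2e}")
```

For these hypothetical values, the naive float32 storage loses several significant digits of the momentum, while the deviation form stays near float32 round-off, which is the effect the paper demonstrates at full scale in porous media flows.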
]]>Enhancing Computational Precision for Lattice Boltzmann Schemes in Porous Media FlowsFarrel GrayEdo Boekdoi: 10.3390/computation4010011Computation2016-02-17Computation2016-02-1741Article1110.3390/computation4010011http://www.mdpi.com/2079-3197/4/1/11Computation, Vol. 4, Pages 9: A New Method to Infer Advancement of Saline Front in Coastal Groundwater Systems by 3D: The Case of Bari (Southern Italy) Fractured Aquiferhttp://www.mdpi.com/2079-3197/4/1/9
A new method to study 3D saline front advancement in coastal fractured aquifers is presented. Field groundwater salinity was measured in boreholes of the Bari (Southern Italy) coastal aquifer as a function of depth below the water table. Then, the Ghyben-Herzberg freshwater/saltwater (50%) sharp interface and the saline front position were determined by model simulations of the freshwater flow in groundwater. Afterward, a best-fit procedure between groundwater salinity measurements, at an assigned water depth of 1.0 m in boreholes, and the distances of each borehole from the modelled freshwater/saltwater front was used to convert each position (x, y) in groundwater to the water salinity concentration at a depth of 1.0 m. Moreover, a second best-fit procedure was applied to the salinity measurements in boreholes with depth z. These results provided a grid file (x, y, z, salinity) suitable for plotting the actual Bari aquifer salinity in 3D maps. Subsequently, in order to assess the effects of pumping on the saltwater-freshwater transition zone in the coastal aquifer, the Navier-Stokes (N-S) equations were applied to study transient density-driven flow and salt mass transport into the freshwater of a single fracture. The rate of seawater/freshwater interface advancement given by the N-S solution was used to define the progression of the saline front in Bari groundwater, starting from the actual 3D salinity map. The impact of pumping 335 L·s⁻¹ during the transition period of 112.8 days was easily highlighted on the 3D salinity maps of the Bari aquifer.Computation, Vol. 4, Pages 9: A New Method to Infer Advancement of Saline Front in Coastal Groundwater Systems by 3D: The Case of Bari (Southern Italy) Fractured Aquifer

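The Ghyben-Herzberg sharp-interface relation used above ties the interface depth below sea level to the freshwater head through the density contrast. A one-function sketch with typical density values (illustrative, not the paper's model):

```python
# Ghyben-Herzberg relation: under static equilibrium, the depth z of the
# freshwater/saltwater sharp interface below sea level is
#   z = rho_f / (rho_s - rho_f) * h,
# where h is the freshwater head above sea level.
RHO_FRESH = 1000.0  # kg/m^3, freshwater
RHO_SALT = 1025.0   # kg/m^3, typical seawater

def interface_depth(h, rho_f=RHO_FRESH, rho_s=RHO_SALT):
    """Interface depth (m) below sea level for freshwater head h (m)."""
    return rho_f / (rho_s - rho_f) * h

print(interface_depth(1.0))  # 40.0: the classic ~40x rule of thumb
print(interface_depth(0.5))  # 20.0
```

This is why even a modest lowering of the freshwater head by pumping, as studied in the paper, moves the interface upward by roughly forty times as much, pushing the saline front inland.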
]]>A New Method to Infer Advancement of Saline Front in Coastal Groundwater Systems by 3D: The Case of Bari (Southern Italy) Fractured AquiferCostantino MasciopintoDomenico Palmiottadoi: 10.3390/computation4010009Computation2016-02-16Computation2016-02-1641Article910.3390/computation4010009http://www.mdpi.com/2079-3197/4/1/9Computation, Vol. 4, Pages 8: CFD Simulation and Experimental Analyses of a Copper Wire Woven Heat Exchanger Design to Improve Heat Transfer and Reduce the Size of Adsorption Bedshttp://www.mdpi.com/2079-3197/4/1/8
The chief objective of this study is the proposed design and CFD simulation of a new compacted copper-wire-woven fin heat exchanger and silica gel adsorbent bed used as part of an adsorption refrigeration system. This type of heat exchanger design has a large surface area because of the wire-woven fin design. It is anticipated that this will help improve the coefficient of performance (COP) of the adsorption phase and increase the heat transfer in this system arrangement. To study the heat transfer between the fins and the porous adsorbent reactor bed, two experiments were carried out and matched to computational fluid dynamics (CFD) results.Computation, Vol. 4, Pages 8: CFD Simulation and Experimental Analyses of a Copper Wire Woven Heat Exchanger Design to Improve Heat Transfer and Reduce the Size of Adsorption Beds

]]>CFD Simulation and Experimental Analyses of a Copper Wire Woven Heat Exchanger Design to Improve Heat Transfer and Reduce the Size of Adsorption BedsJohn Whitedoi: 10.3390/computation4010008Computation2016-02-06Computation2016-02-0641Review810.3390/computation4010008http://www.mdpi.com/2079-3197/4/1/8Computation, Vol. 4, Pages 7: Applications of Computational Modelling and Simulation of Porous Medium in Tissue Engineeringhttp://www.mdpi.com/2079-3197/4/1/7
In tissue engineering, porous biodegradable scaffolds are used as templates for regenerating required tissues. With the advances in computational tools, many modeling approaches have been considered. For example, fluid flow through a porous medium can be modeled using the Brinkman equation, where the permeability of the porous medium has to be defined. In this review, we summarize various models recently reported for defining permeability, and non-invasive pressure drop monitoring as a tool to validate dynamic changes in permeability. We also summarize some models used for scaffold degradation and for integrating mass transport into the simulation.Computation, Vol. 4, Pages 7: Applications of Computational Modelling and Simulation of Porous Medium in Tissue Engineering

]]>Applications of Computational Modelling and Simulation of Porous Medium in Tissue EngineeringCarrie GermanSundararajan Madihallydoi: 10.3390/computation4010007Computation2016-02-06Computation2016-02-0641Article710.3390/computation4010007http://www.mdpi.com/2079-3197/4/1/7Computation, Vol. 4, Pages 10: Localized Polycentric Orbital Basis Set for Quantum Monte Carlo Calculations Derived from the Decomposition of Kohn-Sham Optimized Orbitalshttp://www.mdpi.com/2079-3197/4/1/10
In this work, we present a simple decomposition scheme of the Kohn-Sham optimized orbitals which is able to provide a reduced basis set, made of localized polycentric orbitals, specifically designed for Quantum Monte Carlo. The decomposition follows a standard density functional theory (DFT) calculation and is based on atomic connectivity and shell structure. The new orbitals are used to construct a compact correlated wave function of the Slater–Jastrow form, which is optimized at the Variational Monte Carlo level and then used as the trial wave function for a final, accurate Diffusion Monte Carlo energy calculation. We are able, in this way, to capture the basic information on the real system brought by the Kohn-Sham orbitals and use it for the calculation of the ground state energy within a strictly variational method. Here, we show test calculations performed on some small selected systems to assess the validity of the proposed approach for molecular fragmentation, for the calculation of the barrier height of a chemical reaction and for the determination of intermolecular potentials. The final Diffusion Monte Carlo energies are in very good agreement with the best literature data within chemical accuracy.Computation, Vol. 4, Pages 10: Localized Polycentric Orbital Basis Set for Quantum Monte Carlo Calculations Derived from the Decomposition of Kohn-Sham Optimized Orbitals

]]>Localized Polycentric Orbital Basis Set for Quantum Monte Carlo Calculations Derived from the Decomposition of Kohn-Sham Optimized OrbitalsClaudio AmovilliFranca FlorisAndrea Grisafidoi: 10.3390/computation4010010Computation2016-02-06Computation2016-02-0641Article1010.3390/computation4010010http://www.mdpi.com/2079-3197/4/1/10Computation, Vol. 4, Pages 6: Computation of the Likelihood of Joint Site Frequency Spectra Using Orthogonal Polynomialshttp://www.mdpi.com/2079-3197/4/1/6
In population genetics, information about evolutionary forces, e.g., mutation, selection and genetic drift, is often inferred from DNA sequence information. Generally, DNA consists of two long strands of nucleotides or sites that pair via the complementary bases cytosine and guanine (C and G), on the one hand, and adenine and thymine (A and T), on the other. With whole genome sequencing, most genomic information stored in the DNA has become available for multiple individuals of one or more populations, at least in humans and model species, such as fruit flies of the genus Drosophila. In a genome-wide sample of L sites for M (haploid) individuals, the state of each site may be made binary by binning the complementary bases, e.g., C with G to C/G, and contrasting C/G to A/T, to obtain a “site frequency spectrum” (SFS). Two such samples, of either a single population from different time-points or two related populations from a single time-point, are called joint site frequency spectra (joint SFS). While mathematical models describing the interplay of mutation, drift and selection have been available for more than 80 years, the calculation of exact likelihoods from joint SFS is difficult. Sufficient statistics for inference of, e.g., mutation or selection parameters that would make use of all the information in the genomic data are rarely available. Hence, suites of crude summary statistics are often combined in simulation-based computational approaches. In this article, we use a bi-allelic boundary-mutation and drift population genetic model to compute the transition probabilities of joint SFS using orthogonal polynomials. This allows inference of population genetic parameters, such as the mutation rate (scaled by the population size) and the time separating the two samples. 
We apply this inference method to a population dataset of neutrally-evolving short intronic sites from six DNA sequences of the fruit fly Drosophila melanogaster and the reference sequence of the related species Drosophila sechellia.Computation, Vol. 4, Pages 6: Computation of the Likelihood of Joint Site Frequency Spectra Using Orthogonal Polynomials

]]>Computation of the Likelihood of Joint Site Frequency Spectra Using Orthogonal PolynomialsClaus VoglJuraj Bergmandoi: 10.3390/computation4010006Computation2016-02-04Computation2016-02-0441Article610.3390/computation4010006http://www.mdpi.com/2079-3197/4/1/6Computation, Vol. 4, Pages 5: Extracting Conformational Ensembles of Small Molecules from Molecular Dynamics Simulations: Ampicillin as a Test Casehttp://www.mdpi.com/2079-3197/4/1/5
The accurate and exhaustive description of the conformational ensemble sampled by small molecules in solution, possibly under different physiological conditions, is of primary interest in many fields of medicinal chemistry and computational biology. Recently, we built an online database of compounds with antimicrobial properties, where we provide all-atom force-field parameters and a set of molecular properties, including representative structures extracted from cluster analysis over μs-long molecular dynamics (MD) trajectories. In the present work, we used a medium-sized antibiotic from our sample, namely ampicillin, to assess the quality of the conformational ensemble. To this aim, we compared the conformational landscape extracted from previous unbiased MD simulations to those obtained by means of Replica Exchange MD (REMD) and those originating from three freely available conformer generation tools widely adopted in computer-aided drug design. In addition, for different charge/protonation states of ampicillin, we made available force-field parameters and static/dynamic properties derived from both Density Functional Theory and MD calculations. For the specific system investigated here, we found that: (i) the conformational statistics extracted from plain MD simulations are consistent with those obtained from REMD simulations; and (ii) overall, our MD-based approach performs slightly better than any of the conformer generator tools if one takes into account both the diversity of the generated conformational set and the ability to reproduce experimentally determined structures.Computation, Vol. 4, Pages 5: Extracting Conformational Ensembles of Small Molecules from Molecular Dynamics Simulations: Ampicillin as a Test Case

The accurate and exhaustive description of the conformational ensemble sampled by small molecules in solution, possibly at different physiological conditions, is of primary interest in many fields of medicinal chemistry and computational biology. Recently, we have built an on-line database of compounds with antimicrobial properties, where we provide all-atom force-field parameters and a set of molecular properties, including representative structures extracted from cluster analysis over μs-long molecular dynamics (MD) trajectories. In the present work, we used a medium-sized antibiotic from our sample, namely ampicillin, to assess the quality of the conformational ensemble. To this aim, we compared the conformational landscape extracted from previous unbiased MD simulations to those obtained by means of Replica Exchange MD (REMD) and those originating from three freely-available conformer generation tools widely adopted in computer-aided drug-design. In addition, for different charge/protonation states of ampicillin, we made available force-field parameters and static/dynamic properties derived from both Density Functional Theory and MD calculations. For the specific system investigated here, we found that: (i) the conformational statistics extracted from plain MD simulations is consistent with that obtained from REMD simulations; (ii) overall, our MD-based approach performs slightly better than any of the conformer generator tools if one takes into account both the diversity of the generated conformational set and the ability to reproduce experimentally-determined structures.

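Criterion (ii) above — how well a generated conformer set reproduces reference structures — can be sketched as a nearest-conformer RMSD score. This is only an illustration, not the paper's actual protocol: the function and array names are ours, and plain coordinate RMSD is used with no structural superposition.

```python
import numpy as np

def coverage_score(reference_confs, generated_confs):
    """For each reference conformer, the RMSD to the closest generated conformer.

    Both arrays have shape (n_conf, n_atoms, 3). No superposition is
    performed, so this is only a sketch of the "reproduce experimentally
    determined structures" criterion.
    """
    scores = []
    for ref in reference_confs:
        rmsds = [np.sqrt(((ref - gen) ** 2).sum(axis=1).mean())
                 for gen in generated_confs]
        scores.append(min(rmsds))
    return np.array(scores)
```

A score near zero for every reference conformer means the generated ensemble covers the known structures; a second, complementary metric (e.g. mean pairwise RMSD within the generated set) would capture the diversity criterion.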
Extracting Conformational Ensembles of Small Molecules from Molecular Dynamics Simulations: Ampicillin as a Test Case
Giuliano Malloci, Giovanni Serra, Andrea Bosin and Attilio Vargiu
Computation, Vol. 4, Article 5; doi: 10.3390/computation4010005; published 2016-01-26
http://www.mdpi.com/2079-3197/4/1/5

Computation, Vol. 4, Pages 4: Acknowledgement to Reviewers of Computation in 2015
http://www.mdpi.com/2079-3197/4/1/4
The editors of Computation would like to express their sincere gratitude to the following reviewers for assessing manuscripts in 2015. [...]

Acknowledgement to Reviewers of Computation in 2015
Computation Editorial Office
Computation, Vol. 4, Editorial 4; doi: 10.3390/computation4010004; published 2016-01-22
http://www.mdpi.com/2079-3197/4/1/4

Computation, Vol. 4, Pages 3: A Test of Various Partial Atomic Charge Models for Computations on Diheteroaryl Ketones and Thioketones
http://www.mdpi.com/2079-3197/4/1/3
The effective use of partial atomic charge models is essential in molecular computations for purposes such as a simplified representation of the global charge distribution in a molecule and the prediction of its conformational behavior. In this work, ten of the most popular models of partial atomic charge are considered; these models operate on the molecular wave functions/electron densities of five diheteroaryl ketones and their thiocarbonyl analogs. The ten models are tested to assess their usefulness in achieving the aforementioned purposes for the title compounds. The following criteria are used in the test: (1) how accurately the models reproduce the molecular dipole moments of the conformers of the investigated compounds; (2) whether the models correctly determine the preferred conformer, as well as the ordering of higher-energy conformers, for each compound. The results of the test indicate that the Merz-Kollman-Singh (MKS) and Hu-Lu-Yang (HLY) models approximate the magnitude of the molecular dipole moments with the greatest accuracy. The natural partial atomic charges perform best in determining the conformational behavior of the investigated compounds. These findings may constitute important support for the effective computation of electrostatic effects occurring within and between molecules of the compounds in question, as well as of similar compounds.

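Criterion (1) rests on the standard point-charge approximation of the molecular dipole, μ = Σᵢ qᵢ rᵢ, which is origin-independent for a neutral molecule. The following is a minimal sketch of that formula (the function name, the toy two-point geometry, and the unit choices are our own illustrations, not taken from the paper):

```python
import numpy as np

E_ANGSTROM_TO_DEBYE = 4.80320  # 1 e*Angstrom expressed in Debye

def dipole_from_charges(charges, coords):
    """Molecular dipole vector (in Debye) from partial atomic charges.

    charges: (n,) partial charges in units of the elementary charge e
    coords:  (n, 3) atomic positions in Angstrom
    For a neutral molecule the result does not depend on the origin.
    """
    charges = np.asarray(charges, dtype=float)
    coords = np.asarray(coords, dtype=float)
    return E_ANGSTROM_TO_DEBYE * (charges @ coords)

# toy model: +0.5 e and -0.5 e separated by 1 Angstrom along z
mu = dipole_from_charges([0.5, -0.5], [[0.0, 0.0, 1.0], [0.0, 0.0, 0.0]])
print(np.linalg.norm(mu))  # ~2.40 D
```

Comparing |μ| computed this way against the dipole from the full electron density is exactly the kind of accuracy check the first test criterion describes.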
A Test of Various Partial Atomic Charge Models for Computations on Diheteroaryl Ketones and Thioketones
Piotr Matczak
Computation, Vol. 4, Article 3; doi: 10.3390/computation4010003; published 2016-01-19
http://www.mdpi.com/2079-3197/4/1/3

Computation, Vol. 4, Pages 2: Modeling Groundwater Flow in Heterogeneous Porous Media with YAGMod
http://www.mdpi.com/2079-3197/4/1/2
Modeling flow and transport in porous media requires the management of complexities related both to physical processes and to subsurface heterogeneity. A thorough approach needs a great number of spatially distributed phenomenological parameters, which are seldom measured in the field. For instance, modeling a phreatic aquifer under high water extraction rates is very challenging, because it requires the simulation of variably saturated flow. Here, 3D steady groundwater flow is modeled with YAGMod (yet another groundwater flow model), a model based on a conservative finite-difference scheme and implemented in a computer code developed in Fortran90. YAGMod also simulates the presence of partially saturated or dry cells. The proposed algorithm and other alternative methods developed to manage dry cells in depleted aquifers are analyzed and compared on a simple test case. Different approaches yield different solutions, among which it is not possible to select the best one on the basis of physical arguments alone. A possible advantage of YAGMod is that no additional non-physical parameter is needed to overcome the numerical difficulties that arise when handling drained cells. YAGMod also includes a module that identifies the conductivity field of a phreatic aquifer by solving an inverse problem with the comparison model method.

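The conservative finite-difference idea behind such codes can be illustrated in one dimension: fluxes K·dh/dx are balanced on cell interfaces, giving a tridiagonal system for the heads. This is a deliberately minimal sketch (1D, steady, fully saturated, Dirichlet boundaries, no dry-cell handling), not YAGMod itself; all names and units are hypothetical.

```python
import numpy as np

def steady_head_1d(K, h_left, h_right):
    """Steady 1-D saturated groundwater head on a uniform grid.

    K: (n+1,) hydraulic conductivities on the cell interfaces
    Solves the conservative finite-difference form of d/dx(K dh/dx) = 0
    with fixed heads h_left and h_right at the two boundaries.
    """
    n = len(K) - 1          # number of interior nodes
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        A[i, i] = K[i] + K[i + 1]          # flux balance at node i
        if i > 0:
            A[i, i - 1] = -K[i]
        if i < n - 1:
            A[i, i + 1] = -K[i + 1]
    b[0] += K[0] * h_left                   # boundary fluxes enter the rhs
    b[-1] += K[n] * h_right
    return np.linalg.solve(A, b)

# uniform conductivity gives the expected linear head profile
print(steady_head_1d(np.ones(4), 10.0, 0.0))  # [7.5, 5.0, 2.5]
```

The dry-cell difficulty discussed in the abstract arises when a computed head falls below the cell bottom, so the saturated thickness (and hence the interface conductance) would become non-physical; the paper's point is that YAGMod handles this without introducing an extra tuning parameter.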
Modeling Groundwater Flow in Heterogeneous Porous Media with YAGMod
Laura Cattaneo, Alessandro Comunian, Giovanna de Filippis, Mauro Giudici and Chiara Vassena
Computation, Vol. 4, Article 2; doi: 10.3390/computation4010002; published 2015-12-29
http://www.mdpi.com/2079-3197/4/1/2

Computation, Vol. 4, Pages 1: Reduced Numerical Model for Methane Hydrate Formation under Conditions of Variable Salinity. Time-Stepping Variants and Sensitivity
http://www.mdpi.com/2079-3197/4/1/1
In this paper, we consider a reduced computational model of methane hydrate formation under conditions of variable salinity, and give details on the discretization and the implementation of phase equilibria. We describe three time-stepping variants — implicit, semi-implicit, and sequential — and compare their accuracy and efficiency depending on the spatial and temporal discretization parameters. We also study the sensitivity of the model to the simulation parameters, and in particular to the reduced phase equilibria model.

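The trade-off between such time-stepping variants is easiest to see on a scalar model problem du/dt = -k·u: an explicit (forward Euler) step is cheap but only conditionally stable, while an implicit (backward Euler) step is unconditionally stable at the cost of solving for the new state. This is a generic illustration of that distinction, not the paper's hydrate model.

```python
def explicit_step(u, k, dt):
    # forward Euler: u_{n+1} = u_n - dt*k*u_n; stable only for dt < 2/k
    return u * (1.0 - dt * k)

def implicit_step(u, k, dt):
    # backward Euler: solve u_{n+1} = u_n - dt*k*u_{n+1}  =>  u_n / (1 + dt*k)
    # remains positive and bounded for any dt > 0
    return u / (1.0 + dt * k)
```

With k = 10 and dt = 1 the explicit step overshoots to a negative value, while the implicit step decays monotonically; a semi-implicit or sequential scheme treats only the stiff terms (here, the phase-equilibria coupling) implicitly to balance cost against stability.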
Reduced Numerical Model for Methane Hydrate Formation under Conditions of Variable Salinity. Time-Stepping Variants and Sensitivity
Malgorzata Peszynska, Francis Medina, Wei-Li Hong and Marta Torres
Computation, Vol. 4, Article 1; doi: 10.3390/computation4010001; published 2015-12-24
http://www.mdpi.com/2079-3197/4/1/1

Computation, Vol. 3, Pages 701-713: Exact Likelihood Calculation under the Infinite Sites Model
http://www.mdpi.com/2079-3197/3/4/701
A key parameter in population genetics is the scaled mutation rate θ = 4Nμ, where N is the effective haploid population size and μ is the mutation rate per haplotype per generation. While exact likelihood inference is notoriously difficult in population genetics, we propose a novel approach to compute a first-order accurate likelihood of θ that is based on dynamic programming under the infinite sites model without recombination. The parameter θ may be either constant, i.e., time-independent, or time-dependent, which allows for changes of demography and deviations from neutral equilibrium. For time-independent θ, the performance is compared to the approach in Griffiths and Tavaré’s work “Simulating Probability Distributions in the Coalescent” (Theor. Popul. Biol. 1994, 46, 131–159), which is based on importance sampling and implemented in the “genetree” program. Roughly, the proposed method is computationally fast when n × θ < 100, where n is the sample size. For time-dependent θ(t), we analyze a simple demographic model with a single change in θ(t). In this case, the ancestral and current θ need to be estimated, as well as the time of the change. To our knowledge, this is the first accurate computation of a likelihood in the infinite sites model with non-equilibrium demography.

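For orientation, the classical moment-based alternative to likelihood inference of θ under the infinite sites model is Watterson's estimator, which uses E[S] = θ·aₙ, where S is the number of segregating sites and aₙ = Σᵢ₌₁ⁿ⁻¹ 1/i. This sketch shows only that simple estimator, not the dynamic-programming likelihood the paper proposes:

```python
def watterson_theta(num_segregating_sites, sample_size):
    """Watterson's moment estimator of theta = 4*N*mu.

    Under the neutral infinite sites model, E[S] = theta * a_n with
    a_n = sum_{i=1}^{n-1} 1/i, so theta is estimated as S / a_n.
    """
    if sample_size < 2:
        raise ValueError("need at least two sampled haplotypes")
    a_n = sum(1.0 / i for i in range(1, sample_size))
    return num_segregating_sites / a_n
```

Unlike a full likelihood, this estimator assumes a constant θ and neutral equilibrium; the paper's contribution is precisely to go beyond that, computing an accurate likelihood even for a time-dependent θ(t).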
Exact Likelihood Calculation under the Infinite Sites Model
Muhammad Faisal, Andreas Futschik and Claus Vogl
Computation, Vol. 3, Pages 701-713; doi: 10.3390/computation3040701; published 2015-12-11
http://www.mdpi.com/2079-3197/3/4/701

Computation, Vol. 3, Pages 687-700: Molecular Simulation of Shale Gas Adsorption and Diffusion in Clay Nanopores
http://www.mdpi.com/2079-3197/3/4/687
The present work aims to study the adsorption behavior and dynamical properties of CH4 in clay slit pores, with or without cation-exchange structures, at pore sizes of 1.0 nm–4.0 nm, using grand canonical Monte Carlo (GCMC) and molecular dynamics (MD) methods. The adsorption isotherms of CH4 were investigated by GCMC simulations at different temperatures and various pore sizes. In the montmorillonite (MMT) clays without a cation-exchange structure, the density profiles show that the molecules preferentially adsorb onto the surface, and only a single adsorbed layer is observed. The general trend within slit pores is that the adsorbed amount increases with increasing pore width; however, larger pores exhibit lower excess density and smaller pores exhibit higher excess density. Preloaded water reduces CH4 sorption. The in-plane self-diffusion coefficient of CH4, investigated by MD simulations combined with the Einstein relation, increases rapidly with pore size at low pressure. Under the given conditions, temperature has little influence on the in-plane self-diffusion coefficient. In the MMT clays with a cation-exchange structure, cation exchange has little effect on CH4 adsorption and self-diffusion.

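The in-plane self-diffusion coefficient mentioned above follows from the Einstein relation in two dimensions, MSD(t) = 4·D·t, where the mean-squared displacement is averaged over particles in the plane of the pore walls. The sketch below uses a single time origin for simplicity (a production analysis would average over many origins); the function and array names are our own.

```python
import numpy as np

def inplane_diffusion_coefficient(positions_xy, dt):
    """Estimate the in-plane self-diffusion D from MSD(t) = 4*D*t.

    positions_xy: (n_frames, n_particles, 2) in-plane trajectory
    dt:           time between frames
    Single time origin only; a real analysis averages over many origins.
    """
    disp = positions_xy - positions_xy[0]            # displacement from t = 0
    msd = (disp ** 2).sum(axis=2).mean(axis=1)       # average over particles
    t = np.arange(len(msd)) * dt
    slope = np.polyfit(t, msd, 1)[0]                 # linear fit MSD vs t
    return slope / 4.0                               # 2D Einstein relation
```

Dividing by 4 (rather than 6) reflects the two in-plane degrees of freedom; confinement by the slit walls makes the out-of-plane component non-diffusive, which is why the abstract reports an in-plane coefficient.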
Molecular Simulation of Shale Gas Adsorption and Diffusion in Clay Nanopores
Hongguang Sui, Jun Yao and Lei Zhang
Computation, Vol. 3, Pages 687-700; doi: 10.3390/computation3040687; published 2015-12-11
http://www.mdpi.com/2079-3197/3/4/687

Computation, Vol. 3, Pages 670-686: Multiscale Simulations for Coupled Flow and Transport Using the Generalized Multiscale Finite Element Method
http://www.mdpi.com/2079-3197/3/4/670
In this paper, we develop a mass-conservative multiscale method for coupled flow and transport in heterogeneous porous media. We consider a coupled system consisting of a convection-dominated transport equation and a flow equation. We construct a coarse-grid solver based on the Generalized Multiscale Finite Element Method (GMsFEM) for the coupled system. In particular, multiscale basis functions are constructed from snapshot spaces for the pressure and concentration equations and from local spectral decompositions in those snapshot spaces. The resulting approach uses a few multiscale basis functions in each coarse block (for both the pressure and the concentration) to solve the coupled system. We use the mixed framework, which ensures mass conservation. Our main contributions are: (1) the development of a mass-conservative GMsFEM for coupled flow and transport; (2) the development of a robust multiscale method for convection-dominated transport problems by choosing appropriate test and trial spaces within a Petrov-Galerkin mixed formulation. We present numerical results for several heterogeneous permeability fields. They show that with only a few basis functions per coarse block, we can achieve a good approximation.

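The core dimension-reduction step — extracting a few dominant modes from a local snapshot space on each coarse block — can be caricatured with a plain SVD. Note this is a simplified stand-in: GMsFEM uses a problem-dependent local spectral problem (weighted by the permeability field), not a bare SVD, and the names below are ours.

```python
import numpy as np

def reduced_local_basis(snapshots, n_basis):
    """Select a few dominant modes from a local snapshot matrix.

    snapshots: (n_dof, n_snap) local solutions on one coarse block.
    A thin SVD stands in for the problem-dependent spectral decomposition
    used in GMsFEM; columns of the result are orthonormal basis vectors.
    """
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :n_basis]
```

The payoff is the same in spirit: the coarse system is assembled over only `n_basis` functions per block instead of the full fine-grid degrees of freedom, which is what makes the coupled solve tractable.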
Multiscale Simulations for Coupled Flow and Transport Using the Generalized Multiscale Finite Element Method
Eric Chung, Yalchin Efendiev, Wing Leung and Jun Ren
Computation, Vol. 3, Pages 670-686; doi: 10.3390/computation3040670; published 2015-12-11
http://www.mdpi.com/2079-3197/3/4/670

Computation, Vol. 3, Pages 657-669: Optical Properties of Silicon-Rich Silicon Nitride (SixNyHz) from First Principles
http://www.mdpi.com/2079-3197/3/4/657
The real and imaginary parts of the complex refractive index of SixNyHz have been calculated from first principles. Optical spectra for the reflectivity, absorption coefficient, energy-loss function (ELF), and refractive index were obtained. The results for Si3N4 are in agreement with the available theoretical and experimental results. To understand the electron energy loss mechanism in Si-rich silicon nitride, the influence on the ELF of the Si/N ratio, of the positions of the excess Si atoms, and of H in the bulk and on the surface has been investigated. It is found that all defects, such as dangling bonds in the bulk and at surfaces, increase the intensity of the ELF in the low-energy range (below 10 eV). H in the bulk and on the surface has a healing effect, reducing the intensity of the loss peaks by saturating the dangling bonds. Electronic structure analysis has confirmed the origin of the changes in the ELF and demonstrated that these changes are affected not only by the composition but also by the microstructure of the materials. The results can be used to tailor the optical properties, in this case the ELF of Si-rich Si3N4, which is essential for secondary electron emission applications.

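The energy-loss function is derived directly from the complex dielectric function, ELF = Im(-1/ε) = ε₂ / (ε₁² + ε₂²), so any first-principles code that yields ε₁(ω) and ε₂(ω) gives the ELF by pointwise arithmetic. A minimal sketch (array names are ours):

```python
import numpy as np

def energy_loss_function(eps1, eps2):
    """ELF = Im(-1/eps) = eps2 / (eps1**2 + eps2**2).

    eps1, eps2: real and imaginary parts of the dielectric function,
    scalars or arrays sampled on a photon-energy grid.
    """
    eps1 = np.asarray(eps1, dtype=float)
    eps2 = np.asarray(eps2, dtype=float)
    return eps2 / (eps1 ** 2 + eps2 ** 2)
```

Peaks of the ELF mark collective (plasmon-like) losses where ε₁ passes through zero with small ε₂, which is why defect states that add low-energy weight to ε₂ show up as the extra sub-10 eV intensity described above.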
Optical Properties of Silicon-Rich Silicon Nitride (SixNyHz) from First Principles
Shu Tao, Anne Theulings, Violeta Prodanović, John Smedley and Harry van der Graaf
Computation, Vol. 3, Pages 657-669; doi: 10.3390/computation3040657; published 2015-12-08
http://www.mdpi.com/2079-3197/3/4/657

Computation, Vol. 3, Pages 616-656: Assessment of Density-Functional Tight-Binding Ionization Potentials and Electron Affinities of Molecules of Interest for Organic Solar Cells Against First-Principles GW Calculations
http://www.mdpi.com/2079-3197/3/4/616
Ionization potentials (IPs) and electron affinities (EAs) are important input quantities for most models used to calculate the open-circuit voltage (Voc) of organic solar cells. We assess the semi-empirical density-functional tight-binding (DFTB) method, with the third-order self-consistent charge (SCC) correction and the 3ob parameter set (the third-order DFTB (DFTB3) organic and biochemistry parameter set), against experiments (for smaller molecules) and against first-principles GW (Green’s function, G, times the screened potential, W) calculations (for larger molecules of interest in organic electronics) for the calculation of IPs and EAs. Since GW calculations are relatively new for molecules of this size, we have also taken care to validate them against experiments. As expected, DFTB behaves very much like density-functional theory (DFT), but with some loss of accuracy in predicting IPs and EAs. For small molecules, the best results were found with ΔSCF (Δ self-consistent field) SCC-DFTB calculations for first IPs (good to ±0.649 eV). When considering several IPs of the same molecule, it is convenient to use the negatives of the orbital energies (which we refer to as Koopmans’ theorem (KT) IPs) as an indication of trends. Linear regression analysis shows that KT SCC-DFTB IPs are nearly as accurate as ΔSCF SCC-DFTB IPs (±0.852 eV for first IPs, but ±0.706 eV for all of the IPs considered here) for small molecules. For larger molecules, SCC-DFTB was also the ideal choice, with IP/EA errors of ±0.489/0.740 eV from ΔSCF calculations and of ±0.326/0.458 eV from (KT) orbital energies. Interestingly, the linear least-squares fit for the KT IPs of the larger molecules also proves to have good predictive value for the lower-energy KT IPs of the smaller molecules, with significant deviations appearing only for IPs of about 15–20 eV or larger. We believe that this quantitative analysis of the errors in SCC-DFTB IPs and EAs may be of interest to other researchers interested in DFTB investigations of large and complex problems, such as those encountered in organic electronics.
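The linear-regression assessment described above — fitting cheap KT (orbital-energy) IPs against higher-level reference values and checking the residual spread — can be sketched in a few lines. The numbers below are synthetic placeholders, not data from the paper:

```python
import numpy as np

# Hypothetical (KT DFTB IP, reference GW IP) pairs in eV -- illustrative only
kt_ip  = np.array([6.1, 7.3, 8.0, 9.4, 11.2])
ref_ip = np.array([6.5, 7.8, 8.4, 10.0, 11.9])

# least-squares line mapping KT IPs onto the reference scale
slope, intercept = np.polyfit(kt_ip, ref_ip, 1)
predicted = slope * kt_ip + intercept
rmse = np.sqrt(np.mean((predicted - ref_ip) ** 2))
```

A slope near one with a small RMSE indicates the orbital energies track the reference IPs up to a roughly constant shift, which is what makes the KT values useful "as an indication of trends" despite their larger absolute errors.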