A computational study of the sodium-water reaction at the gas (water) - liquid (sodium) interface has been carried out using an ab initio (first-principles) method. A possible reaction channel has been identified for the stepwise O-H bond dissociations of a single water molecule. The energetics, including the binding energy of a water molecule to the sodium surface, the activation energies of the bond cleavages, and the reaction energies, have been evaluated, and the rate constants of the first and second O-H bond cleavages have been compared. The results are used as the basis for constructing the chemical reaction model used in SERAPHIM, a multi-dimensional sodium-water reaction code being developed by JAEA toward the safety assessment of the steam generator (SG) in a sodium-cooled fast reactor (SFR). (author)
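The comparison of the two O-H cleavage rate constants reduces to Arrhenius kinetics; a minimal sketch in Python, with a purely illustrative prefactor, barrier heights, and temperature (assumed values, not the paper's computed ones):

```python
import math

R = 8.314462618e-3  # gas constant, kJ/(mol*K)

def arrhenius(prefactor, ea_kj_mol, temperature):
    """Rate constant k = A * exp(-Ea / (R T))."""
    return prefactor * math.exp(-ea_kj_mol / (R * temperature))

# Hypothetical numbers for illustration only: the first O-H cleavage
# is assumed to have a lower barrier than the second.
A = 1.0e13          # 1/s, assumed common prefactor
ea_first = 20.0     # kJ/mol, assumed
ea_second = 60.0    # kJ/mol, assumed
T = 600.0           # K, representative temperature (assumed)

k1 = arrhenius(A, ea_first, T)
k2 = arrhenius(A, ea_second, T)
ratio = k1 / k2     # equals exp((Ea2 - Ea1) / (R T))
```

With any barrier ordering Ea1 < Ea2 the first cleavage is faster, and the ratio grows exponentially as the temperature drops.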

The EQ3NR/EQ6 geochemical modeling code was used to simulate the reaction of several shale mineralogies with different groundwater compositions in order to elucidate changes that may occur in the groundwater compositions and in the rock mineralogies and compositions under conditions that may be encountered in a high-level radioactive waste repository. Shales with primarily illitic or smectitic compositions were the focus of this study. The reactions were run from the ambient temperatures of the groundwaters up to temperatures as high as 250 °C, the approximate maximum temperature expected in a repository. All modeling assumed that equilibrium was achieved and treated the rock and water assemblage as a closed system. Graphite was used as a proxy mineral for organic matter in the shales. The results show that the presence of even a very small amount of reducing mineral has a large influence on the redox state of the groundwaters, and that pyrite and graphite give essentially the same results, with slight differences in dissolved C, Fe and S concentrations. The thermodynamic database is at present inadequate to fully evaluate the speciation of dissolved carbon, owing to the paucity of thermodynamic data for organic compounds. In the illitic cases the groundwaters resulting from interaction at elevated temperatures are acidic, while in the smectitic cases they remain alkaline, although the final equilibrium mineral assemblages are quite similar. 10 refs., 8 figs., 15 tabs.

Chemical reaction-path calculations were used to model the minerals that might have formed at or near the Martian surface as a result of volcano- or meteorite-impact-driven hydrothermal systems; weathering at the Martian surface during an early warm, wet climate; and near-zero or sub-zero °C brine-regolith reactions in the current cold climate. Although the chemical reaction-path calculations carried out do not define the exact mineralogical evolution of the Martian surface over time, they do place valuable geochemical constraints on the types of minerals that formed from an aqueous phase under various surficial and geochemically complex conditions.

Hydrothermal processes are thought to have had significant roles in the development of surficial mineralogies and morphological features on Mars. For example, a significant proportion of the Martian soil could consist of the erosional products of hydrothermally altered impact melt sheets. In this model, impact-driven, vapor-dominated hydrothermal systems altered the surrounding rocks and transported volatiles such as S and Cl to the surface. Further support for impact-driven hydrothermal alteration on Mars was provided by studies of the Ries crater, Germany, where suevite deposits were extensively altered to montmorillonite clays by inferred low-temperature (100-130 °C) hydrothermal fluids. It was also suggested that surface outflow from both impact-driven and volcano-driven hydrothermal systems could generate the valley networks, thereby eliminating the need for an early warm, wet climate. We use computer-driven chemical reaction-path calculations to model chemical processes that were likely associated with postulated Martian hydrothermal systems.

W. C. Gardiner observed that achieving understanding through combustion modeling is limited by the ability to recognize the implications of what has been computed and to draw conclusions about the elementary steps underlying the reaction mechanism. This difficulty can be overcome in part by making better use of reaction-path analysis in the context of multidimensional flame simulations. Following a survey of current practice, an integral reaction flux is formulated in terms of conserved scalars that can be calculated in a fully automated way. Conditional analyses are then introduced, and a taxonomy for bidirectional path analysis is explored. Many examples illustrate the resulting path analysis and uncover some new results about nonpremixed methane-air laminar jets.

The project aimed to demonstrate that geothermometric predictions can be improved through the application of multi-element reaction-path modeling that accounts for lithologic and tectonic settings, as well as for biological influences on geochemical temperature indicators. The limited use of chemical signatures by individual traditional geothermometers in developing reservoir temperature estimates may have constrained their reliability for the evaluation of potential geothermal resources. This project was therefore intended to build a geothermometry tool that integrates multi-component reaction-path modeling with a process-optimization capability and that can be applied to dilute, low-temperature water samples to consistently predict reservoir temperature to within ±30 °C. The project was also intended to evaluate the extent to which microbiological processes can modulate the geochemical signals in some thermal waters and influence geothermometric predictions.

We report reaction paths for two prototypical chemical reactions: Li + HF, an electron transfer reaction, and OH + H2, an abstraction reaction. For the first reaction we consider the connection between the energetic terms in the reaction-path Hamiltonian and the electronic changes which occur upon reaction. For the second reaction we consider the treatment of vibrational effects in chemical reactions within the reaction-path formalism. 30 refs., 9 figs.

The CALPHAD (calculation of phase diagrams) method is used in combination with selected experimental investigations to derive reaction paths in multicomponent systems. The method is illustrated by applying computerized thermodynamic databases and suitable software to explain quantitatively the thermal degradation of precursor-derived Si-C-N ceramics and the nitridation of titanium carbide. Reaction sequences in the Si3N4-SiC-TiCxN1-x-C-N system are illustrated by graphical representation of compatibility regions and the indicated reaction paths. From these results the experimentally known microstructure development of TiC-reinforced Si3N4 ceramics is explained, and quantitative information is provided to optimize the microstructure of such materials. The utility of reaction paths for understanding rapid solidification processes is shown by the example of AZ-type Mg casting alloys. (orig.)

Oman Drilling Project hole BT1B drilled 300 meters through the basal thrust of the Samail ophiolite. The first 200 meters of this hole are dominated by listvenites (completely carbonated peridotites) and serpentinites. Below 200 meters the hole is mainly composed of metasediments and metavolcanics. This core provides a unique record of interaction between (a) mantle peridotite in the leading edge of the mantle wedge and (b) hydrous, CO2-rich fluids derived from subducting lithologies similar to those in the metamorphic sole. We used EQ3/6 to simulate a reaction path in which hydrous fluid in equilibrium with qtz + calcite + feldspar + chlorite or smectite reacts with initially fresh peridotite at 100 °C (the estimated temperature of alteration; Falk & Kelemen, GCA 2015) and 5 kb. Water was first equilibrated with minerals observed during core description in the metamorphic sole at 100 °C and 5 kb. This fluid was then reacted with olivine, enstatite and diopside (Mg#90), approximating the average composition of residual mantle peridotite (harzburgite) in Oman. Secondary minerals resulting from complete reaction are then reacted again with the initial fluid in an iterative process, up to water/rock > 1000. Water/rock close to 1 results in complete serpentinization of the peridotite, with chrysotile, brucite and magnetite as the only minerals. Water/rock > 10 produces carbonates, chlorite and talc. Further increasing water/rock to > 100 produces assemblages dominated by carbonates and quartz with minor muscovite, similar to the listvenites of hole BT1B that contain qtz + carbonates + Fe-oxyhydroxides + relict spinel ± chromian muscovite and fuchsite. The results of this preliminary model are consistent with the complex veining history of core from BT1B, with carbonate/iron oxide veins in both listvenites and serpentinites interpreted to be the earliest record of peridotite carbonation after initial serpentinization.

EQ6 is a FORTRAN computer program in the EQ3/6 software package (Wolery, 1979). It calculates reaction paths (chemical evolution) in reacting water-rock and water-rock-waste systems. Speciation in aqueous solution is an integral part of these calculations. EQ6 computes models of titration processes (including fluid mixing), irreversible reaction in closed systems, irreversible reaction in some simple kinds of open systems, and heating or cooling processes, and it also solves "single-point" thermodynamic equilibrium problems. A reaction-path calculation normally involves a sequence of thermodynamic equilibrium calculations. Chemical evolution is driven by a set of irreversible reactions (i.e., reactions out of equilibrium) and/or changes in temperature and/or pressure. These irreversible reactions usually represent the dissolution or precipitation of minerals or other solids. The code computes the appearance and disappearance of phases in solubility equilibrium with the water, and it finds the identities of these phases automatically. The user may specify which potential phases are allowed to form and which are not. There is an option to fix the fugacities of specified gas species, simulating contact with a large external reservoir. Rate laws for irreversible reactions may be either relative rates or actual rates; if any actual rates are used, the calculation has a time frame. Several forms for actual rate laws are programmed into the code. EQ6 is presently able to model both mineral dissolution and growth kinetics.
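The basic reaction-path loop described here, advancing an irreversible reaction by a small increment and then re-solving the equilibrium state, can be illustrated with a toy one-salt system. This is a sketch under assumptions (a single hypothetical salt AB with a made-up solubility product), not EQ6's actual numerics:

```python
def titration_path(ksp, step_moles, n_steps):
    """Toy reaction-path loop: add reactant stepwise, re-equilibrate each step.

    Dissolving a salt AB adds equal amounts of A and B to 1 L of solution;
    once the ion product [A][B] exceeds Ksp, the excess precipitates so the
    fluid stays at solubility equilibrium (cf. EQ6's equilibrium step).
    """
    a = b = 0.0       # dissolved concentrations, mol/L
    solid = 0.0       # moles of precipitated AB
    history = []
    for _ in range(n_steps):
        a += step_moles
        b += step_moles
        if a * b > ksp:
            # precipitate x moles so that (a - x)(b - x) = ksp:
            # x^2 - (a + b) x + (a b - ksp) = 0, take the smaller root
            s, p = a + b, a * b - ksp
            x = (s - (s * s - 4.0 * p) ** 0.5) / 2.0
            a -= x
            b -= x
            solid += x
        history.append((a, b, solid))
    return history
```

Before saturation the solid inventory stays at zero; afterwards each titration step produces more solid while the fluid ion product remains pinned at Ksp, which is the qualitative behavior of a solubility-controlled reaction path.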

A preliminary evaluation capability for two-dimensional groundwater pollution problems was developed as part of the Transport Modeling Task for the Waste Isolation Safety Assessment Program (WISAP). Our approach was to use the data limitations as a guide in setting the level of modeling detail. The PATHS Groundwater Hydrologic Model is the first-level (simplest) idealized hybrid analytical/numerical model for two-dimensional, saturated groundwater flow and single-component transport in homogeneous geology. This document describes the PATHS groundwater hydrologic model, including the enhancements that were made as a result of the authors' experience with the earlier capability. Appendixes A through D supplement the report as follows: complete derivations of the background equations are provided in Appendix A. Appendix B is a comprehensive set of instructions for users of PATHS, written for users who have little or no experience with computers. Appendix C is for the programmer; it contains information on how input parameters are passed between programs in the system, along with program listings and a test-case listing. Appendix D is a definition of terms.

Petrological and geochemical observations of pegmatites in the Strange Lake pluton, Canada, have been combined with numerical simulations to improve our understanding of fluid-rock interaction in peralkaline granitic systems. In particular, they have made it possible to evaluate the reaction paths responsible for hydrothermal mobilization and mineralization of rare earth elements (REE) and Zr. The focus of the study was the B-Zone in the northwest of the pluton, which contains a pegmatite swarm and is the target of exploration for an economically exploitable REE deposit. Many of the pegmatites are mineralogically zoned into a border consisting of variably altered primary K-feldspar, arfvedsonite, quartz, and zirconosilicates, and a core rich in quartz, fluorite and exotic REE minerals. Textural relationships indicate that the primary silicate minerals in the pegmatites were leached and/or replaced during acidic alteration by K-, Fe- and Al-phyllosilicates, aegirine, hematite, fluorite and/or quartz, and that primary zirconosilicates (e.g., elpidite) were replaced by gittinsite and/or zircon. Reaction textures recording coupled dissolution of silicate minerals and crystallization of secondary REE-silicates indicate hydrothermal mobilization of the REE. The mobility of the light (L)REE was limited by the stability of REE-F-(CO2) minerals (bastnäsite-(Ce) and fluocerite-(Ce)), whereas zirconosilicates and secondary gadolinite-group minerals controlled the mobility of Zr and the heavy (H)REE. Hydrothermal fluorite and fluorite-fluocerite-(Ce) solid solutions are interpreted to indicate the former presence of F-bearing saline fluids in the pegmatites. Numerical simulations show that the mobilization of REE and Zr in saline HCl-HF-bearing fluids is controlled by pH, ligand activity and temperature. Mobilization of Zr is significant in both saline HF- and HCl-HF-bearing fluids at low temperature (250 °C). In contrast, the REE are mobilized by saline HCl-bearing fluids

The EQ3/6 geochemical modeling code package was used to investigate the interaction of the Topopah Spring Tuff and J-13 water at high temperatures. EQ3/6 input parameters were obtained from the results of laboratory experiments using USW G-1 core and J-13 water. Laboratory experiments were run at 150 and 250 °C for 66 days using both wafer-size and crushed tuff. EQ3/6 modeling reproduced the results of the 150 °C experiments except for a small increase in the concentration of potassium that occurs in the first few days of the experiments. At 250 °C, the EQ3/6 modeling reproduced the major water/rock reactions except for a small increase in potassium, similar to that noted above, and an overall increase in aluminum. The increase in potassium concentration cannot be explained at this time, but the increase in Al concentration is believed to be caused by the lack of thermodynamic data in the EQ3/6 database for dachiardite, a zeolite observed as a run product at 250 °C. The ability to reproduce the majority of the experimental rock/water interactions at 150 °C validates the use of EQ3/6 as a geochemical modeling tool that can be used to theoretically investigate physical/chemical environments in support of the Waste Package Task of NNWSI.

Atom tunneling in the hydrogen atom transfer reaction of the 2,4,6-tri-tert-butylphenyl radical to 3,5-di-tert-butylneophyl, which has a short but strongly curved reaction path, was investigated using instanton theory. We found the tunneling path to deviate qualitatively from the classical intrinsic reaction coordinate, the steepest-descent path in mass-weighted Cartesian coordinates. To perform that comparison, we implemented a new variant of the predictor-corrector algorithm for the calculation of the intrinsic reaction coordinate. We used the reaction force analysis method as a means to decompose the reaction barrier into structural and electronic components. Due to the narrow energy barrier, atom tunneling is important in the above-mentioned reaction, even above room temperature. Our calculated rate constants between 350 K and 100 K agree well with experimental values. We found an H/D kinetic isotope effect of almost 10^6 at 100 K. Tunneling dominates the protium transfer below 400 K and the deuterium transfer below 300 K. We compared the lengths of the tunneling path and the classical path for the hydrogen atom transfer in the reaction HCl + Cl and quantified the corner cutting in this reaction. At low temperature, the tunneling path is about 40% shorter than the classical path.
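The classical intrinsic reaction coordinate referenced here is the steepest-descent curve from the transition state in mass-weighted coordinates. A minimal sketch on an assumed two-dimensional model potential, using plain gradient-descent stepping rather than the paper's predictor-corrector scheme:

```python
def grad(x, y):
    # Model PES (assumed): V(x, y) = (x^2 - 1)^2 + y^2
    # saddle point at (0, 0), minima at (+1, 0) and (-1, 0)
    return (4.0 * x * (x * x - 1.0), 2.0 * y)

def irc(x, y, step=0.05, n_max=10000, gtol=1e-8):
    """Crude intrinsic-reaction-coordinate integration: follow the
    negative gradient from a point displaced slightly off the saddle."""
    path = [(x, y)]
    for _ in range(n_max):
        gx, gy = grad(x, y)
        if (gx * gx + gy * gy) ** 0.5 < gtol:
            break
        x -= step * gx
        y -= step * gy
        path.append((x, y))
    return path
```

Starting at (0.05, 0.02), just off the saddle, the path descends into the product minimum near (1, 0); an instanton tunneling path would in general cut the corner relative to this curve.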

Reliably predicting the evolution of mechanical and chemical properties of reservoir rocks is crucial for efficient exploitation of enhanced geothermal systems (EGS). For example, dissolution and precipitation of individual rock-forming minerals often result in significant volume changes, affecting the hydraulic rock properties and the chemical composition of fluid and solid phases. Reactive transport models are typically used to evaluate and predict the effect of the internal feedback of these processes. However, a quantitative evaluation of chemo-mechanical interaction in polycrystalline environments is elusive due to poorly constrained kinetic data for complex mineral reactions. In addition, experimentally derived reaction rates are generally faster than reaction rates determined from natural systems, likely a consequence of the experimental design: a) determining the rate of a single process only, e.g. the dissolution of a mineral, and b) using powdered sample materials, thus providing an unrealistically high reactive surface area while eliminating the restrictions on element transport faced in situ in fairly dense rocks. In reality, multiple reactions are coupled during the alteration of a polymineralic rock in the presence of a fluid, and the rate-determining process of the overall reaction is often difficult to identify. We present results of bulk rock-water interaction experiments quantifying alteration reactions between pure water and a granodiorite sample. The rock sample was chosen for its homogeneous texture, small and uniform grain size (~0.5 mm in diameter), and absence of pre-existing alteration features. The primary minerals are plagioclase (plg - 58 vol.%), quartz (qtz - 21 vol.%), K-feldspar (Kfs - 17 vol.%), biotite (bio - 3 vol.%) and white mica (wm - 1 vol.%). Three sets of batch experiments were conducted at 200 °C to evaluate the effect of reactive surface area and different fluid pathways using (I) powders of the bulk rock with

Background: choice stepping reaction time (CSRT) is a functional measure that has been shown to significantly discriminate older fallers from non-fallers. Objective: to investigate how physiological and cognitive factors mediate the association between CSRT performance and multiple falls by use of

A simple but often reasonably accurate dynamical model--a synthesis of the semiclassical perturbation (SCP) approximation of Miller and Smith and the infinite order sudden (IOS) approximation--has been shown previously to take an exceptionally simple form when applied to the reaction-path Hamiltonian derived by Miller, Handy, and Adams. This paper shows how this combined SCP-IOS reaction-path model can be used to provide a simple but comprehensive description of a variety of phenomena in the dynamics of polyatomic molecules.

This chapter explains the business model concept and explores the reasons why “innovation” and “innovation in services” are no longer exclusively a technological issue. Rather, we highlight that business models are critical components at the centre of business innovation processes. We also attempt

Finding representative reaction pathways is important for understanding the mechanism of molecular processes. We propose a new approach for constructing reaction paths based on mean first-passage times. This approach incorporates information about all possible reaction events as well as the effect of temperature. As an application of this method, we study representative pathways of excitation migration in a photosynthetic light-harvesting complex, photosystem I. The paths thus computed provide a complete, yet distilled, representation of the kinetic flow of excitation toward the reaction center, thereby succinctly characterizing the function of the system.
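The mean first-passage times underlying such a construction can be obtained from a rate network by solving one linear system. A minimal sketch with an assumed toy rate matrix; the greedy descent used to read off a path is a simplification of the paper's construction:

```python
def mfpt_to_target(rates, n, target):
    """Mean first-passage times t_i to `target` from the master equation:
    sum_j k_ij (t_j - t_i) = -1 for i != target, with t_target = 0.
    `rates[(i, j)]` is the transition rate i -> j. Solved by Gaussian
    elimination with partial pivoting (pure stdlib, small systems only)."""
    idx = [i for i in range(n) if i != target]
    m = len(idx)
    A = [[0.0] * m for _ in range(m)]
    b = [-1.0] * m
    for r, i in enumerate(idx):
        for j in range(n):
            k = rates.get((i, j), 0.0)
            if k == 0.0 or j == i:
                continue
            A[r][r] -= k                     # outflow term
            if j != target:
                A[r][idx.index(j)] += k      # inflow to non-target state
    for c in range(m):
        p = max(range(c, m), key=lambda r: abs(A[r][c]))
        A[c], A[p], b[c], b[p] = A[p], A[c], b[p], b[c]
        for r in range(c + 1, m):
            f = A[r][c] / A[c][c]
            b[r] -= f * b[c]
            for cc in range(c, m):
                A[r][cc] -= f * A[c][cc]
    t = [0.0] * m
    for r in range(m - 1, -1, -1):
        t[r] = (b[r] - sum(A[r][c] * t[c] for c in range(r + 1, m))) / A[r][r]
    out = {target: 0.0}
    for r, i in enumerate(idx):
        out[i] = t[r]
    return out

def representative_path(rates, n, source, target):
    """Heuristic path: from each state step to the neighbor with the
    smallest mean first-passage time to the target."""
    t = mfpt_to_target(rates, n, target)
    path = [source]
    while path[-1] != target:
        nbrs = [j for j in range(n) if rates.get((path[-1], j), 0.0) > 0.0]
        path.append(min(nbrs, key=lambda j: t[j]))
    return path
```

For a three-state chain 0 ⇄ 1 → 2 the computed times can be checked against the hand-solved values, and the representative path is 0 → 1 → 2.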

The distinguished coordinate path and the reduced gradient following path, or its equivalent formulation, the Newton trajectory, are analyzed and unified using the calculus of variations. It is shown that their minimum character is related to the fact that the curve is located in a valley region; in this case the Newton trajectory is a reaction path in the category of minimum energy paths. In addition to these findings, a Runge-Kutta-Fehlberg algorithm to integrate these curves is proposed.
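A Newton trajectory is the locus of points where the gradient of the potential is parallel to a fixed search direction r. A minimal sketch on an assumed model potential, using naive stepping plus a Newton correction at each point (a Runge-Kutta-Fehlberg integrator, as in the paper, would instead follow the curve adaptively):

```python
def newton_trajectory(x0, x1, n=50):
    """Follow the Newton trajectory for search direction r = (1, 0):
    the points where the gradient of V is parallel to r, i.e. dV/dy = 0.

    Model PES (assumed): V(x, y) = (x^2 - 1)^2 + (y - 0.3 x)^2.
    For each x we solve g_y(y) = 2 (y - 0.3 x) = 0 by Newton's method,
    warm-starting from the previous point on the curve."""
    pts = []
    y = 0.0
    for i in range(n + 1):
        x = x0 + (x1 - x0) * i / n
        for _ in range(50):            # Newton iterations on g_y(y) = 0
            gy = 2.0 * (y - 0.3 * x)
            dgy = 2.0                  # d(g_y)/dy
            step = gy / dgy
            y -= step
            if abs(step) < 1e-12:
                break
        pts.append((x, y))
    return pts
```

On this potential the trajectory is the valley line y = 0.3 x connecting the two minima through the saddle at the origin, illustrating the "reaction path through a valley region" character described above.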

The 'PATH' codes are used to design magnetic-optics subsystems for neutral particle beam systems. They include one 2-1/2-D and three 3-D space-charge models, two of which have recently been added. This paper describes the 3-D models and reports on preliminary benchmark studies in which these models are checked for stability as the cloud size is varied and for consistency with each other. Differences between the models are investigated, and the computer time requirements for running these models are established.

In this paper the basic principles of path modeling are presented. The mathematics is presented for processes having only one stage, having two stages, and having three or more stages. The methods are applied to process control of a multi-stage production process having 25 variables and one output variable. When moving along the process, variables change their roles. It is shown how the methods of path modeling can be applied to estimate variables of the next stage with the purpose of obtaining optimal or almost optimal quality of the output variable. Estimation can be performed regarding the foreseeable output property y, and with respect to an admissible range of correcting actions for the parameters of the next stage. An important aspect of the methods presented is the possibility of extensive graphic analysis of data that can provide the engineer with a detailed view of the multi-variate variation in data.

The method of predicting reaction paths using the THOR code allows for isobaric and isochoric adiabatic combustion and CJ detonation regimes, and for the calculation of the composition and thermodynamic properties of the reaction products of energetic materials. The THOR code assumes thermodynamic equilibrium of all possible products, at the minimum Gibbs free energy, using the HL EoS. The code allows the possibility of estimating various sets of reaction products, obtained successively by the decomposition of the original reacting compound, as a function of the released energy. Two case studies of the thermal decomposition procedure were selected, calculated and discussed: pure ammonium nitrate and the ammonium-nitrate-based explosive ANFO, and nitromethane, because their equivalence ratios are respectively below, near and above stoichiometry. Predictions of the reaction paths are in good correlation with experimental values, proving the validity of the proposed method.
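The Gibbs-minimization step can be illustrated for an ideal mixture of candidate product species. This is a sketch with illustrative free energies and only two species; the real code uses the HL equation of state over many products:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def equilibrium_fractions(g_species, T):
    """Mole fractions minimizing the ideal-mixture Gibbs energy
    G = sum_i x_i (g_i + R T ln x_i) for one total mole.
    For an ideal mixture the minimizer is the Boltzmann weighting."""
    w = [math.exp(-g / (R * T)) for g in g_species]
    z = sum(w)
    return [wi / z for wi in w]

def gibbs(fracs, g_species, T):
    """Total Gibbs energy of the mixture at composition `fracs`."""
    return sum(x * (g + R * T * math.log(x))
               for x, g in zip(fracs, g_species) if x > 0.0)
```

Because the Boltzmann weighting is the exact minimizer here, any other composition has a strictly higher Gibbs energy, which is the property the equilibrium solver relies on.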

Here, we apply the harmonic Fourier beads (HFB) path optimization method to study chemical reactions involving covalent bond breaking and forming on quantum mechanical (QM) and hybrid QM/molecular mechanical (QM/MM) potential energy surfaces. To improve efficiency of the path optimization on such computationally demanding potentials, we combined HFB with conjugate gradient (CG) optimization. The combined CG-HFB method was used to study two biologically relevant reactions, namely, L- to D-alanine amino acid inversion and alcohol acylation by amides. The optimized paths revealed several unexpected reaction steps in the gas phase. For example, on the B3LYP/6-31G(d,p) potential, we found that alanine inversion proceeded via previously unknown intermediates, 2-iminopropane-1,1-diol and 3-amino-3-methyloxiran-2-ol. The CG-HFB method accurately located transition states, aiding in the interpretation of complex reaction mechanisms. Thus, on the B3LYP/6-31G(d,p) potential, the gas phase activation barriers for the inversion and acylation reactions were 50.5 and 39.9 kcal/mol, respectively. These barriers determine the spontaneous loss of amino acid chirality and cleavage of peptide bonds in proteins. We conclude that the combined CG-HFB method further advances QM and QM/MM studies of reaction mechanisms.

Thermodynamic modeling is performed to investigate the possible reaction paths of seawater throughout the Lo'ihi seamount and the associated geochemical supplies of energy that can support autotrophic microbial communities.

Snowmelt from alpine catchments provides 70-80% of the American Southwest's water resources. Climate change threatens to alter the timing and duration of snowmelt in high-elevation catchments, which may also impact the quantity and the quality of these water resources. Modeling of these systems provides a robust theoretical framework for processing the information extracted from the sparse physical measurements available at these sites due to their remote locations. Mass-balance inverse geochemical models (via PHREEQC, developed by the USGS) were applied to two snowmelt-dominated catchments: Green Lake 4 (GL4) in the Rockies and Emerald Lake (EMD) in the Sierra Nevada. Both catchments primarily consist of granite and granodiorite with a similar bulk geochemistry. The inputs for the models were the initial (snowpack) and final (catchment output) hydrochemistry and a catchment-specific suite of mineral weathering reactions. Models were run for wet and dry snow years, for early and late time periods (defined hydrologically as halves of the total volume for the year). Multiple model solutions were reduced to a representative suite of reactions by choosing the model solution with the fewest phases and least overall phase change. The dominant weathering reactions (those which contributed the most solutes) were plagioclase for GL4 and albite for EMD. Results for GL4 show overall more plagioclase weathering during the dry year (214.2 g) than the wet year (89.9 g). Both wet and dry years show more weathering in the early time periods (63% and 56%, respectively). These results show that the snowpack and outlet are chemically more similar during wet years than dry years. A possible hypothesis to explain this difference is a change in the contribution from subsurface storage; during the wet year the saturated catchment reduces contact with surface materials that would result in mineral weathering reactions, by some combination of reduced infiltration and decreased subsurface transit time. By
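A mass-balance inverse model of this kind solves a linear system: the solute gains between snowpack and outlet must equal the stoichiometric contributions of the weathering reactions. A toy, exactly-determined version with made-up stoichiometries and concentrations (not the catchment data, and much simpler than PHREEQC's uncertainty-weighted formulation):

```python
def inverse_mass_balance(minerals, delta):
    """Solve for the moles of each mineral reacted, given solute gains.

    `minerals` maps mineral name -> {solute: stoichiometric coefficient};
    `delta` maps solute -> concentration gain (mol). The square system
    A x = b (rows = solutes, columns = minerals) is solved by Gaussian
    elimination; an exactly determined system is assumed."""
    names = list(minerals)
    solutes = list(delta)
    m = len(names)
    A = [[minerals[nm].get(s, 0.0) for nm in names] for s in solutes]
    b = [delta[s] for s in solutes]
    for c in range(m):
        p = max(range(c, m), key=lambda r: abs(A[r][c]))  # partial pivot
        A[c], A[p], b[c], b[p] = A[p], A[c], b[p], b[c]
        for r in range(c + 1, m):
            f = A[r][c] / A[c][c]
            b[r] -= f * b[c]
            for cc in range(c, m):
                A[r][cc] -= f * A[c][cc]
    x = [0.0] * m
    for r in range(m - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, m))) / A[r][r]
    return dict(zip(names, x))
```

With a hypothetical plagioclase (releasing 0.4 Ca and 0.6 Na per mole) and halite (1.0 Na), observed gains of 0.04 mol Ca and 0.16 mol Na are explained by 0.1 mol of each mineral.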

Catalytic refining of bio-oil by reaction with olefins/alcohols over solid acids can convert bio-oil to oxygen-containing fuels. Reactivities of groups of compounds typically present in bio-oil with 1-octene (or 1-butanol) were studied at 120 °C/3 h over Dowex50WX2, Amberlyst15, Amberlyst36, silica sulfuric acid (SSA) and Cs2.5H0.5PW12O40 supported on K10 clay (Cs2.5/K10, 30 wt. %). These compounds include phenol, water, acetic acid, acetaldehyde, hydroxyacetone, d-glucose and 2-hydroxymethylfuran. Mechanisms for the overall conversions were proposed. Other olefins (1,7-octadiene, cyclohexene, and 2,4,4-trimethylpentene) and alcohols (iso-butanol) with different activities were also investigated. All the olefins and alcohols used were effective but produced varying product selectivities. A complex model bio-oil, synthesized by mixing all the above-stated model compounds, was refined under similar conditions to test the catalysts' activity. SSA showed the highest hydrothermal stability; Cs2.5/K10 lost most of its activity. A global reaction pathway is outlined. Simultaneous and competing esterification, etherification, acetal formation, hydration, isomerization and other equilibria were involved. Synergistic interactions among reactants and products were determined. Acid-catalyzed olefin hydration removed water and drove the esterification and acetal-formation equilibria toward ester and acetal products.

We propose a Monte Carlo method to study the reaction paths in nucleosynthesis during stellar evolution. Determining reaction paths is important to obtain the physical picture of stellar evolution, and the combination of network calculations with our method gives a better understanding of that picture. We apply our method to the case of the helium shell flash model in an extremely metal-poor star.
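A Monte Carlo determination of dominant reaction paths can be sketched as a stochastic walk over a rate network: from each node the next step is chosen with probability proportional to its rate, and the frequency of each complete path is tallied. Toy network and rates assumed; a real nucleosynthesis network couples thousands of reactions:

```python
import random

def sample_paths(rates, source, sink, n_samples, seed=0):
    """Sample reaction paths through a rate network Monte Carlo style.

    `rates[(i, j)]` is the rate of the step i -> j. From each node the
    next step is drawn with probability proportional to its rate, and
    the count of each distinct path from source to sink is returned."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(n_samples):
        node, path = source, [source]
        while node != sink:
            options = [(j, k) for (i, j), k in rates.items() if i == node]
            u = rng.random() * sum(k for _, k in options)
            for j, k in options:
                u -= k
                if u <= 0.0:
                    node = j
                    break
            path.append(node)
        counts[tuple(path)] = counts.get(tuple(path), 0) + 1
    return counts
```

On a small branched network where the a→b channel is three times faster than a→c, the path through b dominates the tally, which is the qualitative information a reaction-path survey extracts.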

Building on mathematical similarities between quantum mechanics and theories of diffusion-influenced reactions, we develop a general approach for computational modeling of diffusion-influenced reactions that is capable of capturing not only the classical Smoluchowski picture but also alternative theories, as is here exemplified by a volume reactivity model. In particular, we prove the path decomposition expansion of various Green's functions describing the irreversible and reversible reaction of an isolated pair of molecules. To this end, we exploit a connection between boundary value and interaction potential problems with δ- and δ′-function perturbations. We employ a known path-integral-based summation of a perturbation series to derive a number of exact identities relating propagators and survival probabilities satisfying different boundary conditions in a unified and systematic manner. Furthermore, we show how the path decomposition expansion represents the propagator as a product of three factors in the Laplace domain that correspond to quantities figuring prominently in stochastic spatially resolved simulation algorithms. This analysis will thus be useful for the interpretation of current and the design of future algorithms. Finally, we discuss the relation between the general approach and the theory of Brownian functionals and calculate the mean residence time for the case of irreversible and reversible reactions.
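For the classical Smoluchowski picture with a perfectly absorbing sphere, the pair survival probability has a standard closed form against which such identities can be checked numerically; a minimal sketch of that textbook formula (not the paper's generalized propagators):

```python
import math

def survival_probability(r0, R, D, t):
    """Survival probability of an isolated pair starting at separation r0
    outside a perfectly absorbing sphere of radius R (Smoluchowski):
    S(t | r0) = 1 - (R / r0) * erfc((r0 - R) / sqrt(4 D t))."""
    return 1.0 - (R / r0) * math.erfc((r0 - R) / math.sqrt(4.0 * D * t))

def escape_probability(r0, R):
    """Long-time limit: the pair escapes reaction with probability 1 - R/r0."""
    return 1.0 - R / r0
```

Survival decays monotonically in time and tends to the geometric escape probability 1 - R/r0, the two limits that any path-decomposition expression for the propagator must reproduce.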

Since the first reported discovery of the Lost City hydrothermal system in 2001, it was recognized that seawater alteration of ultramafic rocks plays a key role in the composition of the coexisting vent fluids. The unusually high pH and high concentrations of H2 and CH4 provide compelling evidence for this. Here we report the chemistry of hydrothermal fluids sampled from two vent structures (Beehive: ∼90-116 °C, and M6: ∼75 °C) at Lost City in 2008 during cruise KNOX18RR using ROV Jason 2 and R/V Revelle assets. The vent fluid chemistry at both sites reveals considerable overlap in concentrations of dissolved gases (H2, CH4), trace elements (Cs, Rb, Li, B and Sr), and major elements (SO4, Ca, K, Na, Cl), including a surprising decrease in dissolved Cl, suggesting a common source fluid is feeding both sites. The absence of Mg and relatively high concentrations of Ca and sulfate suggest solubility control by serpentine-diopside-anhydrite, while trace alkali concentrations, especially Rb and Cs, are high, assuming a depleted mantle protolith. In both cases, but especially for Beehive vent fluid, the silica concentrations are well in excess of those expected for peridotite alteration and the coexistence of serpentine-brucite at all reasonable temperatures. However, both the measured pH and silica values are in better agreement with serpentine-diopside-tremolite-equilibria. Geochemical modeling demonstrates that reaction of plagioclase with serpentinized peridotite can shift the chemical system away from brucite and into the tremolite stability field. This is consistent with the complex intermingling of peridotite and gabbroic bodies commonly observed within the Atlantis Massif. We speculate the existence of such plagioclase bearing peridotite may also account for the highly enriched trace alkali (Cs, Rb) concentrations in the Lost City vent fluids. Additionally, reactive transport modeling taking explicit account of temperature dependent rates of mineral

The PP code is a graphics post-processor and plotting program for EQ6, a popular reaction-path code. PP runs on personal computers, allocates memory dynamically, and can handle very large reaction-path runs. Plots of simple variable groups, such as fluid and solid phase composition, can be obtained with as few as two keystrokes. Navigation through the list of reaction-path variables is simple and efficient. Graphics files can be exported for inclusion in word processing documents and spreadsheets, and experimental data may be imported and superposed on the reaction-path runs. The EQ6 thermodynamic database can be searched from within PP, to simplify interpretation of complex plots.

Transition path sampling (TPS) was developed for studying activated processes in complex systems with unknown reaction coordinate. Transition interface sampling (TIS) allows efficient evaluation of the rate constants. However, when the transition can occur via more than one reaction channel

Path integration is a navigation strategy widely observed in nature where an animal maintains a running estimate, called the home vector, of its location during an excursion. Evidence suggests it is both ancient and ubiquitous in nature, and it has been studied for over a century. In that time, canonical and neural network models have flourished, based on a wide range of assumptions, justifications and supporting data. Despite the importance of the phenomenon, consensus and unifying principles appear lacking. A fundamental issue is the neural representation of space needed for biological path integration. This paper presents a scheme to classify path integration systems on the basis of the way the home vector records and updates the spatial relationship between the animal and its home location. Four extended classes of coordinate systems are used to unify and review both canonical and neural network models of path integration, from the arthropod and mammalian literature. This scheme demonstrates analytical equivalence between models which may otherwise appear unrelated, and distinguishes between models which may superficially appear similar. A thorough analysis is carried out of the equational forms of important facets of path integration, including updating, steering, searching and systematic errors, using each of the four coordinate systems. The type of available directional cue, namely allothetic or idiothetic, is also considered. It is shown that on balance, the class of home vectors which includes the geocentric Cartesian coordinate system appears to be the most robust for biological systems. A key conclusion is that deducing computational structure from behavioural data alone will be difficult or impossible, at least in the absence of an analysis of random errors. Consequently it is likely that further theoretical insights into path integration will require an in-depth study of the effect of noise on the four classes of home vectors.
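A minimal sketch of the geocentric Cartesian home-vector class discussed above (an illustrative toy assuming an allothetic compass heading; the function name and units are hypothetical, not taken from any of the reviewed models):

```python
import math

def update_home_vector(hx, hy, heading, distance):
    """One dead-reckoning step in geocentric Cartesian coordinates.

    heading  : allothetic compass direction of travel, in radians
    distance : length of the step taken
    The home vector points from the animal back to home, so each
    step taken is subtracted from it.
    """
    return hx - distance * math.cos(heading), hy - distance * math.sin(heading)

# walking a closed square brings the home vector back to (0, 0)
hx = hy = 0.0
for heading in (0.0, math.pi / 2, math.pi, 3 * math.pi / 2):
    hx, hy = update_home_vector(hx, hy, heading, 5.0)
```

In this representation the update is linear and commutative, which is one reason the geocentric Cartesian class is analytically robust; idiothetic (self-motion) heading estimates would instead accumulate angular error over the excursion.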

A behavioural and a modelling framework are proposed for representing route choice from a path set that satisfies travellers’ spatiotemporal constraints. Within the proposed framework, travellers’ master sets are constructed by path generation, consideration sets are delimited according to spatio...

We propose to use a comprehensive path model of vocal emotion communication, encompassing encoding, transmission, and decoding processes, to empirically model data sets on emotion expression and recognition. The utility of the approach is demonstrated for two data sets from two different cultures and languages, based on corpora of vocal emotion enactment by professional actors and emotion inference by naïve listeners. Lens model equations, hierarchical regression, and multivariate path analysis are used to compare the relative contributions of objectively measured acoustic cues in the enacted expressions and subjective voice cues as perceived by listeners to the variance in emotion inference from vocal expressions for four emotion families (fear, anger, happiness, and sadness). While the results confirm the central role of arousal in vocal emotion communication, the utility of applying an extended path modeling framework is demonstrated by the identification of unique combinations of distal cues and proximal percepts carrying information about specific emotion families, independent of arousal. The statistical models generated show that more sophisticated acoustic parameters need to be developed to explain the distal underpinnings of subjective voice quality percepts that account for much of the variance in emotion inference, in particular voice instability and roughness. The general approach advocated here, as well as the specific results, opens up new research strategies for work in psychology (specifically emotion and social perception research) and in engineering and computer science (specifically research and development in the domain of affective computing, particularly automatic emotion detection and synthetic emotion expression in avatars).

In a recent article [J. Chem. Phys. 2015, 143, 094106], we introduced a novel graph-based sampling scheme which can be used to generate chemical reaction paths in many-atom systems in an efficient and highly automated manner. The main goal of this work is to demonstrate how this approach, when combined with direct kinetic modeling, can be used to determine the mechanism and phenomenological rate law of a complex catalytic cycle, namely cobalt-catalyzed hydroformylation of ethene. Our graph-based sampling scheme generates 31 unique chemical products and 32 unique chemical reaction pathways; these sampled structures and reaction paths enable automated construction of a kinetic network model of the catalytic system when combined with density functional theory (DFT) calculations of free energies and resultant transition-state theory rate constants. Direct simulations of this kinetic network across a range of initial reactant concentrations enable determination of both the reaction mechanism and the associated rate law in an automated fashion, without the need for either presupposing a mechanism or making steady-state approximations in kinetic analysis. Most importantly, we find that the reaction mechanism which emerges from these simulations is exactly that originally proposed by Heck and Breslow; furthermore, the simulated rate law is also consistent with previous experimental and computational studies, exhibiting a complex dependence on carbon monoxide pressure. While the inherent errors of using DFT simulations to model chemical reactivity limit the quantitative accuracy of our calculated rates, this work confirms that our automated simulation strategy enables direct analysis of catalytic mechanisms from first principles.

In modeling the reaction-transport process in GaN MOVPE growth, the selection of kinetic parameters (activation energy Ea and pre-exponential factor A) for gas reactions is quite uncertain, which causes uncertainties in both the gas reaction path and the growth rate. In this study, numerical modeling of the reaction-transport process for GaN MOVPE growth in a vertical rotating-disk reactor is conducted with varying kinetic parameters for the main reaction paths. By comparing the molar concentrations of the major Ga-containing species and the growth rates, the effects of the kinetic parameters on the gas reaction paths are determined. The results show that, depending on the values of the kinetic parameters, the gas reaction path may be dominated either by the adduct/amide formation path, by the TMG pyrolysis path, or by both. Although the reaction path varies with different kinetic parameters, the predicted growth rates change only slightly, because the total transport rate of Ga-containing species to the substrate changes little with reaction path. This explains why previous authors using different chemical models predicted growth rates close to the experimental values. By varying the pre-exponential factor for amide trimerization, it is found that the more trimers are formed, the further the growth rates fall below the experimental value, which indicates that trimers are poor growth precursors because of the thermal diffusion effect caused by the high temperature gradient. The effective order of the contributions of major species to the growth rate is found to be: pyrolysis species > amides > trimers. The study also shows that radical reactions have little effect on the gas reaction path because of the generation and depletion of H radicals in the chain reactions when NH2 is considered as the end species.

This review focuses on nuclear reactions in astrophysics and, more specifically, on reactions with light ions (nucleons and α particles) proceeding via the strong interaction. It is intended to present the basic definitions essential for studies in nuclear astrophysics, to point out the differences between nuclear reactions taking place in stars and in a terrestrial laboratory, and to illustrate some of the challenges to be faced in theoretical and experimental studies of those reactions. The discussion revolves around the relevant quantities for astrophysics, which are the astrophysical reaction rates. The sensitivity of the reaction rates to the uncertainties in the prediction of various nuclear properties is explored and some guidelines for experimentalists are also provided. (author)
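The "astrophysical reaction rate" this review centers on can be sketched numerically as the Maxwell-Boltzmann-averaged rate per particle pair (a toy in natural units with a constant cross section, chosen only so the result can be checked analytically; real charged-particle rates also involve the Coulomb penetration factor, which is omitted here):

```python
import numpy as np

def rate_constant_sigma_v(sigma, mu, kT, n=200_000, e_max=50.0):
    """Maxwell-Boltzmann-averaged reaction rate <sigma*v> (natural units).

    <sigma*v> = sqrt(8/(pi*mu)) * (kT)**(-3/2)
                * integral_0^inf sigma(E) * E * exp(-E/kT) dE
    sigma : callable cross section sigma(E), accepting a NumPy array
    mu    : reduced mass of the reacting pair
    kT    : temperature expressed in energy units
    """
    E = np.linspace(0.0, e_max * kT, n)
    f = sigma(E) * E * np.exp(-E / kT)
    # trapezoidal rule for the energy integral
    integral = 0.5 * np.sum((f[1:] + f[:-1]) * np.diff(E))
    return np.sqrt(8.0 / (np.pi * mu)) * kT ** (-1.5) * integral

# check case: for a constant cross section sigma0 = 1, <sigma*v>
# reduces to the mean relative speed sqrt(8*kT/(pi*mu))
rate = rate_constant_sigma_v(lambda E: np.ones_like(E), mu=1.0, kT=1.0)
```

The sensitivity analyses described in the abstract amount to propagating uncertainties in sigma(E), especially at the low energies where the exponential weight concentrates the integrand, through this average.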

The level of quality that food maintains as it travels down the production-to-consumption path is largely determined by the chemical, biochemical, physical, and microbiological changes that take place during its processing and storage. Kinetic Modeling of Reactions in Foods demonstrates how to

Important reaction-diffusion processes, such as biochemical networks in living cells, or self-assembling soft matter, span many orders in length and time scales. In these systems, the reactants' spatial dynamics at mesoscopic length and time scales of microns and seconds is coupled to the reactions between the molecules at microscopic length and time scales of nanometers and milliseconds. This wide range of length and time scales makes these systems notoriously difficult to simulate. While mean-field rate equations cannot describe such processes, the mesoscopic Green's Function Reaction Dynamics (GFRD) method enables efficient simulation at the particle level provided the microscopic dynamics can be integrated out. Yet, many processes exhibit non-trivial microscopic dynamics that can qualitatively change the macroscopic behavior, calling for an atomistic, microscopic description. The recently developed multiscale Molecular Dynamics Green's Function Reaction Dynamics (MD-GFRD) approach combines GFRD for simulating the system at the mesoscopic scale where particles are far apart, with microscopic Molecular (or Brownian) Dynamics, for simulating the system at the microscopic scale where reactants are in close proximity. The association and dissociation of particles are treated with rare event path sampling techniques. I will illustrate the efficiency of this method for patchy particle systems. Replacing the microscopic regime with a Markov State Model (MSM) avoids the microscopic regime completely. The MSM is then pre-computed using advanced path-sampling techniques such as multistate transition interface sampling. I illustrate this approach on patchy particle systems that show multiple modes of binding. MD-GFRD is generic, and can be used to efficiently simulate reaction-diffusion systems at the particle level, including the orientational dynamics, opening up the possibility for large-scale simulations of e.g. protein signaling networks.

This PhD thesis deals with the study of fundamental physics phenomena, with applications to nuclear materials of interest. We have developed methods for the study of rare events related to thermally activated structural transitions in many-body systems. The first method involves the numerical simulation of the probability current associated with reactive paths. After deriving the evolution equations for the probability current, a Diffusion Monte Carlo algorithm is implemented in order to sample this current. This technique, called Transition Current Sampling, was applied to the study of structural transitions in a cluster of 38 atoms with a Lennard-Jones potential (LJ-38). A second algorithm, called Transition Path Sampling with local Lyapunov bias (LyTPS), was then developed. LyTPS calculates reaction rates at finite temperature following transition state theory. A statistical bias based on the maximum local Lyapunov exponents is introduced to accelerate the sampling of reactive trajectories. To extract the value of the equilibrium reaction constants obtained from LyTPS, we use the Multistate Bennett Acceptance Ratio. We again validate this method on the LJ-38 cluster. LyTPS is then used to calculate migration constants for vacancies and divacancies in α-iron, and the associated migration entropy. These constants are used as input parameters for codes modeling the kinetic evolution after irradiation (First Passage Kinetic Monte Carlo) to numerically reproduce resistivity recovery experiments in α-iron. (author)

Motivated by the study of rare events for a typical genetic switching model in systems biology, in this paper we aim to establish the general two-scale large deviations for chemical reaction systems. We build a formal approach to explicitly obtain the large deviation rate functionals for the considered two-scale processes based upon the second quantization path integral technique. We get three important types of large deviation results when the underlying two timescales are in three different regimes. This is realized by singular perturbation analysis to the rate functionals obtained by the path integral. We find that the three regimes possess the same deterministic mean-field limit but completely different chemical Langevin approximations. The obtained results are natural extensions of the classical large volume limit for chemical reactions. We also discuss its implication on the single-molecule Michaelis–Menten kinetics. Our framework and results can be applied to understand general multi-scale systems including diffusion processes. (paper)

The purpose of this project was to model the reaction calorimeter in order to calculate the heat of absorption, which is the most important parameter in this work. A reaction calorimeter is an apparatus used to measure the heat of absorption of CO2 as well as the total pressure in the vapor phase based on the vapor-liquid equilibrium state. A mixture of monoethanolamine (MEA) and water was used as the solvent to absorb the CO2. The project was divided into three parts in order to make the programming...

This compact reference surveys the full range of available structural equation modeling (SEM) methodologies. It reviews applications in a broad range of disciplines, particularly in the social sciences where many key concepts are not directly observable. This is the first book to present SEM’s development in its proper historical context–essential to understanding the application, strengths and weaknesses of each particular method. This book also surveys the emerging path and network approaches that complement and enhance SEM, and that will grow in importance in the near future. SEM’s ability to accommodate unobservable theory constructs through latent variables is of significant importance to social scientists. Latent variable theory and application are comprehensively explained, and methods are presented for extending their power, including guidelines for data preparation, sample size calculation, and the special treatment of Likert scale data. Tables of software, methodologies and fit st...

Warschkow, O.; McKenzie, D. R. [Centre for Quantum Computation and Communication Technology, School of Physics, The University of Sydney, Sydney, NSW 2006 (Australia); Curson, N. J. [Centre for Quantum Computation and Communication Technology, School of Physics, The University of New South Wales, Sydney, NSW 2052 (Australia); London Centre for Nanotechnology and Department of Electronic and Electrical Engineering, University College London, 17-19 Gordon Street, London WC1H 0AH (United Kingdom); Schofield, S. R. [Centre for Quantum Computation and Communication Technology, School of Physics, The University of New South Wales, Sydney, NSW 2052 (Australia); London Centre for Nanotechnology and Department of Physics and Astronomy, University College, 17-19 Gordon Street, London WC1H 0AH (United Kingdom); Marks, N. A. [Centre for Quantum Computation and Communication Technology, School of Physics, The University of Sydney, Sydney, NSW 2006 (Australia); Discipline of Physics & Astronomy, Curtin University, GPO Box U1987, Perth, WA (Australia); Wilson, H. F. [Centre for Quantum Computation and Communication Technology, School of Physics, The University of Sydney, Sydney, NSW 2006 (Australia); CSIRO Virtual Nanoscience Laboratory, Parkville, VIC 3052 (Australia); School of Applied Sciences, RMIT University, Melbourne, VIC 3000 (Australia); Radny, M. W.; Smith, P. V. [School of Mathematical and Physical Sciences, The University of Newcastle, Callaghan, NSW 2308 (Australia); Reusch, T. C. G.; Simmons, M. Y. [Centre for Quantum Computation and Communication Technology, School of Physics, The University of New South Wales, Sydney, NSW 2052 (Australia)

2016-01-07

Using density functional theory and guided by extensive scanning tunneling microscopy (STM) image data, we formulate a detailed mechanism for the dissociation of phosphine (PH3) molecules on the Si(001) surface at room temperature. We distinguish between a main sequence of dissociation that involves PH2+H, PH+2H, and P+3H as observable intermediates, and a secondary sequence that gives rise to PH+H, P+2H, and isolated phosphorus adatoms. The latter sequence arises because PH2 fragments are surprisingly mobile on Si(001) and can diffuse away from the third hydrogen atom that makes up the PH3 stoichiometry. Our calculated activation energies describe the competition between diffusion and dissociation pathways and hence provide a comprehensive model for the numerous adsorbate species observed in STM experiments.

In a railroad system, train pathing is concerned with the assignment of trains to links and tracks, and train timetabling allocates time slots to trains. In this paper, we present an optimization heuristic to solve the train pathing and timetabling problem. This heuristic allows the dwell time of trains in a station or link to depend on the assigned tracks, and it allows the minimum clearance time between trains to depend on their relative status. The heuristic generates a number of alternative paths for each train service in the initialization phase, then uses a neighborhood search approach to find good feasible combinations of these paths. A linear program is developed to evaluate the quality of each combination that is encountered. Numerical examples are provided.

The semiclassical approach for heavy ion reactions has become more and more important in analyzing rapidly accumulating data. The purpose of this paper is to lay a quantum-mechanical foundation of the conventional semiclassical treatments in heavy ion physics by using Feynman's path integral method on the basis of the second paper of Pechukas, and discuss simple consequences of the formalism.

The PyFrag program (released as PyFrag2007.01) is a "wrap-around" for the Amsterdam Density Functional (ADF) package and facilitates the extension of the fragment analysis method implemented in ADF along an entire potential energy surface. The purpose is to make analyses of reaction paths and other

The author proposes a mathematical model of the pilot's activity as a follow-up (tracking) system, together with mathematical methods for describing that activity. The main idea of the model is flight-path forming and aircraft stabilization on that path during instrument flight. The input of the follow-up system is taken to be the aircraft's deflection from the given path, as observed visually by the pilot, and the output is the pilot's regulating actions to stabilize the aircraft on the flight path.

We propose a generalization of the intrinsic reaction coordinate (IRC) for quantum many-body systems described in terms of the mass-weighted ring-polymer centroids in imaginary-time path integral theory. This novel kind of reaction coordinate, which may be called the "centroid IRC," corresponds to the minimum free energy path connecting reactant and product states with the least amount of reversible work applied to the centers of mass of the quantum nuclei, i.e., the centroids. We provide a numerical procedure to obtain the centroid IRC based on first principles by combining ab initio path integral simulation with the string method. This approach is applied to the NH3 molecule and the N2H5− ion, as well as their deuterated isotopomers, to study the importance of nuclear quantum effects in intramolecular and intermolecular proton transfer reactions. We find that, in the intramolecular proton transfer (inversion) of NH3, the free energy barrier for the centroid variables decreases by about 20% compared to the classical one at room temperature. In the intermolecular proton transfer of N2H5−, the centroid IRC deviates strongly from the "classical" IRC, and the free energy barrier is reduced by the quantum effects even more drastically.

The mobility of redox-sensitive nuclides depends largely on their valence state, and the radionuclides that make the dominant contributions to final dose calculations are redox sensitive. Almost all of these radionuclides (except 129I) are more mobile in the high valence state and are immobilized in the low valence state owing to much lower solubility. Pyrite is a ubiquitous and stable mineral in geological environments, and could be used as a low-cost, long-term reductant for the immobilization of radionuclides. However, pyrite oxidation is expected to generate acid, which will enhance the mobility of nuclides. In this paper, the reaction path of the reactions between radionuclides (U, Se and Tc) and pyrite in the groundwater from the Wuyi well in the Beishan area of China has been simulated using geochemical modeling software. According to the results, pyrite can effectively reduce high-valence nuclides to low-valence states, with the pH increasing slightly under the anaerobic conditions that are common in deep nuclear waste repositories. (authors)

Previously, we have studied the coordination and dissociation of hydrogen peroxide with iron(II) in aqueous solution by Car-Parrinello molecular dynamics at room temperature. We presented a few illustrative reaction events, in which the ferryl ion ([Fe(IV)O

A dynamical picture of phylogenetic evolution is given in terms of Markov models on a state space, comprising joint probability distributions for character types of taxonomic classes. Phylogenetic branching is a process which augments the number of taxa under consideration, and hence the rank of the underlying joint probability state tensor. We point out the combinatorial necessity for a second-quantized, or Fock space setting, incorporating discrete counting labels for taxa and character types, to allow for a description in the number basis. Rate operators describing both time evolution without branching, and also phylogenetic branching events, are identified. A detailed development of these ideas is given, using standard transcriptions from the microscopic formulation of non-equilibrium reaction-diffusion or birth-death processes. These give the relations between stochastic rate matrices, the matrix elements of the corresponding evolution operators representing them, and the integral kernels needed to implement these as path integrals. The 'free' theory (without branching) is solved, and the correct trilinear 'interaction' terms (representing branching events) are presented. The full model is developed in perturbation theory via the derivation of explicit Feynman rules which establish that the probabilities (pattern frequencies of leaf colourations) arising as matrix elements of the time evolution operator are identical with those computed via the standard analysis. Simple examples (phylogenetic trees with two or three leaves), are discussed in detail. Further implications for the work are briefly considered including the role of time reparametrization covariance.

Equations are presented describing equilibrium in binary solid-solution aqueous-solution (SSAS) systems after a dissolution, precipitation, or recrystallization process, as a function of the composition and relative proportion of the initial phases. Equilibrium phase diagrams incorporating the concept of stoichiometric saturation are used to interpret possible reaction paths and to demonstrate relations between stoichiometric saturation, primary saturation, and thermodynamic equilibrium states. The concept of stoichiometric saturation is found useful in interpreting and putting limits on dissolution pathways, but there currently is no basis for possible application of this concept to the prediction and/or understanding of precipitation processes. Previously published dissolution experiments for (Ba, Sr)SO4 and (Sr, Ca)CO3(orth.) solids are interpreted using equilibrium phase diagrams. These studies show that stoichiometric saturation can control, or at least influence, initial congruent dissolution pathways. The results for (Sr, Ca)CO3(orth.) solids reveal that stoichiometric saturation can also control the initial stages of incongruent dissolution, despite the intrinsic instability of some of the initial solids. In contrast, recrystallization experiments in the highly soluble KCl-KBr-H2O system demonstrate equilibrium. The excess free energy of mixing calculated for K(Cl, Br) solids is closely modeled by the relation GE = xKBr xKCl RT[a0 + a1(2xKBr − 1)], where a0 is 1.40 ± 0.02, a1 is −0.08 ± 0.03 at 25 °C, and xKBr and xKCl are the mole fractions of KBr and KCl in the solids. The phase diagram constructed using this fit reveals an alyotropic maximum located at xKBr = 0.676 and at a total solubility product ΣΠ = [K+]([Cl−] + [Br−]) = 15.35.
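The two-parameter excess-free-energy fit for K(Cl, Br) solids quoted above can be evaluated directly (a small sketch; mole fractions are written as x, R is in J/(mol·K), and the a0, a1 values are those given in the abstract):

```python
R = 8.314  # gas constant, J/(mol*K)

def excess_gibbs(x_kbr, T=298.15, a0=1.40, a1=-0.08):
    """Excess free energy of mixing G_E for K(Cl, Br) solids, in J/mol.

    G_E = x_KBr * x_KCl * R * T * (a0 + a1 * (2*x_KBr - 1)),
    using the two-parameter (Guggenheim-type) fit at 25 C.
    """
    x_kcl = 1.0 - x_kbr
    return x_kbr * x_kcl * R * T * (a0 + a1 * (2.0 * x_kbr - 1.0))
```

At x_KBr = 0.5 the a1 term vanishes and G_E reduces to RT·a0/4; the small negative a1 skews the curve slightly, consistent with the asymmetric alyotropic maximum reported in the abstract.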

Time-resolved electron diffraction with atomic-scale spatial and temporal resolution was used to unravel the transformation pathway in the photoinduced structural phase transition of vanadium dioxide. Results from bulk crystals and single-crystalline thin films reveal a common, stepwise mechanism: first, a femtosecond V−V bond dilation within 300 fs; second, an intracell adjustment in picoseconds; and third, a nanoscale shear motion within tens of picoseconds. Experiments at different ambient temperatures and pump laser fluences reveal a temperature-dependent excitation threshold required to trigger the transitional reaction path of the atomic motions.

An attempt is made to reconcile the different terminologies pertaining to reduction of chemical reaction models. The approaches considered include global modeling, response modeling, detailed reduction, chemical lumping, and statistical lumping. The advantages and drawbacks of each of these methods are pointed out.

With increasing computational capabilities, an ever growing amount of data is generated in computational chemistry that contains a vast amount of chemically relevant information. It is therefore imperative to create new computational tools in order to process and extract this data in a sensible way. Kudi is an open source library that aids in the extraction of chemical properties from reaction paths. The straightforward structure of Kudi makes it easy to use, allows for effortless implementation of new capabilities, and permits extension to any quantum chemistry package. A use case for Kudi is shown for the tautomerization reaction of formic acid. Kudi is available free of charge at www.github.com/stvogt/kudi.

We compare the strategies found by optimal control theory in a complex molecular system as a function of the active subspace coupled to the field. The model is the isomerization during a Cope rearrangement of Thiele's ester, the most stable dimer obtained by the dimerization of methyl-cyclopentadienecarboxylate. The crudest partitioning retains in the active space only the reaction coordinate, coupled to a dissipative bath of harmonic oscillators that are not coupled to the field. The control then fights against dissipation by accelerating the passage across the transition region, which is very wide and flat in a Cope reaction. This mechanism was observed in our previous simulations [Chenel et al., J. Phys. Chem. A 116, 11273 (2012)]. Here we compare the response of the control field when the reaction path is coupled to a second active mode. Constraints on the integrated intensity and on the maximum amplitude of the fields are imposed, limiting the control landscape. Under these constraints, the optimum field from the one-dimensional simulation cannot provide a very high yield. Better guess fields based on the two-dimensional model allow the control to exploit different mechanisms, providing a high control yield. By coupling the reaction surface to a bath, we confirm the link between the robustness of the field against dissipation and the time spent in the delocalized states above the transition barrier.

Partial least squares (PLS) path modeling is a component-based structural equation modeling approach that has been adopted in social and psychological research for its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity, in part because it estimates path coefficients from consistent correlations among independent latent variables. PLSc as yet has no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc against its non-regularized counterpart in terms of power and accuracy. The results show that regularized PLSc is recommended for use when serious multicollinearity is present.
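The core of the ridge remedy can be sketched in a few lines (a hypothetical helper, not the authors' implementation): path coefficients are obtained from the latent correlation matrix R_xx and correlation vector r_xy, with a ridge term lam*I stabilising the near-singular inverse when the latent predictors are highly collinear.

```python
import numpy as np

def ridge_path_coefficients(R_xx, r_xy, lam=0.1):
    """Estimate structural path coefficients from latent-variable
    correlations, adding a ridge penalty lam to stabilise the inverse
    under multicollinearity (illustrative helper, made-up name)."""
    p = R_xx.shape[0]
    return np.linalg.solve(R_xx + lam * np.eye(p), r_xy)

# Nearly collinear latent predictors: the plain inverse is unstable.
R_xx = np.array([[1.0, 0.98], [0.98, 1.0]])
r_xy = np.array([0.6, 0.59])
beta_plain = np.linalg.solve(R_xx, r_xy)          # unregularised (PLSc-style)
beta_ridge = ridge_path_coefficients(R_xx, r_xy)  # regularised
print(beta_plain, beta_ridge)
```

The ridge estimate shrinks the wildly inflated coefficients that the unregularised solve produces from the ill-conditioned correlation matrix, at the cost of a small bias controlled by lam.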

Changes in requirements may increase product development project cost and lead time; it is therefore important to understand how requirement changes propagate in the design of complex product systems and to be able to select the best options to guide design. Most current approaches to design change fail to account, in an integrated way, for multi-disciplinary coupling relationships and the number of parameters involved. A new design change model is presented to systematically analyze and search change propagation paths. Firstly, a PDS-Behavior-Structure-based design change model is established to describe how requirement changes cause design change propagation in the behavior and structure domains. Secondly, a multi-disciplinary-oriented behavior matrix is utilized to support change propagation analysis of complex product systems, and the interaction relationships of the matrix elements are used to obtain an initial set of change paths. Finally, a rough-set-based propagation space reducing tool is developed to assist in narrowing change propagation paths by computing the importance of the design change parameters. The proposed design change model and its associated tools have been demonstrated on the change propagation paths of a high-speed train's bogie to show their feasibility and effectiveness. The model not only supports responding quickly to diversified market requirements, but also helps satisfy customer requirements and reduce product development lead time. The proposed design change model can be applied to a wide range of engineering system designs with improved efficiency.

Various dynamical systems are organized as reaction networks, in which the population size of one component affects the populations of all its neighbors. Such networks can be found in interstellar surface chemistry, cell biology, thin-film growth and other systems. In cases where the populations of reactive species are large, the network can be modeled by rate equations, which provide all reaction rates within the mean-field approximation. However, in small systems that are partitioned into sub-micron-sized domains, these populations fluctuate strongly. Under these conditions rate equations fail, and the master equation is needed to model these reactions. However, the number of equations in the master equation grows exponentially with the number of reactive species, severely limiting its feasibility for complex networks. Here we present a method which dramatically reduces the number of equations, thus enabling the incorporation of the master equation in complex reaction networks. The method is exemplified in the context of reaction networks on dust grains. Its applicability to genetic networks will be discussed. 1. Efficient simulations of gas-grain chemistry in interstellar clouds. Azi Lipshtat and Ofer Biham, Phys. Rev. Lett. 93 (2004), 170601. 2. Modeling of negative autoregulated genetic networks in single cells. Azi Lipshtat, Hagai B. Perets, Nathalie Q. Balaban and Ofer Biham, Gene: evolutionary genomics (2004), in press.
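The failure of rate equations for small populations can be illustrated with a minimal single-species grain model (a sketch with made-up rates, not the authors' reduction method): the truncated master equation is solved directly for its steady state, and its exact recombination rate A⟨N(N−1)⟩ is compared with the mean-field value A⟨N⟩².

```python
import numpy as np

def steady_state(F, W, A, Nmax=25):
    """Steady state of the truncated master equation for one reactive
    species on a grain: accretion flux F, desorption rate W per atom,
    recombination rate A per pair (all rates are made-up numbers)."""
    n = Nmax + 1
    M = np.zeros((n, n))            # dP/dt = M P, column = source state
    for N in range(n - 1):          # accretion: N -> N+1
        M[N + 1, N] += F
        M[N, N] -= F
    for N in range(1, n):           # desorption: N -> N-1
        M[N - 1, N] += W * N
        M[N, N] -= W * N
    for N in range(2, n):           # recombination: N -> N-2
        M[N - 2, N] += A * N * (N - 1)
        M[N, N] -= A * N * (N - 1)
    M[-1, :] = 1.0                  # replace one redundant row by sum(P)=1
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(M, b)

F, W, A = 0.5, 1.0, 10.0            # fluctuation-dominated regime
p = steady_state(F, W, A)
N = np.arange(len(p))
mean_N = float(p @ N)
master_rate = A * float(p @ (N * (N - 1)))   # exact A<N(N-1)>
mean_field_rate = A * mean_N ** 2            # rate-equation approximation
print(mean_N, master_rate, mean_field_rate)
```

At steady state the exact solution obeys the flux balance F = W⟨N⟩ + 2A⟨N(N−1)⟩, which is a useful sanity check; the mean-field rate generally deviates from the exact one once ⟨N⟩ is of order unity or below.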

There are several new technological application fields for fast neutrons, such as accelerator-driven incineration/transmutation of long-lived radioactive nuclear wastes (in particular transuranium nuclides) into short-lived or stable isotopes by secondary spallation neutrons produced by high-intensity, intermediate-energy, charged-particle beams; prolonged planetary space missions; and shielding for particle accelerators. In particular, accelerator-driven subcritical systems (ADS) can be used for fission energy production and/or nuclear waste transmutation, as well as in intermediate-energy accelerator-driven neutron sources, which involve ions and neutrons with energies beyond 20 MeV, the upper limit of existing data files produced for fusion and fission applications. In these systems, neutron scattering cross sections and emission differential data are very important for reactor neutronics calculations. The transition-rate calculation introduces a parameter that determines the mean free path of the nucleon in nuclear matter. This parameter allows an increase in the mean free path, simulating effects that are not otherwise considered in the calculations, such as conservation of parity and angular momentum in intranuclear transitions. In this study, we have investigated the multiple-preequilibrium matrix-element constant for internal transitions for uranium and thorium (n,xn) neutron-emission spectra. The neutron-emission spectra produced by (n,xn) reactions on nuclei of some targets (for spallation) have been calculated. In the calculations, we have used the geometry-dependent hybrid model and the cascade exciton model including preequilibrium effects. The pre-equilibrium direct effects have been examined using the full exciton model. All calculated results have been compared with the available experimental data and found to be in agreement.

We discuss the generalization to curved spacetime of a path-integral formalism of quantum field theory based on the sum over paths first going forward in time in the presence of one external source from an in vacuum to a state defined on a hypersurface of constant time in the future, and then backwards in time in the presence of a different source to the same in vacuum. This closed-time-path formalism, which generalizes the conventional method based on in-out vacuum persistence amplitudes, yields real and causal effective actions, field equations, and expectation values. We apply this method to two problems in semiclassical cosmology. First we study the back reaction of particle production in a radiation-filled Bianchi type-I universe with a conformal scalar field. Unlike the in-out formalism, which yields complex geometries, the real and causal effective action here yields equations for real effective geometries, with more readily interpretable results. It also provides a clear identification of particle production as a dissipative process in semiclassical theories. In the second problem we calculate the vacuum expectation value of the stress-energy tensor for a nonconformal massive λφ⁴ theory in a Robertson-Walker universe. This study serves to illustrate the use of Feynman diagrams and higher-loop calculations in this formalism. It also demonstrates the economy of this method in the calculation of expectation values over the mode-sum Bogolubov transformation methods ordinarily applied to matrix elements calculated in the conventional in-out approach.

The sensitivity of molecular dynamics on changes in the potential energy function plays an important role in understanding the dynamics and function of complex molecules. We present a method to obtain path ensemble averages of a perturbed dynamics from a set of paths generated by a reference dynamics. It is based on the concept of path probability measure and the Girsanov theorem, a result from stochastic analysis to estimate a change of measure of a path ensemble. Since Markov state models (MSMs) of the molecular dynamics can be formulated as a combined phase-space and path ensemble average, the method can be extended to reweight MSMs by combining it with a reweighting of the Boltzmann distribution. We demonstrate how to efficiently implement the Girsanov reweighting in a molecular dynamics simulation program by calculating parts of the reweighting factor "on the fly" during the simulation, and we benchmark the method on test systems ranging from a two-dimensional diffusion process and an artificial many-body system to alanine dipeptide and valine dipeptide in implicit and explicit water. The method can be used to study the sensitivity of molecular dynamics on external perturbations as well as to reweight trajectories generated by enhanced sampling schemes to the original dynamics.
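As a minimal illustration of the Girsanov reweighting idea (not the authors' implementation; the OU-type drifts and all parameters below are invented for the example), one can reweight Euler-Maruyama paths generated under a reference drift to estimate path-ensemble averages under a perturbed drift:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_paths(drift, n_paths=20000, n_steps=50, dt=0.01, sigma=1.0, x0=0.0):
    """Euler-Maruyama paths of dX = drift(X) dt + sigma dW, returning the
    states and the Brownian increments (the latter are needed for reweighting)."""
    X = np.full(n_paths, x0)
    xs, dWs = [X.copy()], []
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        X = X + drift(X) * dt + sigma * dW
        xs.append(X.copy())
        dWs.append(dW)
    return np.array(xs), np.array(dWs)

def girsanov_weights(xs, dWs, drift_ref, drift_tgt, dt=0.01, sigma=1.0):
    """Per-path weights dP_tgt/dP_ref from the discretised Girsanov formula:
    log w = sum db*dW - 0.5*db^2*dt, with db = (b_tgt - b_ref)/sigma."""
    logw = np.zeros(xs.shape[1])
    for k, dW in enumerate(dWs):
        db = (drift_tgt(xs[k]) - drift_ref(xs[k])) / sigma
        logw += db * dW - 0.5 * db ** 2 * dt
    return np.exp(logw)

b_ref = lambda x: -x          # reference: Ornstein-Uhlenbeck process
b_tgt = lambda x: -x - 0.5    # perturbed: shifted drift
xs, dWs = simulate_paths(b_ref)
w = girsanov_weights(xs, dWs, b_ref, b_tgt)
# Reweighted endpoint mean should match a direct simulation of the target.
mean_reweighted = np.average(xs[-1], weights=w)
xs_direct, _ = simulate_paths(b_tgt)
print(mean_reweighted, xs_direct[-1].mean())
```

Since the weights have expectation one under the reference dynamics, their sample mean is a quick diagnostic for the variance of the reweighting; large perturbations or long trajectories degrade it, which is why the MSM-based splitting discussed above is attractive in practice.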

The purpose of this study is to determine the critical environmental parameters of soil K availability and to quantify their contributions by using a proposed path model. In this study, plot experiments were designed with different treatments, and soil samples were collected and analyzed in the laboratory to investigate the influence of soil properties on soil potassium forms (water-soluble K, exchangeable K, non-exchangeable K). Furthermore, path analysis based on the proposed path model was carried out to evaluate the relationship between potassium forms and soil properties. The research findings are as follows. Firstly, the key direct factors were soil S, the sodium-potassium ratio (Na/K), the chemical index of alteration (CIA), soil organic matter in soil solution (SOM), Na, and total nitrogen in soil solution (TN), and the key indirect factors were carbonate (CO3), Mg, pH, Na, S, and SOM. Secondly, the path model can effectively determine the direction and magnitude of potassium status changes between exchangeable potassium (eK), non-exchangeable potassium (neK) and water-soluble potassium (wsK) under the influence of specific environmental parameters. In the reversible equilibrium state of [Formula: see text], the K balance was inclined to move in the β and χ directions in treatments of potassium shortage. However, in the reversible equilibrium of [Formula: see text], the K balance was inclined to move in the θ and λ directions in treatments of water shortage. The results showed that the proposed path model is able to quantitatively disclose the direction of movement of K status and quantify its equilibrium threshold. It provides a theoretical and practical basis for scientific and effective fertilization in agricultural plant growth. PMID:24204659

Physiological studies of the human retina show the existence of at least two visual information processing channels, the magnocellular and the parvocellular. The two have different spatial, temporal and chromatic features. This paper focuses on the different spatial resolutions of these two channels. We propose a neuromorphic model of the channels that matches the retina's physiology. Building on the Deutsch and Deutsch model (1992), we propose two configurations (one for each visual channel) of the connections between the retina's different cell layers. The responses of the proposed model behave similarly to those of the visual cells: each channel has an optimum response at a given stimulus size, which decreases for larger or smaller stimuli. This size is bigger for the magno path than for the parvo path and, in the end, both channels produce a magnification of the borders of a stimulus.

Recently, Colomo and Sportiello introduced a powerful method, known as the tangent method, for computing the arctic curve in statistical models which have a (non- or weakly-) intersecting lattice path formulation. We apply the tangent method to compute arctic curves in various models: the domino tiling of the Aztec diamond for which we recover the celebrated arctic circle; a model of Dyck paths equivalent to the rhombus tiling of a half-hexagon for which we find an arctic half-ellipse; another rhombus tiling model with an arctic parabola; the vertically symmetric alternating sign matrices, where we find the same arctic curve as for unconstrained alternating sign matrices. The latter case involves lattice paths that are non-intersecting but that are allowed to have osculating contact points, for which the tangent method was argued to still apply. For each problem we estimate the large size asymptotics of a certain one-point function using LU decomposition of the corresponding Gessel–Viennot matrices, and a reformulation of the result amenable to asymptotic analysis.

This paper proposes a model-based notion of "submicro representations of chemical reactions". Based on three structural models of matter (the simple particle model, the atomic model and the free electron model of metals), we suggest there are two major models of reaction in school chemistry curricula: (a) reactions that are simple…

Interpolated Variational Transition State Theory with Multidimensional Tunneling contributions (IVTST/MT) has been applied to the reaction of C2H6 + OH, and it yields rate constants that agree well with the available experimental information. The main disadvantage of this method is the difficulty of interpolating all required information from a few points along the reaction path. A more recent alternative is Variational Transition State Theory with Multidimensional Tunneling and Interpolated Corrections (VTST/MT-IC, also called dual-level direct dynamics), in which the reaction-path properties are first determined at an economical (lower) level of theory and then "corrected" using more accurate information obtained at a higher level for a selected number of points on the reaction path. The VTST/MT-IC method also allows for interpolation through the wider reaction swath when large-curvature tunneling occurs. In the present work we examine the affordability/accuracy tradeoff for several combinations of higher and lower levels for VTST/MT-IC reaction rate calculations on the C2H6 + OH process. Various levels of theory (including NDDO-SRP and ab initio ROMP2, UQCISD, UQCISD(T), and UCCSD) have been employed for the electronic structure calculations. We also compare several semiclassical approaches implemented in the POLYRATE and MORATE programs for taking tunneling effects into account.

Using geometrical-optics and physical-optics methods, models of three different active-interference configurations are built: the wedge-plate interference optical path, the Michelson interferometer, and the Mach-Zehnder interferometer. The optical path difference (OPD) produced by the different interference configurations, along with expressions for the fringe spacing and contrast, have been derived. The results show that the far-field interference peak intensity of the wedge-plate configuration is small, so its detection distance is limited; the low contrast of the Michelson interferometer degrades the performance of the detection system; and the Mach-Zehnder interferometer has greater advantages in peak intensity and in the adjustable range of fringe spacing and contrast ratio. The results of this study are useful for the theoretical study and practical application of laser active-interference detection.

In this paper the problem of tool path optimization for CNC (Computer Numerical Control) cutting machines is considered. A classification of cutting techniques is offered, and we also propose a new classification of tool path problems. The tasks of cost minimization and time minimization for the standard cutting technique (Continuous Cutting Problem, CCP) and for one of the non-standard cutting techniques (Segment Continuous Cutting Problem, SCCP) are formalized. We show that these optimization tasks can be interpreted as discrete optimization problems (generalized traveling salesman problems with additional constraints, GTSP). The formalization of some constraints for these tasks is described. To solve the GTSP, we propose using the mathematical model of Prof. Chentsov, based on the concept of a megalopolis and dynamic programming.
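To make the GTSP interpretation concrete, here is a brute-force sketch (illustrative only, not the Chentsov model): each cluster holds the candidate pierce points of one contour, and the tool must visit exactly one point per cluster at minimum travel cost.

```python
from itertools import permutations, product
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def gtsp_bruteforce(clusters, start=(0.0, 0.0)):
    """Exact solution of a tiny generalized TSP: visit exactly one point
    from every cluster, starting at the tool's park position. Exhaustive
    enumeration, so only suitable for toy instances."""
    best_cost, best_tour = float("inf"), None
    for order in permutations(range(len(clusters))):        # cluster order
        for choice in product(*(clusters[i] for i in order)):  # point per cluster
            cost, pos = 0.0, start
            for pt in choice:
                cost += dist(pos, pt)
                pos = pt
            if cost < best_cost:
                best_cost, best_tour = cost, list(zip(order, choice))
    return best_cost, best_tour

# Three contours with candidate pierce points (made-up coordinates).
clusters = [[(1, 0), (1, 1)], [(3, 0), (3, 2)], [(5, 1)]]
cost, tour = gtsp_bruteforce(clusters)
print(cost, tour)
```

Real instances are of course solved with dynamic programming over megalopolises rather than enumeration; the brute force above only fixes the problem statement: minimize idle travel over both the cluster order and the chosen entry point within each cluster.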

To understand how single neurons process sensory information, it is necessary to develop suitable stochastic models describing the response variability of the recorded spike trains. Spikes in a given neuron are produced by the synergistic action of the voltage-dependent sodium and potassium channels that open or close their gates. The Hodgkin-Huxley (HH) equations describe the ionic mechanisms underlying the initiation and propagation of action potentials through a set of nonlinear ordinary differential equations that approximate the electrical characteristics of the excitable cell. The path integral provides an adequate approach for computing quantities such as transition probabilities, and any stochastic system can be expressed in terms of this methodology. We use the technique of path integrals to determine the analytical solution driven by a non-Gaussian colored noise when the HH equations are considered as a stochastic system. The different neuronal dynamics are investigated by estimating the path-integral solutions driven by a non-Gaussian colored noise with parameter q. More specifically, we take into account the correlational structure of the complex neuronal signals not just by estimating the transition probability associated with the Gaussian approach to the stochastic HH equations, but instead by considering much more subtle processes accounting for the non-Gaussian noise that could be induced by the surrounding neural network and by feedforward correlations. This allows us to investigate the underlying dynamics of the neural system when different scenarios of noise correlations are considered.

A microscopic nuclear reaction model is applied to neutron elastic and direct inelastic scattering, and to pre-equilibrium reactions. The JLM folding model is used with nuclear-structure information calculated within the quasi-particle random phase approximation implemented with the Gogny D1S interaction. The folding model for direct inelastic scattering is extended to include rearrangement corrections stemming from both isoscalar and isovector density variations occurring during a transition. The quality of the predicted (n,n), (n,n′), (n,xn) and (n,n′γ) cross sections, as well as the generality of the present microscopic approach, shows that it is a powerful tool that can help improve nuclear reaction data quality. Short- and long-term perspectives are drawn to extend the present approach to more systems, to include missing reaction mechanisms, and to treat both structure and reaction problems consistently. (orig.)

Multiresponse modelling is a powerful tool for studying the complex kinetics of reactions occurring in food products. This modelling technique uses information on the reactants and products involved, allowing insightful estimation of kinetic parameters and helping to clarify reaction mechanisms. One example of a complex reaction that occurs in food processing is the caramelisation reaction. Caramelisation is the common name for a group of reactions observed when carbohydrates are exposed to high temp...

Efficient analysis and simulation of multiscale stochastic systems of chemical kinetics is an ongoing area of research, and is the source of many theoretical and computational challenges. In this paper, we present a significant improvement to the constrained approach, a method for computing the effective dynamics of slowly changing quantities in these systems that does not rely on the quasi-steady-state assumption (QSSA). The QSSA can cause errors in the estimation of effective dynamics for systems where the difference in timescales between the “fast” and “slow” variables is not so pronounced. This new application of the constrained approach allows us to compute the effective generator of the slow variables without the need for expensive stochastic simulations. This is achieved by finding the null space of the generator of the constrained system. For complex systems where this is not possible, or where the constrained subsystem is itself multiscale, the constrained approach can be applied iteratively. This breaks the problem down into finding the solutions to many small eigenvalue problems, which can be efficiently solved using standard methods. Since this methodology does not rely on the quasi-steady-state assumption, the effective dynamics that are approximated are highly accurate and, in the case of systems with only monomolecular reactions, exact. We demonstrate this with some numerics, and also use the effective generators to sample paths of the slow variables conditioned on their endpoints, a task which would be computationally intractable for the generator of the full system.
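The core step — obtaining the quasi-stationary distribution of the constrained fast subsystem from the null space of its generator, then averaging the slow rates over it — can be sketched for a toy two-state fast process (all rates invented for illustration):

```python
import numpy as np

# Fast subsystem: two conformations A <-> B with fast switching rates.
a, b = 50.0, 30.0             # A->B and B->A rates (fast, made-up numbers)
k_slow = 2.0                  # slow reaction fires only from conformation B

# Generator of the constrained (fast-only) subsystem; columns = from-state,
# so each column sums to zero and dP/dt = L_fast P.
L_fast = np.array([[-a,  b],
                   [ a, -b]])

# Quasi-stationary distribution = normalised null-space vector of L_fast.
_, _, Vt = np.linalg.svd(L_fast)
pi = np.abs(Vt[-1])           # right singular vector of the zero singular value
pi /= pi.sum()

# Effective slow rate, averaged over the fast equilibrium: k_eff = k_slow*pi_B.
k_eff = k_slow * pi[1]
print(pi, k_eff)
```

For this two-state example the null space is available in closed form, pi = (b, a)/(a + b), so the numerics can be checked directly; in larger systems the same null-space computation (or the iterative variant described above) replaces the closed form.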

National Aeronautics and Space Administration — Develop path planning methods that incorporate an approximate model of ocean currents in path planning for a range of autonomous marine vehicles such as surface...

This book provides readers with extensive information on path planning optimization for both single and multiple Autonomous Guided Vehicles (AGVs), and discusses practical issues involved in advanced industrial applications of AGVs. After discussing previously published research in the field and highlighting the current gaps, it introduces new models developed by the authors with the goal of reducing costs and increasing productivity and effectiveness in the manufacturing industry. The new models address the increasing complexity of manufacturing networks, due for example to the adoption of flexible manufacturing systems that involve automated material handling systems, robots, numerically controlled machine tools, and automated inspection stations, while also considering the uncertainty and stochastic nature of automated equipment such as AGVs. The book discusses and provides solutions to important issues concerning the use of AGVs in the manufacturing industry, including material flow optimization with A...

The potential impact of contrails and alterations in the lifetime of background cirrus due to subsonic-airplane water and aerosol emissions has been investigated in a set of experiments using the GISS GCM coupled to a q-flux ocean. Cirrus clouds at a height of 12-15 km, with an optical thickness of 0.33, were input to the model on x% of clear-sky occasions along subsonic aircraft flight paths, where x is varied from 0.05% to 6%. Two types of experiments were performed: one with the percentage cirrus cloud increase independent of flight density, as long as a certain minimum density was exceeded; the other with the percentage related to the density of fuel expenditure. The overall climate impact was similar with the two approaches, due to the feedbacks of the climate system. Fifty years were run for each of eight such experiments, with the following conclusions based on the stable results from years 30-50. The experiments show that adding cirrus to the upper troposphere results in a stabilization of the atmosphere, which leads to some decrease in cloud cover at levels below the insertion altitude. Considering the total effect on upper-level cloud cover (above 5 km altitude), the equilibrium global mean temperature response shows that altering high-level clouds by 1% changes the global mean temperature by 0.43 °C. The response is highly linear (linear correlation coefficient of 0.996) for high cloud cover changes between 0.1% and 5%. The effect is amplified in the Northern Hemisphere, more so with greater cloud cover change. The temperature effect maximizes around 10 km (at greater than 40C warming with a 4.8% increase in upper-level clouds), again more so with greater warming. The high cloud cover change shows the flight path influence most clearly with the smallest warming magnitudes; with greater warming, the model feedbacks introduce a strong tropical response. Similarly, the surface temperature response is dominated by the feedbacks, and shows

Feedback whistling is one of the most severe problems with hearing aids, especially in dynamic situations, e.g. when the users hug or pick up a telephone. This paper investigates the properties of the dynamic feedback paths of digital hearing aids and proposes a model based on a reflection assumption… gain. The method is also extended to dual-microphone hearing aids to assess the possibility of relating the two dynamic feedback paths through the reflection model. However, it is found that in a complicated acoustic environment, the relation between the two feedback paths can be very intricate…

We study the problem of applying statistical methods for approximate model checking of probabilistic systems against properties encoded as PCTL formulas. Such approximate methods have been proposed primarily to deal with the state-space explosion that makes exact model checking by numerical methods practically infeasible for large systems. However, the existing statistical methods either consider a restricted subset of PCTL, specifically the subset that can only express bounded-until properties, or rely on a user-specified finite bound on the sample path length. We propose a new method that does not have such restrictions and can be effectively used to reason about unbounded-until properties. We approximate the probabilistic characteristics of an unbounded-until property by those of a bounded-until property for a suitably chosen value of the bound. In essence, our method is a two-phase process: (a) the first phase is concerned with identifying the bound k0; (b) the second phase computes the probability of satisfying the k0-bounded-until property as an estimate for the probability of satisfying the corresponding unbounded-until property. In both phases, it is sufficient to verify bounded-until properties, which can be done effectively using existing statistical techniques. We prove the correctness of our technique and present prototype implementations. We empirically show the practical applicability of our method on different case studies, including a simple infinite-state model and large finite-state models such as the IPv4 zeroconf protocol and the dining philosophers protocol modeled as discrete-time Markov chains.
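A toy sketch of the two-phase idea (a hypothetical three-state DTMC, not the paper's implementation): phase 1 doubles the bound k until the bounded-reachability estimate stops changing appreciably, and phase 2 reports the resulting estimate as a stand-in for the unbounded-until probability.

```python
import random

random.seed(1)

# Tiny DTMC (made-up): from state 0, stay w.p. 0.5, fall into the FAIL
# sink (state 1) w.p. 0.2, or reach the GOAL (state 2) w.p. 0.3.
# Analytically, P(eventually reach GOAL) = 0.3 / (0.3 + 0.2) = 0.6.
P = {0: [(0, 0.5), (1, 0.2), (2, 0.3)], 1: [(1, 1.0)], 2: [(2, 1.0)]}
GOAL = 2

def sample_bounded(k, n=20000):
    """Phase 2 primitive: Monte Carlo estimate of P(F<=k GOAL) from n paths."""
    hits = 0
    for _ in range(n):
        s = 0
        for _ in range(k):
            if s == GOAL:
                break
            r, acc = random.random(), 0.0
            for t, prob in P[s]:
                acc += prob
                if r < acc:
                    s = t
                    break
        hits += (s == GOAL)
    return hits / n

def choose_bound(eps=0.01, k=1):
    """Phase 1: double k until the bounded estimate changes by <= eps,
    then use it to approximate the unbounded-until probability."""
    prev = sample_bounded(k)
    while True:
        k *= 2
        cur = sample_bounded(k)
        if abs(cur - prev) <= eps:
            return k, cur
        prev = cur

k0, p_hat = choose_bound()
print(k0, p_hat)
```

The stopping heuristic here is deliberately naive; the paper's contribution is precisely a principled, provably correct choice of k0, but the two-phase structure is the same.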

Highlights: • A no-arbitrage term structure model is applied to the electricity market. • Volatility parameters of the HJM model are estimated by using German data. • The model captures the seasonal price behaviour. • Electricity futures prices are forecasted. • Call options are evaluated according to different strike prices. - Abstract: The liberalization of electricity markets gave rise to new patterns of futures prices, and the need for models that could efficiently describe price dynamics grew exponentially, in order to improve decision making for all of the agents involved in energy issues. Although there are papers focused on modelling electricity as a flow commodity by using the Heath et al. (1992) approach in order to price futures contracts, the literature is scarce on attempts to consider a seasonal volatility as input to models. In this paper, we propose a futures price model that captures observed stylized facts in the electricity market, in particular stochastic price variability and periodic behavior. We consider a seasonal path-dependent volatility for futures returns that are modelled in the Heath et al. (1992) framework, and we obtain the dynamics of futures prices. We use these series to price the underlying asset of a call option from a risk-management perspective. We test the model on the German electricity market, and we find that it is accurate in futures and option value estimates. In addition, the obtained results and the proposed methodology can be useful as a starting point for risk management or portfolio optimization under uncertainty in the current context of energy markets.

into consideration the effects of temperature, acidity, and the choice of the catalyst. Parameter estimation and uncertainty analysis were conducted on the kinetic model parameters using experimental data available in the literature. Finally, one-factor-at-a-time sensitivity analysis in the form of deviations......The pharmaceutical industry faces several challenges and barriers when implementing new or improving current pharmaceutical processes, such as competition from generic drug manufacturers and stricter regulations from the U.S. Food and Drug Administration and the European Medicines Agency. The demand...... for efficient and reliable models to simulate and design/improve pharmaceutical processes is therefore increasing. For the case of ibuprofen, a well-known anti-inflammatory drug, the existing models do not include its complete synthesis path, usually referring only to one out of a set of different reactions...

We define the notion of an entity model for a special kind of document popular on the web: an article followed by a list of reactions to that article, usually by many authors, usually in reverse chronological order. We call these documents trigger-reactions pairs. The entity model describes which

The UV-laser absorption technique in a multipath cell (with excimer-laser photolysis for radical production) is used to investigate the rate constant of the reaction of OH with carbon monoxide. The pressure dependence and the influence of collision partners (measurements in pure oxygen up to one atmosphere) of this important atmospheric chemical reaction are determined. In the kinetic measurements, detection limits of 10^7 OH cm^-3 are reached with millisecond time resolution. Furthermore, the application of the cw laser for stationary OH measurements (for example in smog chambers or the free troposphere) is described. The possibilities and limits of different detection methods are discussed with respect to noise spectra. Modifications of the apparatus with a frequency-modulation technique are presented, with an extrapolated detection limit of 10^5 OH cm^-3. (orig.) With 43 refs., 16 figs. (in German)

Single path defibrillation shock methods have been improved through the use of the Charge Banking Model of defibrillation, which predicts the response of the heart to shocks as a simple resistor-capacitor (RC) circuit. While dual path defibrillation configurations have significantly reduced defibrillation thresholds, improvements to dual path defibrillation techniques have been limited to experimental observations without a practical model to aid in improving dual path defibrillation techniques. The Charge Banking Model has been extended into a new Extended Charge Banking Model of defibrillation that represents small sections of the heart as separate RC circuits, uses a weighting factor based on published defibrillation shock field gradient measures, and implements a critical mass criteria to predict the relative efficacy of single and dual path defibrillation shocks. The new model reproduced the results from several published experimental protocols that demonstrated the relative efficacy of dual path defibrillation shocks. The model predicts that time between phases or pulses of dual path defibrillation shock configurations should be minimized to maximize shock efficacy. Through this approach the Extended Charge Banking Model predictions may be used to improve dual path and multi-pulse defibrillation techniques, which have been shown experimentally to lower defibrillation thresholds substantially. The new model may be a useful tool to help in further improving dual path and multiple pulse defibrillation techniques by predicting optimal pulse durations and shock timing parameters.
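The per-section RC idea can be sketched as follows; the time constant, threshold, weights, and critical fraction below are all invented for illustration, not taken from the paper.

```python
import math

TAU_MS = 3.0             # assumed membrane time constant (ms)
V_THRESH = 5.0           # assumed per-section activation threshold (V)
CRITICAL_FRACTION = 0.9  # critical-mass criterion: 90% of sections excited

def section_response(v_shock, weight, duration_ms, v0=0.0):
    """RC charging of one myocardial section toward weight * v_shock."""
    target = weight * v_shock
    return target + (v0 - target) * math.exp(-duration_ms / TAU_MS)

def shock_succeeds(v_shock, weights, duration_ms):
    """Critical-mass test: enough sections must exceed the threshold."""
    excited = sum(1 for w in weights
                  if section_response(v_shock, w, duration_ms) >= V_THRESH)
    return excited / len(weights) >= CRITICAL_FRACTION

# Hypothetical field-gradient weights: with a single current path, sections
# far from the electrodes see a weak field; a second path fills them in.
single_path = [1.0, 0.8, 0.6, 0.4, 0.3, 0.2]
dual_path = [1.0, 0.8, 0.7, 0.7, 0.8, 0.9]

def defib_threshold(weights, duration_ms=8.0):
    """Smallest integer shock strength that reaches critical mass."""
    return min(v for v in range(1, 200)
               if shock_succeeds(v, weights, duration_ms))
```

With these made-up weights the dual-path configuration reaches critical mass at a much lower shock strength, mirroring the reduced thresholds reported experimentally.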

The Real-Time Specification for Java (RTSJ) is an augmentation of Java for real-time applications of various degrees of hardness. The central features of RTSJ are real-time threads; user-defined schedulers; asynchronous events, handlers, and control transfers; a priority-inheritance-based default scheduler; non-heap memory areas such as immortal and scoped; and non-heap real-time threads whose execution is not impeded by garbage collection. The Robust Software Systems group at NASA Ames Research Center is developing JAVA PATHFINDER (JPF), a Java model checker. JPF is at its core a state-exploring JVM that can examine alternative paths in a Java program (e.g., via backtracking) by trying all nondeterministic choices, including thread scheduling order. This paper describes our implementation of an RTSJ profile (subset) in JPF, including requirements, design decisions, and current implementation status. Two examples are analyzed: jobs on a multiprogramming operating system, and a complex resource contention example involving autonomous vehicles crossing an intersection. The utility of JPF in finding logic and timing errors is illustrated, and the remaining challenges in supporting all of RTSJ are assessed.

Major observations have been formulated after reviewing test results for over 100 sodium-concrete reaction tests. The observations form the basis for developing a mechanistic model to predict the transient behavior of sodium-concrete reactions. The major observations are listed. Mechanisms associated with sodium and water transport to the reaction zone are identified and represented by appropriate mathematical expressions. The model attempts to explain large-scale, long-term (100 h) test results where sodium-concrete reactions terminated even in the presence of unreacted sodium and concrete.

Experimental seawater-peridotite reaction pathways to form serpentinites at 300 °C, 500 bars, can be accurately modeled using the EQ3/6 codes in conjunction with thermodynamic and kinetic data from the literature and unpublished compilations. These models provide both confirmation of experimental interpretations and more detailed insight into hydrothermal reaction processes within the oceanic crust. The accuracy of these models depends on careful evaluation of the aqueous speciation model, use of mineral compositions that closely reproduce compositions in the experiments, and definition of realistic reactive components in terms of composition, thermodynamic data, and reaction rates.

The new computer code MEDICUS has been used to calculate cross sections of nuclear reactions. The code, implemented in the MATLAB 6.5, Mathematica 5, and Fortran 95 programming languages, can be run in graphical and command-line mode. A graphical user interface (GUI) has been built that allows the user to perform calculations and to plot results just by mouse clicking. The MS Windows XP and Red Hat Linux platforms are supported. MEDICUS is a modern nuclear reaction code that can compute charged-particle-, photon-, and neutron-induced reactions in the energy range from thresholds to about 200 MeV. The calculation of the cross sections of nuclear reactions is done in the framework of the Exact Many-Body Nuclear Cluster Model (EMBNCM), direct nuclear reactions, pre-equilibrium reactions, the optical model, DWBA, and the exciton model with cluster emission. The code can also be used for the calculation of the nuclear cluster structure of nuclei. We have calculated nuclear cluster models for some nuclei such as 177Lu, 90Y, and 27Al. It has been found that the nucleus 27Al can be represented by two different nuclear cluster models: 25Mg + d and 24Na + 3He. Cross sections as a function of energy for the reaction 27Al(3He,x)22Na, an established production route for 22Na, are calculated with the code MEDICUS. Theoretical calculations of cross sections are in good agreement with experimental results. Reaction mechanisms are taken into account. (author)

To find the shortest collision-free path in a room containing obstacles we designed a chemical processor and coupled it with a cellular-automaton processor. In the chemical processor obstacles are represented by sites of high concentration of potassium iodide, and a planar substrate is saturated with palladium chloride. Potassium iodide diffuses into the substrate and reacts with palladium chloride. A dark coloured precipitate of palladium iodide is formed almost everywhere except at sites where two or more diffusion wavefronts collide. The less coloured sites are situated at the furthest distance from the obstacles. Thus, the chemical processor develops a repulsive field generated by the obstacles. A snapshot of the chemical processor is input to a cellular automaton. The automaton behaves like a discrete excitable medium; also, every cell of the automaton is supplied with a pointer that shows the origin of the cell's excitation. The excitation spreads along the cells corresponding to precipitate-depleted sites of the chemical processor. When the destination site is excited, waves travel on the lattice and update the orientations of the pointers. Thus, the automaton constructs a spanning tree, made of pointers, that guides a traveller towards the destination point. In this way the automaton medium generates an attractive field, and the combination of this attractive field with the repulsive field generated by the chemical processor provides us with a solution of the collision-free path problem.
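The pointer-based spanning tree amounts to a breadth-first excitation wave propagated from the destination over the precipitate-depleted cells. A minimal grid sketch (the room layout is invented for illustration):

```python
from collections import deque

def spanning_tree_path(grid, start, goal):
    """Breadth-first 'excitation wave' from the goal; each cell's pointer
    records where its excitation came from, forming a spanning tree."""
    rows, cols = len(grid), len(grid[0])
    pointer = {goal: None}
    wave = deque([goal])
    while wave:
        r, c = cell = wave.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
               and grid[nr][nc] == 0 and (nr, nc) not in pointer:
                pointer[(nr, nc)] = cell   # pointer toward the goal
                wave.append((nr, nc))
    if start not in pointer:
        return None                        # goal unreachable
    path, cell = [], start
    while cell is not None:                # follow pointers to the goal
        path.append(cell)
        cell = pointer[cell]
    return path

# 0 = free (precipitate-depleted), 1 = obstacle (high precipitate)
room = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 0]]
route = spanning_tree_path(room, start=(0, 0), goal=(2, 2))
```

Because the wave expands one cell per step, following the pointers from any start cell yields a shortest collision-free route to the goal.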

The discussion of constituent models and large transverse momentum reactions includes the structure of hard scattering models, dimensional counting rules for large transverse momentum reactions, dimensional counting and exclusive processes, the deuteron form factor, applications to inclusive reactions, predictions for meson and photon beams, the charge-cubed test for the e±p → e±γX asymmetry, the quasi-elastic peak in inclusive hadronic reactions, correlations, and the multiplicity bump at large transverse momentum. Also covered are the partition method for bound state calculations, proofs of dimensional counting, minimal neutralization and quark--quark scattering, the development of the constituent interchange model, and the A dependence of high transverse momentum reactions.

Reaction wheels are rotating devices used for the attitude control of spacecraft. However, reaction wheels also generate undesired disturbances in the form of vibrations, which may have an adverse effect on the pointing accuracy and stability of spacecraft (optical) payloads. A disturbance model for

The prospect of cement and concrete technologies depends on a more in-depth understanding of cement hydration reactions. Hydration reaction models simulate the development of the microstructures that can finally be used to estimate the cement-based material properties that influence performance and

A new model of positronium (Ps) formation is proposed. Positronium is assumed to be formed by a reaction between a positron and an electron in the positron spur. Ps formation must compete with electron‐ion recombination and electron or positron reactions with solvent molecules and scavenger...

An empirical model was generated to estimate diversity gain on Earth-space propagation paths as a function of Earth terminal separation distance, link frequency, elevation angle, and angle between the baseline and the path azimuth. The resulting model reproduces the entire experimental data set with an RMS error of 0.73 dB.
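A model of this kind can be sketched as a product of separable factors fit to measurements. The functional form and every coefficient below are invented placeholders, not the published fit (which reproduced the data with 0.73 dB RMS error).

```python
import math

# Purely illustrative coefficients -- not the published model.
A, B = 2.9, 0.12

def diversity_gain(sep_km, freq_ghz, elev_deg, baseline_deg):
    """Hypothetical empirical form: a saturating gain in separation
    distance, modulated by frequency, elevation, and baseline angle."""
    g_dist = A * (1.0 - math.exp(-B * sep_km))   # saturates with distance
    g_freq = math.exp(-0.025 * freq_ghz)         # mild frequency roll-off
    g_elev = 0.006 * elev_deg + 0.84             # higher elevation helps
    g_angle = 1.0 - 0.0012 * baseline_deg        # baseline/path alignment
    return g_dist * g_freq * g_elev * g_angle

def rms_error(model, data):
    """RMS error (dB) between model and measured (inputs, gain) pairs."""
    se = [(model(*x) - g) ** 2 for x, g in data]
    return math.sqrt(sum(se) / len(se))
```

In a real fit the coefficients would be chosen to minimize `rms_error` over the whole experimental data set.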

Traditionally, aerospace software testing engineers rely on their own work experience and on communication with the software developers to describe the software under test and to write test cases by hand, which is time-consuming, inefficient, and prone to gaps. Using the high-reliability model-based testing (MBT) tools developed by our company, a single modeling pass can automatically generate the test-case documents, which is efficient and accurate. A UML model can accurately express the paths on which the process description depends, but existing path-generation algorithms are either too simple, unable to combine branch paths with loops into complete paths, or too cumbersome, generating meaningless path permutations that are superfluous for aerospace software testing. Drawing on our experience with aerospace payload software, we developed a tailored path-generation algorithm for the UML graphs that describe aerospace software.

PLS path modelling has previously been found to be robust to multicollinearity both between latent variables and between manifest variables of a common latent variable (see e.g. Cassel et al. (1999), Kristensen, Eskildsen (2005), Westlund et al. (2008)). However, most of the studies investigate...... models with relatively few variables and very simple dependence structures compared to the models that are often estimated in practical settings. A recent study by Nielsen et al. (2009) found that when the model structure is more complex, PLS path modelling is not as robust to multicollinearity between...... latent variables as previously assumed. A difference in the standard error of path coefficients of as much as 83% was found between moderate and severe levels of multicollinearity. Large differences were found not only for large path coefficients, but also for small path coefficients and in some cases...

This work presents a very accurate experimental method based on radioactive beams for the study of the spectroscopic properties of unbound states. It makes use of inverse-kinematics elastic scattering of the ions of a radioactive beam from a target of stable nuclei. An application of the method to the study of radioactive nuclei of astrophysical interest is given, namely the 19Ne and 16F nuclei. It is shown that, on the basis of the properties of the proton-emitting unbound levels of 19Ne, one can develop a method for the experimental study of nova explosions. It is based on observation of the gamma emissions following the gamma decays of the radionuclides generated in the explosion. The most interesting radioactive nucleus involved in this process is 18F, the yield of which depends strongly on the rate of the 18F(p,α)15O reaction. This yield depends in turn on the properties of the states of the (18F + p) compound nucleus, i.e. the 19Ne nucleus. In addition, the unbound 16F nucleus, also of astrophysical significance in 15O-rich environments, was studied. Since 16F is an unbound nucleus, the reaction of 15O with protons, although abundant in most astrophysical media, appears to be negligible. Thus the question posed was whether the exotic 15O(p,β+)16O resonant reaction acquires some importance in various astrophysical media. This work describes a novel approach to studying the reaction mechanisms which could change drastically the role of unbound nuclei in stellar processes. This mechanism is applied to the (p,γ)(β+) and (p,γ)(p,γ) processes within 15O-rich media. The experimental studies of 19Ne and 16F were carried out with a radioactive beam of 15O ions of very low energy produced by SPIRAL at GANIL. To improve the energy resolution, thin targets were used with a 0° observation angle relative to the beam direction. The advantages of this approach are stressed, and details are given concerning the method of separation of

A recent work by Rosenberg on cluster states in reaction theory is reexamined and generalized to include energies above the threshold for breakup into four composite fragments. The problem of elastic scattering between two interacting composite fragments is reduced to an equivalent two-particle problem with an effective potential to be determined by extremum principles. For energies above the threshold for breakup into three or four composite fragments, effective few-particle potentials are introduced and the problem is reduced to effective three- and four-particle problems. The equivalent three-particle equation contains effective two- and three-particle potentials. The effective potential in the equivalent four-particle equation has two-, three-, and four-body connected parts and a piece which has two independent two-body connected parts. In the equivalent three-particle problem we show how to include the effect of a weak three-body potential perturbatively. In the equivalent four-body problem an approximate simple calculational scheme is given when one neglects the four-particle potential, whose effect is presumably very small.

Trimolecular reaction models are investigated in the compartment-based (lattice-based) framework for stochastic reaction-diffusion modeling. The formulae for the first collision time and the mean reaction time are derived for the case where three molecules are present in the solution under periodic boundary conditions. For the case of reflecting boundary conditions, similar formulae are obtained using a computer-assisted approach. The accuracy of these formulae is further verified through comparison with numerical results. The presented derivation is based on the first passage time analysis of Montroll [J. Math. Phys. 10, 753 (1969)]. Montroll's results for two-dimensional lattice-based random walks are adapted and applied to compartment-based models of trimolecular reactions, which are studied in one-dimensional or pseudo one-dimensional domains.
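The compartment-based setting can be illustrated with a small Monte Carlo experiment: three molecules perform random walks on a periodic 1D lattice and "react" when all three occupy the same compartment. Lattice size, rates, and initial positions are arbitrary choices for the sketch, and the simulation estimates the mean reaction time numerically rather than using the paper's closed formulae.

```python
import random

def mean_collision_time(K=8, jump_rate=1.0, trials=1000, seed=1):
    """Monte Carlo estimate of the mean time until three random walkers on
    a periodic 1D lattice of K compartments first share one compartment."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        pos = [0, K // 3, 2 * K // 3]     # spread-out initial condition
        t = 0.0
        while not (pos[0] == pos[1] == pos[2]):
            # Three molecules, each jumping at `jump_rate`: the next jump
            # happens after an Exp(3 * jump_rate) waiting time.
            t += rng.expovariate(3 * jump_rate)
            i = rng.randrange(3)
            pos[i] = (pos[i] + rng.choice((-1, 1))) % K
        total += t
    return total / trials

m_small = mean_collision_time(K=4, trials=400, seed=2)
m_large = mean_collision_time(K=12, trials=400, seed=2)
# Larger domains take markedly longer to bring all three molecules together.
```

Estimates of this kind are what the derived first-collision-time formulae would be compared against.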

Described is a simple device that uses a laser beam to simulate P waves. It allows students to follow ray paths, reflections, and refractions within the earth. Included is a set of exercises that lead students through the steps by which the presence of the outer and inner cores can be recognized. (Author/CW)

This study was performed to determine some physiological traits that affect soybean grain yield via sequential path analysis. In a factorial experiment, two cultivars (Harcor and Williams) were sown under four levels of nitrogen and two levels of weed management at the research station of Tabriz University, Iran, during 2004 and 2005. Grain yield, some yield components, and physiological traits were measured. Correlation coefficient analysis showed that grain yield had significant positive and negative associations with the measured traits. A sequential path analysis was performed to evaluate associations among grain yield and related traits by ordering the various variables in first-, second-, and third-order paths on the basis of their maximum direct effects and minimal collinearity. Two first-order variables, namely number of pods per plant and pre-flowering net photosynthesis, revealed the highest direct effects on total grain yield and explained 49, 44, and 47% of the variation in grain yield based on the 2004, 2005, and combined datasets, respectively. Four traits, i.e. post-flowering net photosynthesis, plant height, leaf area index, and intercepted radiation at the bottom layer of the canopy, were found to fit as second-order variables. Pre- and post-flowering chlorophyll content, main root length, and intercepted radiation at the middle layer of the canopy were placed in the third-order path. It was concluded that number of pods per plant and pre-flowering net photosynthesis are the best selection criteria for grain yield in soybean.

environments given a specific type of hearing aid. Based on this observation, a feedback path model that consists of an invariant model and a variant model is proposed. A common-acoustical-pole and zero model-based approach and an iterative least-square search-based approach are used to extract the invariant...... model from a set of impulse responses of the feedback paths. A hybrid approach combining the two methods is also proposed. The general properties of the three methods are studied using artificial datasets, and the methods are cross-validated using the measured feedback paths. The results show......

Many studies have evaluated how the characteristics of the feedback receiver, the feedback deliverer, and the feedback information influence the psychological feedback reactions of the feedback receiver, while largely neglecting that feedback intervention is a kind of social interaction process. To address this issue, this study proposes that employees' perceived insider status (PIS), as a kind of employee-organization relationship, could also influence employees' reactions to supervisory feedback. In particular, this study investigates the influence of PIS on affective and cognitive feedback reactions, namely feedback satisfaction and feedback utility. Surveys were conducted in a machinery manufacturing company in the Guangdong province of China, and samples were collected from 192 employees. Data analysis demonstrated that PIS and feedback utility possessed a U-shaped relationship, whereas PIS and feedback satisfaction exhibited a positive linear relationship. The analysis identified two kinds of mediating mechanisms related to feedback satisfaction and feedback utility. Internal feedback motivation attribution partially mediated the relationship between PIS and feedback satisfaction but not the relationship between PIS and feedback utility. In contrast, external feedback motivation attribution partially mediated the relationship between PIS and feedback utility while failing to mediate the relationship between PIS and feedback satisfaction. Theoretical contributions and practical implications of the findings are discussed at the end of the paper.

This paper explores different nonlinear control schemes applied to a simple model reaction. The model is the Salnikov model, consisting of two ordinary differential equations. The control strategies investigated are I/O linearisation, exact linearisation, and exact linearisation combined with LQR...

The interface structures between SiC and metals are reviewed for SiC/metal systems. The metals are divided into carbide-forming and non-carbide-forming groups. Carbide-forming metals form metal carbide granules or a carbide zone on the metal side, and a metal silicide zone on the SiC side. Further diffusion of Si and C from the SiC causes the formation of a ternary T phase, depending on the metal. Non-carbide-forming metals form a silicide zone containing graphite, or a layered structure of metal silicide and metal silicide containing graphite. The diffusion path between SiC and metal forms along tie-lines connecting SiC and the metal on the corresponding ternary Si-C-M system. The reactivity of the metals is dominated by their ability to form carbides or silicides. The reactivity tendencies of the elements are discussed on the periodic table, and Ti shows the highest reactivity among the carbide-forming metals. For non-carbide-forming metals the reactivity sequence is Fe>Ni>Co. (orig.)

This fourth edition introduces multiple-latent variable models by utilizing path diagrams to explain the underlying relationships in the models. The book is intended for advanced students and researchers in the areas of social, educational, clinical, ind

Chemical reactions are involved at many stages of the drug design process. This starts with the analysis of biochemical pathways that are controlled by enzymes that might be downregulated in certain diseases. In the lead discovery and lead optimization process compounds have to be synthesized in order to test them for their biological activity. And finally, the metabolism of a drug has to be established. A better understanding of chemical reactions could strongly help in making the drug design process more efficient. We have developed methods for quantifying the concepts an organic chemist is using in rationalizing reaction mechanisms. These methods allow a comprehensive modeling of chemical reactivity and thus are applicable to a wide variety of chemical reactions, from gas phase reactions to biochemical pathways. They are empirical in nature and therefore allow the rapid processing of large sets of structures and reactions. We will show here how methods have been developed for the prediction of acidity values and of the regioselectivity in organic reactions, for designing the synthesis of organic molecules and of combinatorial libraries, and for furthering our understanding of enzyme-catalyzed reactions and of the metabolism of drugs.

Mathematical modeling is an indispensable tool for research and development in biotechnology and bioengineering. The formulation of kinetic models of biochemical networks depends on knowledge of the kinetic properties of the enzymes of the individual reactions. However, kinetic data acquired from experimental observations bring along uncertainties due to various experimental conditions and measurement methods. In this contribution, we propose a novel way to model the uncertainty in the enzyme kinetics and to predict quantitatively the responses of metabolic reactions to the changes in enzyme activities under uncertainty. The proposed methodology accounts explicitly for mechanistic properties of enzymes and physico-chemical and thermodynamic constraints, and is based on formalism from systems theory and metabolic control analysis. We achieve this by observing that kinetic responses of metabolic reactions depend: (i) on the distribution of the enzymes among their free form and all reactive states; (ii) on the equilibrium displacements of the overall reaction and that of the individual enzymatic steps; and (iii) on the net fluxes through the enzyme. Relying on this observation, we develop a novel, efficient Monte Carlo sampling procedure to generate all states within a metabolic reaction that satisfy imposed constraints. Thus, we derive the statistics of the expected responses of the metabolic reactions to changes in enzyme levels and activities, in the levels of metabolites, and in the values of the kinetic parameters. We present aspects of the proposed framework through an example of the fundamental three-step reversible enzymatic reaction mechanism. We demonstrate that the equilibrium displacements of the individual enzymatic steps have an important influence on kinetic responses of the enzyme. Furthermore, we derive the conditions that must be satisfied by a reversible three-step enzymatic reaction operating far away from the equilibrium in order to respond to

There exists a natural metric w.r.t. which the density dependent diffusion operator is harmonic in the sense of Eells and Sampson. A physical corollary of this statement is the property that any two regular points on the orbit of a reaction or diffusion operator can be connected by a path along which the reaction rate is constant. (author)

This paper establishes the kinematic model of the automatic parking system and analyzes the kinematic constraints of the vehicle. Furthermore, it addresses the problem that the traditional automatic parking system model fails to take the time delay into account. Firstly, based on simulation calculations, the influence of the time delay on the dynamic trajectory of a vehicle in the automatic parking system is analyzed for different lateral distances D_lateral to the target space. Secondly, on the basis of the cloud model, this paper utilizes intelligent path-tracking control, closer to human intelligent behavior, to further study the cloud-generator-based parking path tracking control method and construct a vehicle path tracking control model. Moreover, the tracking and steering control effects of the model are verified through simulation analysis. Finally, the effectiveness and timeliness of the automatic parking controller with respect to path tracking are tested through a real-vehicle experiment.
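The effect of a pure actuation delay can be sketched with a kinematic bicycle model in which the steering command reaches the wheels only after a dead time; the wheelbase, speed, and demo steering input below are assumptions for illustration, not the paper's parameters.

```python
import math
from collections import deque

L = 2.7      # assumed wheelbase (m)
DT = 0.01    # integration step (s)

def simulate(delay_s, duration_s=5.0, v=-1.0):
    """Kinematic model of a reversing vehicle; the steering command
    reaches the wheels only after `delay_s` seconds (FIFO delay line)."""
    x = y = theta = 0.0
    lag = deque([0.0] * max(1, int(delay_s / DT)))
    for k in range(int(duration_s / DT)):
        t = k * DT
        command = 0.5 if t > 1.0 else 0.0   # demo input: steer after 1 s
        lag.append(command)
        delta = lag.popleft()               # delayed steering angle (rad)
        x += v * math.cos(theta) * DT
        y += v * math.sin(theta) * DT
        theta += v / L * math.tan(delta) * DT
    return x, y, theta

ideal = simulate(delay_s=0.0)
delayed = simulate(delay_s=0.4)
# The 0.4 s delay shifts the final pose -- the error a delay-aware
# path-tracking controller must compensate for.
```

Comparing the two end poses quantifies the trajectory error introduced by the delay alone.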

Periodic density functional theory is used to study the dehydrogenation of formaldehyde (CH(2)O) on the Ag(111) surface and in the presence of adsorbed oxygen or hydroxyl species. Thermodynamic and kinetic parameters of elementary surface reactions have been determined. The dehydrogenation of CH(2)O on clean Ag(111) is thermodynamically and kinetically unfavorable. In particular, the activation energy for the first C-H bond scission of adsorbed CH(2)O (25.8 kcal mol(-1)) greatly exceeds the desorption energy for molecular CH(2)O (2.5 kcal mol(-1)). Surface oxygen promotes the destruction of CH(2)O through the formation of CH(2)O(2), which readily decomposes to CHO(2) and then in turn to CO(2) and adsorbed hydrogen. Analysis of site selectivity shows that CH(2)O(2), CHO(2), and CHO are strongly bound to the surface through the bridge sites, whereas CO and CO(2) are weakly adsorbed with no strong preference for a particular surface site. Dissociation of CO and CO(2) on the Ag(111) surface is highly activated and therefore unfavorable with respect to their molecular desorption.

We have discovered a new and highly competitive product channel in the unimolecular decay process for small Criegee intermediates, CH2OO and anti/syn-CH3C(H)OO, occurring by intramolecular insertion reactions via a roaming-like transition state (TS) based on quantum-chemical calculations. Our results show that in the decomposition of CH2OO and anti-CH3C(H)OO, the predominant paths directly produce cis-HC(O)OH and syn-CH3C(O)OH acids with >110 kcal/mol exothermicities via loose roaming-like insertion TSs involving the terminal O atom and the neighboring C–H bonds. For syn-CH3C(H)OO, the major decomposition channel occurs by abstraction of a H atom from the CH3 group by the terminal O atom, producing CH2C(H)O–OH. At 298 K, the intramolecular insertion process in CH2OO was found to be 600 times faster than the commonly assumed ring-closing reaction.

on a commercial CoMo catalyst, and a simple kinetic model is presented. Hydrogenation of fused aromatic rings is known to be fast, and it is possible that the reaction rates are limited by either internal or external mass transfer. An experiment conducted at industrial temperatures and pressure, using...... naphthalene as a model compound, has shown that intra-particle diffusion resistance is likely to limit the reaction rate. In order to produce ULSD it is necessary to remove sulfur from some of the most refractive sulfur compounds, such as sterically hindered dibenzothiophenes. Basic nitrogen compounds...... are known to inhibit certain hydrotreating reactions. Experimental results are presented, showing the effect of 3 different nitrogen compounds, acridine, 1,4-dimethylcarbazole, and 3-methylindole, on the hydrodesulfurization of a real feed and of a model compound, 4,6-dimethyldibenzothiophene. It is shown...

The effect of heating rates on Ni(V)/Al NanoFoils® was investigated with transmission electron microscopy (TEM). The Ni(V)/Al foils were subjected to heating using differential scanning calorimetry (DSC), in-situ TEM, or an electric pulse. Local chemical analysis was carried out using energy-dispersive X-ray spectroscopy (EDS). Phase analysis was done with X-ray diffraction (XRD) and selected-area electron diffraction (SAED). The experiments showed that slow heating in the DSC results in the development of separate exothermic effects at ∼230 °C, ∼280 °C and ∼390 °C, corresponding to precipitation of the Al3Ni, Al3Ni2 and NiAl phases, respectively, i.e. as in vanadium-free Ni/Al multilayers. Further heating to 700 °C yielded a single-phase NiAl foil. The average grain size (g.s.) of the NiAl phase produced in the DSC heat-treated foil was comparable with the Ni(V)/Al multilayer period (∼50 nm), whereas in the case of the reaction initiated with an electric pulse the g.s. was in the micrometer range. Upon slow heating vanadium tends to segregate to zones parallel to the original multilayer internal interfaces, while in the SHS process vanadium-rich phases precipitate at grain boundaries of the NiAl phase. - Highlights: • Peaks in DSC heating of Ni(V)/Al were explained by in-situ TEM observations. • Nucleation of Al3Ni, Al3Ni2 and NiAl at slow heating of Ni(V)/Al was documented. • Near-surface NiAl obtained from NanoFoil shows Ag precipitates at grain boundaries.

Musculoskeletal models of the cervical spine commonly represent neck muscles with straight paths. However, straight lines do not best represent the natural curvature of muscle paths in the neck, because the paths are constrained by bone and soft tissue. The purpose of this study was to estimate moment arms of curved and straight neck muscle paths using different moment arm calculation methods: tendon excursion, geometric, and effective torque. Curved and straight muscle paths were defined for two subject-specific cervical spine models derived from in vivo magnetic resonance images (MRI). Modeling neck muscle paths with curvature provides significantly different moment arm estimates than straight paths for 10 of 15 neck muscles. Using straight lines to model muscle paths can lead to overestimating neck extension moment. However, moment arm methods for curved paths should be investigated further, as different methods of calculating moment arm can provide different estimates.

Under the main theme “prediction-oriented modeling in business research by means of partial least squares path modeling” (PLS), the special issue presents 17 papers. Most contributions include content from presentations at the 2nd International Symposium on Partial Least Squares Path Modeling: The…

This paper proposes a method to predict line-of-sight (LOS) path loss in buildings. We performed measurements in two different types of buildings at a frequency of 1.8 GHz and propose new upper- and lower-bound path loss models, which depend on the maximum and minimum values of the sampled path loss data. This makes our models limit path loss within the boundary lines. The models include time-variant effects, such as moving people and cars in parking areas, whose influence on wave propagation is very high. The results have shown that the proposed models will be useful for the system and cell design of indoor wireless communication systems.

We focus on a reaction-diffusion approach proposed recently for experiments on combustion processes, where the heat released by combustion follows first-order reaction kinetics. This case allows us to perform an exhaustive analytical study. Specifically, we obtain the exact expressions for the speed of the thermal pulses, their maximum temperature and the condition of self-sustenance. Finally, we propose two generalizations of the model, namely, the case of several reactants burning together, and that of time-delayed heat conduction. We find an excellent agreement between our analytical results and simulations.
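As a rough illustration of the kind of system this abstract analyzes, the sketch below integrates a 1D reaction-diffusion equation in which heat release follows first-order kinetics above an ignition temperature, and tracks the position of the self-sustained thermal pulse. All parameters and the ignition-threshold form are our own illustrative choices, not the paper's model.

```python
import numpy as np

# 1D reaction-diffusion combustion sketch with first-order kinetics
# (all parameters illustrative, not taken from the abstract's paper).
D, k, q, T_ign = 1.0, 1.0, 5.0, 0.5     # diffusivity, rate, heat release, ignition temp
L, N, dt, steps = 400.0, 4000, 0.004, 15000
dx = L / N
x = np.arange(N) * dx
T = np.zeros(N); T[:20] = 2.0            # hot ignition zone on the left
c = np.ones(N)                            # unburned reactant fraction

front = []
for n in range(1, steps + 1):
    lap = (np.roll(T, -1) - 2 * T + np.roll(T, 1)) / dx**2
    lap[0] = lap[-1] = 0.0                # crude insulated boundaries
    burn = k * c * (T > T_ign)            # first-order kinetics where ignited
    T = T + dt * (D * lap + q * burn)
    c = c - dt * burn
    if n % 1500 == 0:
        burned = np.count_nonzero(c < 0.5)        # burned zone grows from the left
        front.append(x[burned - 1] if burned else 0.0)

# The front position grows roughly linearly in time -> constant pulse speed.
speeds = np.diff(front)
```

Fitting a line to the recorded front positions gives an estimate of the pulse speed that the paper derives analytically.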

The RWDMES is a tool for modeling the disturbances imparted on spacecraft by spinning reaction wheels. Reaction wheels are usually the largest disturbance source on a precision pointing spacecraft, and can be the dominating source of pointing error. Accurate knowledge of the disturbance environment is critical to accurate prediction of the pointing performance. In the past, it has been difficult to extract an accurate wheel disturbance model since the forcing mechanisms are difficult to model physically, and the forcing amplitudes are filtered by the dynamics of the reaction wheel. RWDMES captures the wheel-induced disturbances using a hybrid physical/empirical model that is extracted directly from measured forcing data. The empirical models capture the tonal forces that occur at harmonics of the spin rate, and the broadband forces that arise from random effects. The empirical forcing functions are filtered by a physical model of the wheel structure that includes spin-rate-dependent moments (gyroscopic terms). The resulting hybrid model creates a highly accurate prediction of wheel-induced forces. It accounts for variation in disturbance frequency, as well as the shifts in structural amplification by the whirl modes, as the spin rate changes. This software provides a point-and-click environment for producing accurate models with minimal user effort. Where conventional approaches may take weeks to produce a model of variable quality, RWDMES can create a demonstrably high-accuracy model in two hours. The software consists of a graphical user interface (GUI) that enables the user to specify all analysis parameters, to evaluate analysis results and to iteratively refine the model. Underlying algorithms automatically extract disturbance harmonics, initialize and tune harmonic models, and initialize and tune broadband noise models. The component steps are described in the RWDMES user's guide and include: converting time domain data to waterfall PSDs (power spectral densities)…

Spaceborne synthetic aperture radar (SAR) measurements of the Earth's surface depend on electromagnetic waves that are subject to atmospheric path delays, in turn affecting geolocation accuracy. The atmosphere influences radar signal propagation by modifying its velocity and direction, effects which can be modeled. We use TerraSAR-X (TSX) data to investigate improvements in the knowledge of the scene geometry. To precisely estimate atmospheric path delays, we analyse the signal return of four corner reflectors with accurately surveyed positions (based on differential GPS), placed at different altitudes yet with nearly identical slant ranges to the sensor. The comparison of multiple measurements with path delay models under these geometric conditions also makes it possible to evaluate the corrections for the atmospheric path delay made by the TerraSAR processor and to propose possible improvements.

Constrained Optimum Path (COP) problems appear in many real-life applications, especially on communication networks. Some of these problems have been considered and solved by specific techniques, which are usually difficult to extend. In this paper, we introduce a novel local search modeling for solving some COPs by local search. The modeling features compositionality, modularity and reuse, and strengthens the benefits of Constraint-Based Local Search. We also apply the modeling to the edge-disjoint paths problem (EDP). We show that side constraints can easily be added in the model. Computational results show the significance of the approach.

PLS Path Modeling (PLS-PM) differs from covariance-based SEM in that PLS-PM uses an approach based on variance or components; therefore, PLS-PM is also known as component-based SEM. Multiblock Partial Least Squares (MBPLS) is a method in PLS regression which can be used in PLS Path Modeling, known as Multiblock PLS Path Modeling (MBPLS-PM). This method uses an iterative procedure in its algorithm. This research aims to modify MBPLS-PM with a Back Propagation Neural Network approach. The result is that the MBPLS-PM algorithm can be modified using the Back Propagation Neural Network approach to replace the iterative process in the backward and forward steps used to obtain the matrix t and the matrix u in the algorithm. With this modification of the MBPLS-PM algorithm, the model parameters obtained are not significantly different from those obtained by the original MBPLS-PM algorithm.

We have extended the micromechanics-based analytical (M-A) model to make it capable of simulating Nuozhadu rockfill material (NRFM) under different stress paths. Two types of drained triaxial tests on NRFM were conducted, namely, stress paths of constant stress ratio (CSR) and complex stress paths with transitional features. The model was improved by considering the interparticle parameter variation with the unloading-reloading cycles and the effect of the stress transition path. The evolution of local dilatancy at interparticle planes due to an externally applied load is also discussed. Compared with Duncan-Chang's E-u and E-B models, the improved model can not only better describe the deformation properties of NRFM under stress path loading, but also capture the volumetric strain changing from dilatancy to contractancy with increasing transitional confining pressures. All simulations have demonstrated that the proposed M-A model is capable of modelling the mechanical behaviour of NRFM in the dam.

A model has been devised to calculate shock Hugoniots and release paths off the Hugoniots for multicomponent rocks containing silicate, carbonate, and water. Hugoniot equations of state are constructed from relatively simple measurements of rock properties including bulk density, grain density of the silicate component, and weight fractions of water and carbonate. Release paths off the composite Hugoniot are calculated by mixing release paths off the component Hugoniots according to their weight fractions. If the shock imparts sufficient energy to the component to cause vaporization, a gas equation of state is used to calculate the release paths. For less energetic shocks, the rock component will unload like a solid or liquid, taking into account the irreversible removal of air-filled porosity

For qualitative modeling and analysis, we discuss a general qualitative abstraction of power transmission variables (flow and effort) for elements of flow paths, which includes information on resistance, net flow, permissible directions of flow, and qualitative potential. Each type of component model has flow-related variables and an associated internal flow map, connected into an overall flow network of the system. For storage devices, the implicit power transfer to the environment is represented by "virtual" circuits that include an environmental junction. A heterogeneous aggregation method simplifies the path structure. A method determines global flow-path changes during dynamic simulation and analysis, and identifies corresponding local flow state changes that are effects of global configuration changes. Flow-path determination is triggered by any change in a flow-related device variable in a simulation or analysis. Components (path elements) that may be affected are identified, and flow-related attributes favoring flow in the two possible directions are collected for each of them. Next, flow-related attributes are determined for each affected path element, based on possibly conflicting indications of flow direction. Spurious qualitative ambiguities are minimized by using relative magnitudes and permissible directions of flow, and by favoring flow sources over effort sources when comparing flow tendencies. The results are output to local flow states of affected components.

We model the economically optimal dynamic oil production decisions for seven production units (fields) on Alaska's North Slope. We use adjustment cost and discount rate to calibrate the model against historical production data, and use the calibrated model to simulate the impact of tax policy on production rate. We construct field-specific cost functions from average cost data and an estimated inverse production function, which incorporates engineering aspects of oil production into our economic modeling. Producers appear to have approximated dynamic optimality. Consistent with prior research, we find that changing the tax rate alone does not change the economically optimal oil production path, except for marginal fields that may cease production. Contrary to prior research, we find that the structure of tax policy can be designed to affect the economically optimal production path, but at a cost in net social benefit. - Highlights: ► We model economically optimal dynamic oil production decisions for 7 Alaska fields. ► Changing tax rate alone does not alter the economically optimal oil production path. ► But change in tax structure can affect the economically optimal oil production path. ► Tax structures that modify the optimal production path reduce net social benefit. ► Field-specific cost functions and inverse production functions are estimated

Biphasic reaction systems are composed of immiscible aqueous and organic liquid phases in which reactants, products, and catalysts are partitioned. These biphasic conditions point to novel synthesis paths, higher yields, and faster reactions, as well as facilitating product separation. Biphasic systems have a broad range of applications, such as the manufacture of petroleum-based chemicals, pharmaceuticals, and agro-bio products. Major considerations in the design and analysis of biphasic reaction systems are physical and chemical equilibria, kinetic mechanisms, and reaction rates. The primary contribution of this thesis is the development of a systematic modelling framework for the biphasic reaction system. The developed framework consists of three modules describing phase equilibria, reactions and mass transfer, and material balances of such processes. Correlative and predictive thermodynamic…

Most Japanese tornadoes have been reported near the coastline, where all Japanese nuclear power plants are located. It is necessary for Japanese electric power companies to assess tornado risks at the plants according to a new regulation issued in 2013. The new regulatory guide exemplifies a tornado hazard model which cannot consider the variation of tornado intensity along the path length and consequently produces conservative risk estimates. The guide also recommends the long narrow strip area along the coastline, with a width of 5-10 km, as the region of interest, although the model tends to estimate inadequate wind speeds due to its limit of application. The purpose of this study is to propose a new tornado hazard model which can be applied to the long narrow strip area. The new model can also consider the variation of tornado intensity along the path length and across the path width. (author)

A novel empirical path-loss model for a wireless indoor short-range office environment in the 4.3–7.3 GHz band is presented. The model is developed based on experimental data sampled in 30 office rooms in both line-of-sight (LOS) and non-LOS (NLOS) scenarios. The model characterizes path loss versus distance, with a Gaussian random variable X accounting for shadow fading, using linear regression. The path-loss exponent n is fitted as a power function of frequency, as is the standard deviation σ of X, which is modeled as a frequency-dependent Gaussian variable. The presented work should be useful for research on wireless channel characteristics in universal indoor short-distance environments in the Internet of Things (IoT).
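The log-distance regression described here can be sketched as follows. The data below are synthetic and the parameter values are placeholders, not the measured 4.3–7.3 GHz office results; the sketch only shows how linear regression recovers the path-loss exponent n and the shadowing deviation σ.

```python
import numpy as np

# Log-distance path-loss model with log-normal shadowing:
#   PL(d) = PL(d0) + 10 n log10(d/d0) + X_sigma,  X_sigma ~ N(0, sigma^2).
# Synthetic data with illustrative parameters (not the paper's measurements).
rng = np.random.default_rng(0)
d0, n_true, pl0, sigma = 1.0, 1.8, 45.0, 3.0       # ref. distance [m], exponent, dB
d = rng.uniform(1.0, 30.0, 500)                     # Tx-Rx separations [m]
pl = pl0 + 10 * n_true * np.log10(d / d0) + rng.normal(0.0, sigma, d.size)

# Linear regression of PL against 10*log10(d/d0) recovers n (slope) and
# PL(d0) (intercept); the residual standard deviation estimates sigma.
X = 10 * np.log10(d / d0)
n_hat, pl0_hat = np.polyfit(X, pl, 1)
sigma_hat = np.std(pl - (pl0_hat + n_hat * X))
```

Repeating the fit per frequency bin would yield the frequency dependence of n and σ that the abstract models with power functions.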

Humans are usually accurate when estimating heading or path from optic flow, even in the presence of independently moving objects (IMOs) in an otherwise rigid scene. To invoke significant biases in perceived heading, IMOs have to be large and obscure the focus of expansion (FOE) in the image plane, which is the point of approach. For the estimation of path during curvilinear self-motion no significant biases were found in the presence of IMOs. What makes humans robust in their estimation of heading or path using optic flow? We derive analytical models of optic flow for linear and curvilinear self-motion using geometric scene models. Heading biases of a linear least squares method, which builds upon these analytical models, are large, larger than those reported for humans. This motivated us to study segmentation cues that are available from optic flow. We derive models of accretion/deletion, expansion/contraction, acceleration/deceleration, local spatial curvature, and local temporal curvature, to be used as cues to segment an IMO from the background. Integrating these segmentation cues into our method of estimating heading or path now explains human psychophysical data and extends, as well as unifies, previous investigations. Our analysis suggests that various cues available from optic flow help to segment IMOs and, thus, make humans' heading and path perception robust in the presence of such IMOs. PMID:23554589

Wargaming is a process of thinking through and visualizing events that could occur during a possible course of action. Over the past 200 years, wargaming has matured into a set of formalized processes. One area of growing interest is the application of agent-based modeling. Agent-based modeling and its additional supporting technologies have the potential to introduce a third-generation wargaming capability to the Army, creating a positive overmatch decision-making capability. In its simplest form, agent-based modeling is a computational technique that helps the modeler understand and simulate how the "whole of a system" responds to change over time. It provides a decentralized method of looking at situations where individual agents are instantiated within an environment, interact with each other, and are empowered to make their own decisions. However, this technology is not without its own risks and limitations. This paper explores a technology roadmap, identifying research topics that could realize agent-based modeling within a tactical wargaming context.

Using the recently extended 2D improved Particle Acceleration and Transport in the Heliosphere (iPATH) model, we model an example gradual solar energetic particle event as observed at multiple locations. Protons and ions that are energized via the diffusive shock acceleration mechanism are followed at a 2D coronal mass ejection-driven shock where the shock geometry varies across the shock front. The subsequent transport of energetic particles, including cross-field diffusion, is modeled by a Monte Carlo code that is based on a stochastic differential equation method. Time intensity profiles and particle spectra at multiple locations and different radial distances, separated in longitudes, are presented. The results shown here are relevant to the upcoming Parker Solar Probe mission.

This study introduces the use of a modified Longley-Rice irregular terrain model and digital elevation data representative of an analogue lunar site for the prediction of RF path loss over the lunar surface. The results are validated by theoretical models and past Apollo studies. The model is used to approximate the path loss deviation from theoretical attenuation over a reflecting sphere. Analysis of the simulation results provides statistics on the fade depths for frequencies of interest, and correspondingly a method for determining the maximum range of communications for various coverage confidence intervals. Communication system engineers and mission planners are provided a link margin and path loss policy for communication frequencies of interest.

method, suitable for separation and purification of thermally unstable materials whose design and analysis can be efficiently performed through reliable model-based techniques. This paper presents a generalized model for short-path evaporation and highlights its development, implementation and solution...

The purpose of this research was to design and test a model of classroom technology integration in the context of K-12 schools. The proposed multilevel path analysis model includes teacher, contextual, and school related variables on a teacher's use of technology and confidence and comfort using technology as mediators of classroom technology…

I perform a change of field variables in the Schwinger model using the non-invariance of the path integral measure under γ₅ transformations. The known equivalence of the model with a bosonic field theory and the Kogut-Susskind dipole mechanism is then derived. (author)

There has been significant progress in ab initio approaches to the structure of light nuclei. Starting from realistic two- and three-nucleon interactions, the ab initio no-core shell model (NCSM) can predict low-lying levels in p-shell nuclei. It is a challenging task to extend ab initio methods to describe nuclear reactions. In this contribution, we present a brief overview of the NCSM with examples of recent applications as well as the first steps taken toward nuclear reaction applications. In particular, we discuss cross section calculations of p+⁶Li and ⁶He+p scattering as well as a calculation of the astrophysically important ⁷Be(p,γ)⁸B S-factor.

Reduction in traffic congestion and in the overall number of accidents, especially within the last decade, can be attributed to the enormous progress in active safety. Vehicle path-following control in the presence of driver commands can be regarded as one of the important issues in the development of vehicle active safety systems and as a more realistic treatment of the vehicle path-tracking problem. In this paper, an integrated driver/DYC control system is presented that regulates the steering angle and yaw moment, considering the driver's previewed path. Thus, the driver preview distance, the heading error and the lateral deviation between the vehicle and the desired path are used as inputs. The controller then determines and applies a corrective steering angle and a direct yaw moment to make the vehicle follow the desired path. A PID controller with optimized gains is used for the control of the integrated driver/DYC system. A Genetic Algorithm, as an intelligent optimization method, is utilized to adapt the PID controller gains to various working situations. The proposed integrated driver/DYC controller is examined on lane-change maneuvers, and the sensitivity of the control system is investigated through changes in the driver model and vehicle parameters. Simulation results show the pronounced effectiveness of the controller in vehicle path following and stability.
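A minimal sketch of PID-based path following in the same spirit: the controller turns the lateral deviation into a corrective steering angle. The plant below is a toy kinematic lateral model, not the paper's integrated driver/DYC system, and the gains are hand-picked rather than GA-optimized.

```python
# Toy PID lateral controller: drive the lateral deviation e to zero.
# Plant model and gains are illustrative assumptions, not the paper's.
dt, v = 0.01, 20.0                  # time step [s], forward speed [m/s]
kp, ki, kd = 0.08, 0.01, 0.05       # hand-picked PID gains

e, heading = 5.0, 0.0               # initial lateral deviation [m], heading error [rad]
integral, prev_e = 0.0, 5.0
for _ in range(2000):               # 20 s of simulation
    integral += e * dt
    deriv = (e - prev_e) / dt
    prev_e = e
    steer = -(kp * e + ki * integral + kd * deriv)  # corrective steering angle
    heading += steer * dt                            # toy yaw response
    e += v * heading * dt                            # lateral-error kinematics
# after 20 s the deviation e has been driven close to zero
```

In the paper's setting the same PID structure acts on preview-based errors and its gains are tuned by a Genetic Algorithm per working situation.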

In modern supply chain management systems, Radio Frequency IDentification (RFID) technology has become an indispensable sensor technology, and massive RFID data sets are expected to become commonplace. More and more space and time are needed to store and process such huge amounts of RFID data, and there is an increasing realization that the existing approaches cannot satisfy the requirements of RFID data management. In this paper, we present a split-path schema-based RFID data storage model. With a data separation mechanism, the massive RFID data produced in supply chain management systems can be stored and processed more efficiently. Then a tree structure-based path splitting approach is proposed to intelligently and automatically split the movement paths of products. Furthermore, based on the proposed new storage model, we design the relational schema to store the path information and time information of tags, and some typical query templates and SQL statements are defined. Finally, we conduct various experiments to measure the effect and performance of our model and demonstrate that it performs significantly better than the baseline approach in both data expression and path-oriented RFID data query performance.
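The idea of tree-based splitting of movement paths can be illustrated with a small prefix-tree (trie) sketch; the class and method names here are our own hypothetical choices, not the paper's schema. Paths sharing a prefix of locations share trie nodes, so common sub-paths are stored once.

```python
# Illustrative prefix tree for product movement paths (hypothetical names,
# not the paper's storage schema).
class PathNode:
    def __init__(self, location):
        self.location = location
        self.children = {}      # next location -> PathNode
        self.tag_count = 0      # number of tags whose path ends at this node

class PathTrie:
    def __init__(self):
        self.root = PathNode(None)

    def insert(self, path):
        """path: sequence of location readings, e.g. RFID reader IDs."""
        node = self.root
        for loc in path:
            node = node.children.setdefault(loc, PathNode(loc))
        node.tag_count += 1

    def count_with_prefix(self, prefix):
        """Count stored paths that start with the given location prefix."""
        node = self.root
        for loc in prefix:
            if loc not in node.children:
                return 0
            node = node.children[loc]
        total, stack = 0, [node]
        while stack:                     # sum tag counts in the subtree
            n = stack.pop()
            total += n.tag_count
            stack.extend(n.children.values())
        return total

trie = PathTrie()
trie.insert(["factory", "warehouse", "store_A"])
trie.insert(["factory", "warehouse", "store_B"])
trie.insert(["factory", "port"])
# count_with_prefix(["factory", "warehouse"]) -> 2
```

A relational mapping would then store each trie node once and reference it from per-tag records, which is the space saving the abstract targets.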

A model of d-pairing for superconducting and superfluid Fermi-systems has been formulated within the path integration technique. By path integration over "fast" and "slow" Fermi-fields, the action functional (which determines all properties of the model system) has been obtained. This functional can be used for the determination of different superconducting (superfluid) states, for calculation of the transition temperatures for these states, and for calculation of the collective mode spectrum for HTSC as well as for heavy-fermion superconductors.

The structureborne noise path of a six passenger twin-engine aircraft is analyzed. Models of the wing and fuselage structures as well as the interior acoustic space of the cabin are developed and used to evaluate sensitivity to structural and acoustic parameters. Different modeling approaches are used to examine aspects of the structureborne path. These approaches are guided by a number of considerations including the geometry of the structures, the frequency range of interest, and the tractability of the computations. Results of these approaches are compared with experimental data.

The mechanism of the SN2 model glycosylation reaction between ethanol, 1,2-ethanediol and methoxymethanol has been studied theoretically at the B3LYP/6-311+G(d,p) computational level. Three different types of reactions have been explored: (i) the exchange of hydroxyl groups between these model systems; (ii) the base-catalysis reactions by combination of the substrates as glycosyl donors (neutral species) and acceptors (enolate species); and (iii) the effect on the reaction profile of an explicit H2O molecule in the reactions considered in (ii). The reaction force, the electronic chemical potential and the reaction electronic flux have been characterized for the reaction path in each case. Energy calculations show that methoxymethanol is the worst glycosyl donor model among the ones studied here, while 1,2-ethanediol is the best, having the lowest activation barrier of 74.7 kJ mol(-1) for the reaction between it and ethanolate as the glycosyl acceptor model. In general, the presence of direct interactions between the atoms involved in the penta-coordinated TS increases the activation energies of the processes.

The problem of time is a central feature of quantum cosmology: differing from ordinary quantum mechanics, in cosmology there is nothing "outside" the system to play the role of a clock, and this makes it difficult to obtain a consistent quantization. A possible solution is to assume that a subset of the variables describing the state of the universe can serve as a clock for the rest of the system. Following this line, in this book a new proposal, consisting in the prior identification of time by means of gauge fixing, is applied to the quantization of homogeneous cosmological models.

Background: Receptors and scaffold proteins usually possess a high number of distinct binding domains, inducing the formation of large multiprotein signaling complexes. For combinatorial reasons the number of distinguishable species grows exponentially with the number of binding domains and can easily reach several millions. Even by including only a limited number of components and binding domains, the resulting models are very large and hardly manageable. A novel model reduction technique allows the significant reduction and modularization of these models. Results: We introduce methods that extend and complete the previously introduced approach. For instance, we provide techniques to handle the formation of multi-scaffold complexes as well as receptor dimerization. Furthermore, we discuss a new modeling approach that allows the direct generation of exactly reduced model structures. The developed methods are used to reduce a model of EGF and insulin receptor crosstalk comprising 5,182 ordinary differential equations (ODEs) to a model with 87 ODEs. Conclusion: The methods presented in this contribution significantly enhance the available methods to exactly reduce models of combinatorial reaction networks.

In this paper we study balanced growth path solutions of a Boltzmann mean field game model proposed by Lucas and Moll [15] to model knowledge growth in an economy. Agents can either increase their knowledge level by exchanging ideas in learning events or by producing goods with the knowledge they already have. The existence of balanced growth path solutions implies exponential growth of the overall production in time. We prove existence of balanced growth path solutions if the initial distribution of individuals with respect to their knowledge level satisfies a Pareto-tail condition. Furthermore we give first insights into the existence of such solutions if in addition to production and knowledge exchange the knowledge level evolves by geometric Brownian motion.
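A minimal sketch of the knowledge-level dynamics mentioned at the end, assuming plain geometric Brownian motion dz = μz dt + σz dW with illustrative parameters (not calibrated to the Lucas-Moll model):

```python
import numpy as np

# Geometric Brownian motion for individual knowledge levels z, simulated with
# the exact log-normal update; parameters are illustrative only.
rng = np.random.default_rng(1)
mu, sigma, dt, steps, agents = 0.02, 0.1, 0.01, 1000, 10000
z = np.ones(agents)
for _ in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt), agents)
    z *= np.exp((mu - 0.5 * sigma**2) * dt + sigma * dW)
# E[z(t)] = exp(mu*t); at t = 10 this is exp(0.2)
```

The exponential growth of the mean knowledge level mirrors the exponential growth of overall production along a balanced growth path.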

The car, as a mode of transportation, is inseparable from technological development. Over the past ten years there has been much research and development on lane keeping systems (LKS), which automatically control the steering to keep the vehicle, especially a car, on track. Such systems can be developed for unmanned cars. An unmanned car requires navigation, guidance and control able to direct the vehicle to move toward the desired path. The guidance system is represented by using a Dubins path, which will be controlled by using Model Predictive Control. The control objective is to regulate the car's movement, represented by a dynamic lateral motion model, so that the car follows the path appropriately. Simulations of the control on four types of trajectories generate the values for the steering angle and the steering angle changes within the specified interval.

Many disciplines and scientific fields have undergone a computational turn in the past several decades. This paper analyzes this sort of turn by investigating the case of computational quantum chemistry. The main claim is that the transformation from quantum to computational quantum chemistry involved changes in three dimensions. First, on the side of instrumentation, small computers and a networked infrastructure took over the lead from centralized mainframe architecture. Second, a new conception of computational modeling became feasible and assumed a crucial role. And third, the field of computational quantum chemistry became organized in a market-like fashion, and this market is much bigger than the community of quantum theory experts. These claims will be substantiated by an investigation of the so-called density functional theory (DFT), arguably the pivotal theory in the turn to computational quantum chemistry around 1990.

This paper covers the development of stage-by-stage and parallel flow path compressor modeling approaches for a Variable Cycle Engine. The stage-by-stage compressor modeling approach is an extension of a technique for lumped volume dynamics and performance characteristic modeling. It was developed to improve the accuracy of axial compressor dynamics over lumped volume dynamics modeling. The stage-by-stage compressor model presented here is formulated into a parallel flow path model that includes both axial and rotational dynamics. This is done to enable the study of compressor and propulsion system dynamic performance under flow distortion conditions. The approaches utilized here are generic and should be applicable for the modeling of any axial flow compressor design.
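
The lumped-volume idea behind stage-by-stage modeling can be sketched with a single inter-stage control volume (an illustrative toy, not the paper's model; all parameter values are assumed): the volume pressure integrates the mass-flow imbalance between adjacent stages.

```python
# Single lumped control volume: dP/dt = (gamma*R*T/V) * (mdot_in - mdot_out)
R, T, V, gamma = 287.0, 300.0, 0.05, 1.4   # air properties, 50 L volume (assumed)
k_valve = 1e-6                             # linear outflow coefficient [kg/(s*Pa)] (assumed)
p_out = 101325.0                           # back pressure [Pa]
mdot_in = 0.05                             # inlet mass flow [kg/s]

p, dt = p_out, 1e-4
for _ in range(200000):                    # 20 s of simulated time, explicit Euler
    mdot_out = k_valve * (p - p_out)
    p += (gamma * R * T / V) * (mdot_in - mdot_out) * dt

p_ss = p_out + mdot_in / k_valve           # analytic steady state of this linear model
```

Chaining several such volumes, one per stage, with stage maps supplying the inter-stage mass flows, is the essence of the stage-by-stage formulation.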

Small biomolecular systems are inherently stochastic. Indeed, fluctuations of molecular species are substantial in living organisms and may result in significant variation in cellular phenotypes. The chemical master equation (CME) is the most detailed mathematical model that can describe stochastic behaviors. However, because of its complexity the CME has been solved for only a few, very small reaction networks. As a result, the contribution of CME-based approaches to biology has been very limited. In this review we discuss the approach of solving the CME by a set of differential equations of probability moments, called moment equations. We present different approaches to produce and to solve these equations, emphasizing the use of factorial moments and the zero information entropy closure scheme. We also provide information on the stability analysis of stochastic systems. Finally, we speculate on the utility of CME-based modeling formalisms, especially in the context of synthetic biology efforts. (topical review)
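
For a linear birth-death network the moment equations derived from the CME close exactly, so no closure scheme is needed; a minimal sketch with assumed rates (not taken from the review):

```python
# Birth-death process: production at rate k, degradation at rate g per molecule.
# Moment equations derived from the CME:
#   d<n>/dt = k - g*<n>
#   dVar/dt = k + g*<n> - 2*g*Var   (closes exactly; steady state is Poisson)
k, g = 20.0, 2.0          # assumed rates
m, var = 0.0, 0.0         # mean and variance, starting from zero molecules
dt = 1e-4
for _ in range(100000):   # integrate 10 time units with explicit Euler
    dm = k - g * m
    dvar = k + g * m - 2.0 * g * var
    m += dm * dt
    var += dvar * dt
```

Both moments converge to k/g, the Poisson fixed point where the variance equals the mean; nonlinear networks couple lower moments to higher ones, which is where closure schemes such as the entropy closure enter.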

Current network simulators abstract out wireless propagation models due to the high computation requirements of realistic modeling. As such, there is still a large gap between results obtained from simulators and real-world scenarios. In this paper, we present a framework for improved path loss simulation built on top of an existing network simulation software, NS-3. Different from the conventional disk model, the proposed simulation also considers the diffraction loss computed using Epstein and Peterson's model, through the use of actual terrain elevation data, to give an accurate estimate of the path loss between a transmitter and a receiver. The drawback of high computation requirements is relaxed by offloading the computationally intensive components onto an inexpensive off-the-shelf parallel coprocessor, an NVIDIA GPU. Experiments are performed using actual terrain elevation data provided by the United States Geological Survey. Compared to the conventional CPU architecture, the experimental results show that a speedup of 20x to 42x is achieved by exploiting the parallel processing of the GPU to compute the path loss between two nodes using terrain elevation data. The results show that the path loss between two nodes is greatly affected by the terrain profile between them. Besides this, the results also suggest that the common strategy of placing the transmitter at the highest position may not always work.
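
The Epstein-Peterson construction sums single-knife-edge losses over successive obstacles. A self-contained sketch (the single-edge formula is the standard ITU-R P.526 approximation; the geometry is simplified so that edge heights are taken above the local line of sight, and the link numbers are assumed):

```python
import math

def knife_edge_loss_db(v):
    """Approximate single knife-edge diffraction loss (ITU-R P.526 form)."""
    if v <= -0.78:
        return 0.0
    return 6.9 + 20 * math.log10(math.sqrt((v - 0.1) ** 2 + 1) + v - 0.1)

def fresnel_v(h, d1, d2, lam):
    """Fresnel diffraction parameter for an edge of height h above the LOS."""
    return h * math.sqrt(2 * (d1 + d2) / (lam * d1 * d2))

def epstein_peterson_loss_db(tx, edges, rx, lam):
    """Sum single-edge losses, each edge flanked by its neighbours.

    tx, rx, edges: (distance, height) pairs; heights are measured above the
    local line of sight, a simplification of the full terrain geometry."""
    pts = [tx] + list(edges) + [rx]
    total = 0.0
    for i in range(1, len(pts) - 1):
        d1 = pts[i][0] - pts[i - 1][0]
        d2 = pts[i + 1][0] - pts[i][0]
        total += knife_edge_loss_db(fresnel_v(pts[i][1], d1, d2, lam))
    return total

# One 10 m edge halfway along a 1 km link at ~1 GHz (lambda ~ 0.3 m, assumed)
loss = epstein_peterson_loss_db((0.0, 0.0), [(500.0, 10.0)], (1000.0, 0.0), 0.3)
```

Since each edge is scored independently, the per-edge losses can be computed in parallel, which is what makes the method a natural fit for GPU offloading over a terrain profile.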

A path model was developed to assess the effects of early campaign cognitions and attitudes on media use and interpersonal communication, subsequent cognitions, attitudes, and vote. Two interpretations of possible outcomes were postulated: agenda setting, and uses and gratifications. It was argued that an agenda-setting interpretation would be…

In order to determine the status quo of PLS path modeling in international marketing research, we conducted an exhaustive literature review. An evaluation of double-blind reviewed journals through important academic publishing databases (e.g., ABI/Inform, Elsevier ScienceDirect, Emerald Insight,

Tunnel splitting in biaxial spin models is investigated with a full evaluation of the fluctuation functional integrals of the Euclidean kernel in the framework of spin-coherent-state path integrals, which leads to a magnitude of tunnel splitting quantitatively comparable with the numerical results obtained by diagonalization of the Hamilton operator. An additional factor resulting from a global time transformation, which converts the position-dependent mass to a constant one, seems to be equivalent to the semiclassical correction of the Lagrangian proposed by Enz and Schilling. A long-standing question, whether the spin-coherent-state representation of path integrals can yield an accurate tunnel splitting, is therefore resolved.

Stochastic modeling is quite powerful in science and technology. The techniques derived from this process have been used with great success in laser theory, biological systems and chemical reactions. Besides, they provide a theoretical framework for the analysis of experimental results in the field of particle diffusion in ordered and disordered materials. In this work we analyze transport processes in one-dimensional fluctuating media, i.e. media that change their state in time. This fact induces changes in the movements of the particles, giving rise to different phenomena and dynamics that are described and analyzed in this work. We present several random walk models to describe these fluctuating media; these models include state transitions governed by different dynamical processes. We also analyze the trapping problem in a lattice by means of a simple model which predicts a resonance-like phenomenon. Furthermore, we study effective diffusion processes over surfaces due to random walks in the bulk, considering different boundary conditions and transition movements, and we derive expressions that describe diffusion behaviors constrained by bulk restrictions and the dynamics of the particles. Finally, the theoretical results obtained from the models proposed in this work are compared with Monte Carlo simulations; in general, we find excellent agreement between theory and simulation.
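
A toy Monte Carlo version of a random walk in a globally fluctuating two-state medium (all rates and dwell times are assumed, and this is not one of the paper's specific models): when the medium spends equal time in each state, the effective diffusivity is the time average of the two per-state jump rates.

```python
import random

random.seed(7)
p_state = {0: 0.9, 1: 0.3}    # jump probability per step in each medium state (assumed)
dwell = 50                    # steps between global medium switches (assumed)
n_walkers, n_steps = 2000, 500

msd = 0.0
for _ in range(n_walkers):
    x, state = 0, 0
    for t in range(n_steps):
        if t > 0 and t % dwell == 0:
            state ^= 1                        # the medium changes its state in time
        if random.random() < p_state[state]:  # jump attempt succeeds
            x += random.choice((-1, 1))       # unbiased unit step
    msd += x * x
msd /= n_walkers

d_eff = (p_state[0] + p_state[1]) / 2.0       # time-averaged jump rate
```

The mean squared displacement per step approaches d_eff; more intricate fluctuation dynamics (random dwell times, state-coupled traps) change this effective coefficient, which is the kind of effect the paper analyzes.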

A general path loss model for in-room radio channels is proposed. The model is based on experimental observations of the behavior of the delay power spectrum in closed rooms. In a given closed room, the early part of the spectrum observed at different positions typically consists of a dominant component (peak) that vanishes as the transmitter-receiver distance increases, while the late part decays versus distance according to the same exponential law regardless of this distance. These observations motivate the proposed model of the delay power spectrum with an early dominant component, which allows for the prediction of path loss, mean delay, and RMS delay spread versus distance. We use measurements to validate the proposed model and we observe good agreement of the model prediction for mean delay and RMS delay spread.
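
Given a sampled delay power spectrum, mean delay and RMS delay spread are its first two normalized moments. A sketch with a purely exponential tail (decay constant assumed), for which both statistics analytically equal the decay constant:

```python
import math

def delay_moments(taus, powers):
    """Mean delay and RMS delay spread of a sampled delay power spectrum."""
    p_tot = sum(powers)
    mean = sum(t * p for t, p in zip(taus, powers)) / p_tot
    second = sum(t * t * p for t, p in zip(taus, powers)) / p_tot
    return mean, math.sqrt(second - mean * mean)

# Exponentially decaying tail with decay constant T: mean delay = T, spread = T.
T = 20e-9                                   # 20 ns decay constant (assumed)
taus = [i * 0.05e-9 for i in range(8000)]   # 0..400 ns grid
powers = [math.exp(-t / T) for t in taus]
mean, spread = delay_moments(taus, powers)
```

Adding the distance-dependent dominant early component of the proposed model shifts power toward zero delay, lowering both moments, which is exactly the distance trend the model predicts.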


Self-motion, steering, and obstacle avoidance during navigation in the real world require humans to travel along curved paths. Many perceptual models have been proposed that focus on heading, which specifies the direction of travel along straight paths, but not on path curvature, which humans accurately perceive and is critical to everyday locomotion. In primates, including humans, the dorsal medial superior temporal area (MSTd) has been implicated in heading perception. However, the majority of MSTd neurons respond optimally to spiral patterns, rather than to the radial expansion patterns associated with heading. No existing theory of curved path perception explains the neural mechanisms by which humans accurately assess path, and no functional role for spiral-tuned cells has yet been proposed. Here we present a computational model that demonstrates how the continuum of observed cells (radial to circular) in MSTd can simultaneously code curvature and heading across the neural population. Curvature is encoded through the spirality of the most active cell, and heading is encoded through the visuotopic location of the center of the most active cell's receptive field. Model curvature and heading errors fit those made by humans. Our model challenges the view that the function of MSTd is heading estimation; based on our analysis, we claim that it is primarily concerned with trajectory estimation and the simultaneous representation of both curvature and heading. In our model, temporal dynamics afford time-history in the neural representation of optic flow, which may modulate its structure. This has far-reaching implications for the interpretation of studies that assume that optic flow is, and should be, represented as an instantaneous vector field. Our results suggest that spiral motion patterns that emerge in spatio-temporal optic flow are essential for guiding self-motion along complex trajectories, and that cells in MSTd are specifically tuned to extract complex trajectory
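
The population code described above can be illustrated with idealized spiral flow templates (a toy abstraction, not the paper's neural model): each "cell" prefers a spirality angle between pure expansion (0) and pure rotation (pi/2), and the most active template recovers the stimulus spirality.

```python
import math

def spiral_flow(x, y, phi):
    """Flow vector at (x, y) for spirality phi: phi=0 is pure radial expansion,
    phi=pi/2 is pure rotation; intermediate values are spirals."""
    return (math.cos(phi) * x - math.sin(phi) * y,
            math.cos(phi) * y + math.sin(phi) * x)

def population_response(stimulus_phi, template_phis, n_points=64):
    """Dot-product response of each spiral-tuned 'cell' to the stimulus flow,
    sampled on the unit circle."""
    pts = [(math.cos(2 * math.pi * k / n_points),
            math.sin(2 * math.pi * k / n_points)) for k in range(n_points)]
    responses = []
    for tp in template_phis:
        r = 0.0
        for x, y in pts:
            sx, sy = spiral_flow(x, y, stimulus_phi)
            tx, ty = spiral_flow(x, y, tp)
            r += sx * tx + sy * ty
        responses.append(r)
    return responses

templates = [i * math.pi / 8 for i in range(9)]   # bank from radial to circular
resp = population_response(math.pi / 4, templates)
best = templates[resp.index(max(resp))]
```

On the unit circle the response reduces to cos(stimulus_phi - template_phi), so the winner-take-all readout of spirality, the model's curvature code, is exact in this idealization.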

Rapid lateral flow processes via preferential flow paths are widely accepted to play a key role in rainfall-runoff response in temperate humid headwater catchments. A quantitative description of these processes, however, is still a major challenge in hydrological research, not least because detailed information about the architecture of subsurface flow paths is often impossible to obtain at a natural site without disturbing the system. Our study combines physically based modelling and field observations with the objective to better understand how flow network configurations influence the hydrological response of hillslopes. The system under investigation is a forested hillslope with a small perennial spring in the study area Heumöser, a headwater catchment of the Dornbirnerach in Vorarlberg, Austria. In-situ point measurements of field-saturated hydraulic conductivity and dye staining experiments at the plot scale revealed that shrinkage cracks and biogenic macropores function as preferential flow paths in the fine-textured soils of the study area, and these preferential flow structures were active in fast subsurface transport of artificial tracers at the hillslope scale. For modelling of water and solute transport, we followed the approach of implementing preferential flow paths as spatially explicit structures of high hydraulic conductivity and low retention within the 2D process-based model CATFLOW. Many potential configurations of the flow path network were generated as realisations of a stochastic process informed by macropore characteristics derived from the plot-scale observations. Together with different realisations of soil hydraulic parameters, this approach results in a Monte Carlo study. The model setups were used for short-term simulation of a sprinkling and tracer experiment, and the results were evaluated against measured discharges and tracer breakthrough curves. Although both criteria were taken for model evaluation, still several model setups

Despite being a simple and commonly-applied radio optimization technique, the impact on practical network performance from base station antenna downtilt is not well understood. Most published studies based on empirical path loss models report tilt angles and performance gains that are far higher than practical experience suggests. We motivate in this paper, based on a practical LTE scenario, that the discrepancy partly lies in the path loss model, and show that a more detailed semi-deterministic model leads to both lower gains in terms of SINR, outage probability and downlink throughput … settings, including the use of electrical and/or mechanical antenna downtilt, and therefore it is possible to find multiple optimum tilt profiles in a practical case. A broader implication of this study is that care must be taken when using the 3GPP model to evaluate advanced adaptive antenna techniques…

Pathway data are important for understanding the relationships between genes, proteins and many other molecules in living organisms. Pathway gene relationships are crucial information for guidance, prediction, reference and assessment in biochemistry, computational biology, and medicine. Many well-established databases--e.g., KEGG, WikiPathways, and BioCyc--are dedicated to collecting pathway data for public access. However, the effectiveness of these databases is hindered by issues such as incompatible data formats, inconsistent molecular representations, inconsistent molecular relationship representations, inconsistent referrals to pathway names, and incomplete data coverage across different databases. In this paper, we overcome these issues through extraction, normalization and integration of pathway data from several major public databases (KEGG, WikiPathways, BioCyc, etc.). We build a database that not only hosts our integrated pathway gene relationship data for public access but also maintains the necessary updates in the long run. This public repository is named IntPath (Integrated Pathway gene relationship database for model organisms and important pathogens). Four organisms--S. cerevisiae, M. tuberculosis H37Rv, H. sapiens and M. musculus--are included in this version (V2.0) of IntPath. IntPath uses the "full unification" approach to ensure no deletion and no introduced noise in this process. Therefore, IntPath contains much richer pathway-gene and pathway-gene pair relationships and a much larger number of non-redundant genes and gene pairs than any of the single-source databases. The gene relationships of each gene (measured by average node degree) per pathway are significantly richer. The gene relationships in each pathway (measured by average number of gene pairs per pathway) are also considerably richer in the integrated pathways. Moderate manual curation is involved to remove errors and noise from the source data (e.g., the gene ID errors in WikiPathways and
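
The "full unification" merge can be sketched as a set union of orientation-free gene pairs per pathway, so nothing is deleted and duplicates collapse (the database contents below are invented for illustration, not real KEGG/WikiPathways records):

```python
def full_unification(sources):
    """Union pathway -> gene-pair mappings from several databases without
    deleting any relationship ('no deletion, no introduced noise')."""
    merged = {}
    for db in sources:
        for pathway, pairs in db.items():
            bucket = merged.setdefault(pathway, set())
            # store pairs orientation-free so (a, b) == (b, a)
            bucket.update(tuple(sorted(p)) for p in pairs)
    return merged

# Toy source databases (gene names and pairs are illustrative only)
kegg = {"glycolysis": [("HK1", "GPI"), ("GPI", "PFKL")]}
wiki = {"glycolysis": [("GPI", "HK1"), ("PFKL", "ALDOA")]}

merged = full_unification([kegg, wiki])
n_genes = len({g for pair in merged["glycolysis"] for g in pair})
```

The duplicated ("HK1", "GPI") pair collapses to one entry while the pair unique to each source survives, which is why the union carries more non-redundant genes and gene pairs than either source alone.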

A laboratory exercise for the education of students about thermal runaway reactions, based on the reaction between aluminum and hydrochloric acid as a model reaction, is proposed. In the introductory part of the exercise, the induction period and subsequent thermal runaway behavior are evaluated via a simple observation of hydrogen gas evolution and…

This paper discusses path analysis of categorical variables with logistic regression models. The total, direct and indirect effects in fully recursive causal systems are considered by using model parameters. These effects can be explained in terms of log odds ratios, uncertainty differences, and an inner product of explanatory variables and a response variable. A study on food choice of alligators as a numerical example is reanalysed to illustrate the present approach.
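
A sketch of the effect decomposition on the log-odds scale for a fully recursive system X -> M -> Y with a direct X -> Y path (all coefficients are assumed for illustration; note that in logistic systems the total effect is obtained by marginalizing over the mediator, not by multiplying path coefficients, and the difference-style decomposition used here is one convention):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def logit(p):
    return math.log(p / (1.0 - p))

# Illustrative recursive system (coefficients assumed):
a0, a1 = -1.0, 1.5            # logit P(M=1 | X)
b0, b1, b2 = -0.5, 0.8, 1.2   # logit P(Y=1 | X, M)

def p_y_given_x(x):
    """Marginalize the mediator M out of P(Y=1 | X, M)."""
    pm = sigmoid(a0 + a1 * x)
    return (1 - pm) * sigmoid(b0 + b1 * x) + pm * sigmoid(b0 + b1 * x + b2)

direct = b1                                        # conditional log odds ratio of X on Y
total = logit(p_y_given_x(1)) - logit(p_y_given_x(0))   # marginal log odds ratio
indirect = total - direct                          # difference-style decomposition
```

With positive a1 and b2 the mediated path inflates the marginal log odds ratio above the direct effect, which is the qualitative pattern such a path analysis quantifies.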

This paper addresses the statistics underlying cloudy sky radiative transfer (RT) by inspection of the distribution of the path lengths of solar photons. Recent studies indicate that this approach is promising, since it might reveal characteristics of the diffusion process underlying atmospheric radiative transfer (Pfeilsticker, 1999). Moreover, it uses an observable that is directly related to the atmospheric absorption and, therefore, of climatic relevance. However, these studies rely largely on the accuracy of the measurement of the photon path length distribution (PPD). This paper presents a refined analysis method based on high resolution spectroscopy of the oxygen A-band. The method is validated with Monte Carlo simulated atmospheric spectra. Additionally, a new method to measure the effective optical thickness of cloud layers, based on fitting the measured differential transmissions with a 1-dimensional (discrete ordinate) RT model, is presented. These methods are applied to measurements conducted during the cloud radar inter-comparison campaign CLARE'98, which supplied the detailed cloud structure information required for the further analysis. For some exemplary cases, measured path length distributions and optical thicknesses are presented and backed by detailed RT model calculations. For all cases, reasonable PPDs can be retrieved and the effects of the vertical cloud structure are found. The inferred cloud optical thicknesses are in agreement with liquid water path measurements. Key words: meteorology and atmospheric dynamics (radiative processes; instruments and techniques).


We present a comparative study of model predictive control approaches for two-wheel steering, four-wheel steering, and a combination of two-wheel steering with direct yaw moment control manoeuvres for path-following control in autonomous car vehicle dynamics systems. A single-track model, based on a linearized vehicle and tire model, is used. Based on a given trajectory, we drove the vehicle at low and high forward speeds and on low- and high-friction road surfaces in a double-lane-change scenario in order to follow the desired trajectory as closely as possible while rejecting the effects of wind gusts. We compared the controller based on both simple and complex bicycle models, without and with roll vehicle dynamics, for the different types of model predictive control manoeuvres. The simulation results showed that model predictive control gave better performance in terms of robustness to both forward speed and road surface variation in autonomous path-following control. It also demonstrated that model predictive control is useful for maintaining vehicle stability along the desired path and is able to eliminate the crosswind effect.
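
The receding-horizon logic common to all these MPC variants can be shown on a deliberately tiny error model (a toy stand-in for the paper's single-track MPC: the command simply sets the lateral closing rate, the optimizer is exhaustive search rather than a QP solver, and all numbers are assumed):

```python
import itertools

def predict_cost(e, seq, dt):
    """Predicted cost of a candidate command sequence on the toy error model."""
    cost = 0.0
    for u in seq:
        e += u * dt                  # toy model: command sets lateral closing rate
        cost += e * e + 0.01 * u * u # track error plus small control effort penalty
    return cost

def mpc_step(e, dt=0.1, horizon=3, choices=(-1.0, 0.0, 1.0)):
    # exhaustive search over the short horizon (toy stand-in for a QP solver)
    best = min(itertools.product(choices, repeat=horizon),
               key=lambda seq: predict_cost(e, seq, dt))
    return best[0]                   # apply only the first move (receding horizon)

e, dt = 2.0, 0.1                     # start 2 m off the path (assumed numbers)
for _ in range(60):
    e += mpc_step(e, dt) * dt        # closed loop: re-optimize at every step
```

Re-optimizing at every step over a short horizon is exactly what lets MPC trade off tracking error against control effort while respecting actuator limits, here represented by the discrete command set.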

The Pathways-to-Man Model was developed at Sandia National Laboratories to represent the environmental movement and human uptake of radionuclides. This model is implemented by the computer program PATH1. The purpose of this document is to present a sequence of examples to facilitate use of the model and the computer program which implements it. Each example consists of a brief description of the problem under consideration, a discussion of the data cards required to input the problem to PATH1, and the resultant program output. These examples are intended for use in conjunction with the technical report which describes the model and the computer program which implements it (NUREG/CR-1636, Vol 1; SAND78-1711). In addition, a sequence of appendices provides the following: a description of a surface hydrologic system used in constructing several of the examples, a discussion of mixed-cell models, and a discussion of selected mathematical topics related to the Pathways Model. A copy of the program PATH1 is included with the report

The equation of state (EOS) of dense matter and the neutrino mean free path (NMFP) in a neutron star have been studied by using relativistic mean field models motivated by effective field theory. It is found that the models predict too large proton fractions, although one of the models (G2) predicts an acceptable EOS. This is caused by the isovector terms. Except for G2, the other two models predict anomalous NMFPs. In order to minimize the anomaly, besides an acceptable EOS, a large M* is favorable. A model with large M* retains the regularity in the NMFP even for a small neutron fraction

Highlights: ► Estimation of critical points in Noble-gas clusters. ► Evaluation of first order saddle points or transition states. ► Construction of the reaction path for structural change in clusters. ► Use of Monte-Carlo Simulated Annealing to study structural changes. - Abstract: This paper proposes a Simulated Annealing based search to locate critical points in mixed noble gas clusters where Ne and Xe are individually doped into Ar-clusters. Using the Lennard–Jones (LJ) atomic interaction we explore the search process of transformation through the Minimum Energy Path (MEP) from one minimum energy geometry to another via a first order saddle point on the potential energy surface of the clusters. Here we compare results based on diagonalization of the full Hessian throughout the search with a quasi-gradient-only technique to search saddle points, and construct the reaction path (RP) for three sizes of doped Ar-clusters: (Ar)19Ne/Xe, (Ar)24Ne/Xe and (Ar)29Ne/Xe.
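
The Simulated Annealing ingredient can be sketched on the simplest LJ landscape, a single pair distance, whose minimum is known analytically at r = 2^(1/6)·σ (the paper's searches run on full cluster geometries and additionally target saddle points, which this sketch does not attempt):

```python
import math, random

def lj(r, eps=1.0, sigma=1.0):
    """Lennard-Jones pair potential."""
    s6 = (sigma / r) ** 6
    return 4.0 * eps * (s6 * s6 - s6)

random.seed(3)
r = 1.5                      # assumed starting separation
best_r, best_e = r, lj(r)
temp = 1.0
for _ in range(20000):
    trial = r + random.gauss(0.0, 0.05)      # local move
    if not 0.8 < trial < 3.0:                # keep the walk in a sane box
        continue
    de = lj(trial) - lj(r)
    if de < 0 or random.random() < math.exp(-de / temp):
        r = trial                            # Metropolis acceptance
        if lj(r) < best_e:
            best_e, best_r = lj(r), r        # track best-so-far configuration
    temp *= 0.9995                           # geometric cooling schedule

r_min = 2.0 ** (1.0 / 6.0)                   # analytic minimum-energy distance
```

Tracking the best-so-far configuration while cooling is what makes the anneal robust to the early high-temperature exploration; saddle-point location then needs curvature information (the Hessian or quasi-gradient techniques the paper compares).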

This paper proposes a new variable step size FXLMS algorithm with an auxiliary noise power scheduling strategy for online secondary path modeling. The step size for the secondary path modeling filter and the gain of the auxiliary noise are varied in accordance with parameters that are directly available. The proposed method has a low computational complexity. Computer simulations show that an active vibration control system with the proposed method gives much better vibration attenuation and modeling accuracy at a faster convergence rate than existing methods. National Instruments’ CompactRIO is used as an embedded processor to control simply supported beam vibration. Experimental results indicate that the vibration of the beam has been effectively attenuated. (papers)
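
The modeling filter at the core of such schemes is an adaptive FIR driven by auxiliary noise. A fixed-step LMS sketch of secondary-path identification (the paper's contribution is precisely to vary the step size and auxiliary-noise gain, which this sketch omits; the true taps and constants are assumed):

```python
import random

random.seed(1)
true_path = [0.6, -0.3, 0.1]       # unknown secondary-path FIR taps (assumed)
w = [0.0, 0.0, 0.0]                # adaptive model of the secondary path
mu = 0.05                          # fixed step size (the paper varies this)
buf = [0.0, 0.0, 0.0]              # input delay line

for _ in range(20000):
    x = random.uniform(-1.0, 1.0)  # auxiliary white noise sample
    buf = [x] + buf[:-1]
    d = sum(h * v for h, v in zip(true_path, buf))   # secondary-path output
    y = sum(h * v for h, v in zip(w, buf))           # model output
    e = d - y                                        # modeling error
    w = [wi + 2 * mu * e * vi for wi, vi in zip(w, buf)]   # LMS update
```

With noise-free observations the taps converge essentially exactly; scheduling mu and the noise gain, as the paper proposes, manages the trade-off between convergence speed and the audible/vibrational disturbance the auxiliary noise injects.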

In the last decade, Bayesian networks (BNs) have been identified as a powerful tool for human reliability analysis (HRA), with multiple advantages over traditional HRA methods. In this paper we illustrate how BNs can be used to include additional, qualitative causal paths to provide traceability. The proposed framework provides the foundation to resolve several needs frequently expressed by the HRA community. First, the developed extended BN structure reflects the causal paths found in the cognitive psychology literature, thereby addressing the need for causal traceability and a strong scientific basis in HRA. Secondly, the use of node reduction algorithms allows the BN to be condensed to a level of detail at which quantification is as straightforward as the techniques used in existing HRA. We illustrate the framework by developing a BN version of the critical data misperceived crew failure mode in the IDHEAS HRA method, which is currently under development at the US NRC. We illustrate how the model could be quantified with a combination of expert probabilities and information from operator performance databases such as SACADA. This paper lays the foundations necessary to expand the cognitive and quantitative foundations of HRA. - Highlights: • A framework for building traceable BNs for HRA, based on cognitive causal paths. • A qualitative BN structure directly showing these causal paths is developed. • Node reduction algorithms are used for making the BN structure quantifiable. • BN quantified through expert estimates and observed data (Bayesian updating). • The framework is illustrated for a crew failure mode of IDHEAS.
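
Node reduction in a discrete BN amounts to marginalizing intermediate variables out of a causal path; a minimal sketch on an assumed chain A -> B -> C with invented CPTs:

```python
# Chain BN A -> B -> C; eliminating B yields the condensed network A -> C.
p_a = {0: 0.7, 1: 0.3}                                      # prior P(A)
p_b_given_a = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}    # CPT P(B|A), assumed
p_c_given_b = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}    # CPT P(C|B), assumed

# Node reduction: sum B out to get the condensed CPT P(C|A)
p_c_given_a = {a: {c: sum(p_b_given_a[a][b] * p_c_given_b[b][c] for b in (0, 1))
                   for c in (0, 1)}
               for a in (0, 1)}

# The reduced network gives the same marginal as the full chain
p_c1 = sum(p_a[a] * p_c_given_a[a][1] for a in (0, 1))
```

The condensed CPT preserves the joint distribution over the remaining nodes, which is why reduction lets the qualitative, psychologically detailed structure be collapsed to the coarser level at which HRA quantification data exist.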

This work presents modeling of experiments on a balanced biaxial (BB) pre-strained AA5754 alloy, subsequently reloaded uniaxially along the rolling direction and transverse direction. The material exhibits a complex plastic deformation response during the change in strain path due to 1) crystallographic texture, 2) aging (interactions between dislocations and Mg atoms) and 3) recovery (annihilation and re-arrangement of dislocations). With a BB prestrain of about 5 %, the aging process is dominant, and the yield strength for uniaxially deformed samples is observed to be higher than the flow stress during BB straining. The strain hardening rate after changing path is, however, lower than that for pre-straining. Higher degrees of pre-straining make the dynamic recovery more active. The dynamic recovery at higher strain levels compensates for the aging effect, and results in: 1) a reduction of the yield strength, and 2) an increase in the hardening rate of re-strained specimens along other directions. The yield strength of deformed samples is further reduced if these samples are left at room temperature to let static recovery occur. The synergistic influences of texture condition, aging and recovery processes on the material response make the modeling of strain path dependence of mechanical behavior of AA5754 challenging. In this study, the influence of crystallographic texture is taken into account by incorporating the latent hardening into a visco-plastic self-consistent model. Different strengths of dislocation glide interaction models in 24 slip systems are used to represent the latent hardening. Moreover, the aging and recovery effects are also included into the latent hardening model by considering strong interactions between dislocations and dissolved atom Mg and the microstructural evolution. These microstructural considerations provide a powerful capability to successfully describe the strain path dependence of plastic deformation behavior of AA5754

This edited book presents the recent developments in partial least squares path modeling (PLS-PM) and provides a comprehensive overview of the current state of the most advanced research related to PLS-PM. The first section of this book emphasizes the basic concepts and extensions of the PLS-PM method. The second section discusses the methodological issues that are the focus of the recent development of the PLS-PM method. The third part discusses the real-world application of the PLS-PM method in various disciplines. The contributions from expert authors in the field of PLS focus on topics such as factor-based PLS-PM, the perfect match between a model and a mode, quantile composite-based path modeling (QC-PM), ordinal consistent partial least squares (OrdPLSc), non-symmetrical composite-based path modeling (NSCPM), a modern view of mediation analysis in PLS-PM, a multi-method approach for identifying and treating unobserved heterogeneity, multigroup analysis (PLS-MGA), and the assessment of common method bias.

We review the recent experimental clarification of the fracture path in Liquid Metal Embrittlement (LME) of austenitic and martensitic steels. Using state-of-the-art characterization tools (Focused Ion Beam and Transmission Electron Microscopy), a clear understanding of the crack path is emerging for these systems, where a classical fractographic analysis fails to provide useful information. The main finding is that most of the cracking process takes place at grain boundaries, lath boundaries or mechanical twin boundaries, while cleavage or plastic flow localization is rarely the observed fracture mode. Based on these experimental insights, we sketch an on-going modeling strategy for LME crack initiation and propagation at the mesoscopic scale. At the microstructural scale, crystal plasticity constitutive equations are used to model the plastic deformation in metals and alloys. The microstructure used is either extracted from experimental measurements by 3D-EBSD (Electron Back-Scattering Diffraction) or simulated starting from a Voronoï approach. The presence of a crack within the polycrystalline aggregate is taken into account in order to study the surrounding plastic dissipation and the crack path. One key piece of information that can be extracted is the typical order of magnitude of the stress-strain state at grain boundaries, needed to constrain crack initiation models. The challenges of building predictive LME cracking models are outlined.

Routine reaction to approaching disruptions in tokamaks is currently largely limited to machine protection by mitigating an ongoing disruption, which remains a basic requirement for ITER and DEMO [1]. Nevertheless, a mitigated disruption still generates stress to the device. Additionally, in future fusion devices, high-performance discharge time itself will be very valuable. Instead of reacting only on generic features, occurring shortly before the disruption, the ultimate goal is to actively avoid approaching disruptions at an early stage, sustain the discharges whenever possible and restrict mitigated disruptions to major failures. Knowledge of the most relevant root causes and the corresponding chain of events leading to disruption, the disruption path, is a prerequisite. For each disruption path, physics-based sensors and adequate actuators must be defined and their limitations considered. Early reaction facilitates the efficiency of the actuators and enhances the probability of a full recovery. Thus, sensors that detect potential disruptions in time are to be identified. Once the entrance into a disruption path is detected, we propose a hierarchy of actions consisting of (I) recovery of the discharge to full performance or at least continuation with a less disruption-prone backup scenario, (II) complete avoidance of disruption to sustain the discharge or at least delay it for a controlled termination and, (III), only as last resort, a disruption mitigation. Based on the understanding of disruption paths, a hierarchical and path-specific handling strategy must be developed. Such schemes, testable in present devices, could serve as guidelines for ITER and DEMO operation. For some disruption paths, experiments have been performed at ASDEX Upgrade and TCV. Disruptions were provoked in TCV by impurity injection into ELMy H-mode discharges and in ASDEX Upgrade by forcing a density limit in H-mode discharges. The new approach proposed in this paper is discussed for

The authors present a path-integral formulation of the nuclear shell model using auxiliary fields; the path integral is evaluated by Monte Carlo methods. The method scales favorably with valence-nucleon number and shell-model basis: full-basis calculations are demonstrated up to the rare-earth region, which cannot be treated by other methods. Observables are calculated for the ground state and in a thermal ensemble. Dynamical correlations are obtained, from which strength functions are extracted through the Maximum Entropy method. Examples in the s-d shell, where exact diagonalization can be carried out, compare well with exact results. The "sign problem" generic to quantum Monte Carlo calculations is found to be absent for the attractive pairing-plus-multipole interactions. The formulation is general for interacting fermion systems and is well suited for parallel computation. The authors have implemented it on the Intel Touchstone Delta System, achieving better than 99% parallelization.

Vertical acceleration recordings of 21 underground nuclear explosions recorded at stations at Yucca Mountain provide the data for the development of three two-dimensional crustal velocity profiles for portions of the Nevada Test Site. Paths from Area 19, Area 20 (both Pahute Mesa), and Yucca Flat to Yucca Mountain have been modeled using asymptotic ray theory travel time and synthetic seismogram techniques. Significant travel time differences exist between the Yucca Flat and Pahute Mesa source areas; relative amplitude patterns at Yucca Mountain also shift with changing source azimuth. The three models, UNEPM1, UNEPM2, and UNEYF1, successfully predict the travel time and amplitude data for all three paths. 24 refs., 34 figs., 8 tabs

We propose an optical simulation model for quantum dot (QD) nanophosphors based on the mean free path concept, to understand precisely the optical performance of optoelectronic devices. A measurement methodology was also developed to obtain the desired optical characteristics, such as the mean free path and absorption spectra, for the QD nanophosphors which are to be incorporated into the simulation. The simulation results for QD-based white LED and OLED displays show good agreement with the experimental values from the fabricated devices in terms of spectral power distribution, chromaticity coordinates, CCT, and CRI. The proposed simulation model and measurement methodology can easily be applied to the design of many optoelectronic devices using QD nanophosphors to obtain high efficiency and the desired color characteristics.

This study examines the role of parenting behaviors in adolescent depression and other adolescent outcomes. Parenting behaviors considered were authoritative parenting, parental monitoring, and parental care. Adolescent outcomes considered were depression, alcohol use, tobacco use, and grades. A path model was employed to examine these variables together. A sample (n = 3,174) of 9th-12th grade high school students from seven contiguous counties in rural Virginia were examined on ...

While many hospitals are re-evaluating their current Picture Archiving and Communication System (PACS), few have a mature strategy for PACS deployment. Furthermore, strategies for implementation, and strategic and situational planning methods for the evolution of PACS maturity, are scarce in the scientific literature. Consequently, in this paper we propose a strategic planning method for PACS deployment. This method builds upon a PACS maturity model (PMM), based on the elaboration of the strategic alignment concept and the maturity growth path concept previously developed in the PACS domain. First, we review the literature on strategic planning for information systems and information technology and on PACS maturity. Second, the PMM is extended by applying four different strategic perspectives of the Strategic Alignment Framework, whereupon two types of growth paths (evolutionistic and revolutionary) are applied that focus on a roadmap for the PMM. This roadmap builds a path from one level of maturity to the next. An extended method for PACS strategic planning is developed. This method defines eight distinctive strategies for PACS strategic situational planning that allow decision-makers in hospitals to decide which approach best suits their hospital's current situation and future ambition, and what in principle is needed to evolve through the different maturity levels. The proposed method allows hospitals to strategically plan for PACS maturation. It is situational in that the required investments and activities depend on the alignment between the hospital strategy and the selected growth path. The inclusion of both the strategic alignment and maturity growth path concepts makes the planning method rigorous, and provides a framework for further empirical research and clinical practice.

Phase separation in substituted pyridines in water is usually described as an interplay between temperature-driven breakage of hydrogen bonds and the associating interaction of the van der Waals force. In previous quantum-chemical studies, the strength of hydrogen bonding between one water molecule and one pyridine molecule (the 1:1 complex) was assigned a pivotal role. It was accepted that the disassembly of the 1:1 complex at a critical temperature leads to phase separation and formation of the miscibility gap. Yet, for over two decades, notable empirical data and theoretical arguments were presented against that view, thus revealing the need for a revised quantum-mechanical description. In the present study, pyridine-water and 2,6-dimethylpyridine-water systems at different complexation stages are calculated using high-level Kohn-Sham theory. The hydrophobic-hydrophilic properties are accounted for by the polarizable continuum solvation model. Inclusion of solvation in free energy of formation calculations reveals that 1:1 complexes are abundant in the organically rich solvents, but higher-level oligomers (i.e., 2:1 dimers with two pyridines and one water molecule) are the only feasible stable products in the more polar media. At the critical temperature, the dissolution of the external hydrogen bonds between the 2:1 dimer and the surrounding water molecules induces the demixing process. The 1:1 complex acts as a precursor in the formation of the dimers but is not directly involved in the demixing mechanism. The existence of the miscibility gap in one pyridine-water system and the lack of it in another is explained by the ability of the former to maintain stable dimerization. The free energy of formation along several reaction paths producing the 2:1 dimers is calculated and critically analyzed.

In this paper we propose a relay selection scheme which uses collected location information together with a path loss model for relay selection, and analyze the performance impact of mobility and different error causes on this scheme. Performance is evaluated in terms of bit error rate ... by simulations. The SNR-measurement-based relay selection scheme proposed previously is unsuitable for use with fast-moving users in e.g. vehicular scenarios due to a large signaling overhead. The proposed location-based scheme is shown to work well with fast-moving users due to a lower signaling overhead ... in these situations. As the location-based scheme relies on a path loss model to estimate link qualities and select relays, the sensitivity with respect to inaccurate estimates of the unknown path loss model parameters is investigated. The parameter ranges that result in useful performance were found...

This paper presents a behavioral model representing the human steering performance in teleoperated unmanned ground vehicles (UGVs). Human steering performance in teleoperation is considerably different from the performance in regular onboard driving situations due to significant communication delays in teleoperation systems and limited information human teleoperators receive from the vehicle sensory system. Mathematical models capturing the teleoperation performance are a key to making the development and evaluation of teleoperated UGV technologies fully simulation based and thus more rapid and cost-effective. However, driver models developed for the typical onboard driving case do not readily address this need. To fill the gap, this paper adopts a cognitive model that was originally developed for a typical highway driving scenario and develops a tuning strategy that adjusts the model parameters in the absence of human data to reflect the effect of various latencies and UGV speeds on driver performance in a teleoperated path-following task. Based on data collected from a human subject test study, it is shown that the tuned model can predict both the trend of changes in driver performance for different driving conditions and the best steering performance of human subjects in all driving conditions considered. The proposed model with the tuning strategy has a satisfactory performance in predicting human steering behavior in the task of teleoperated path following of UGVs. The established model is a suited candidate to be used in place of human drivers for simulation-based studies of UGV mobility in teleoperation systems.

Enthalpies of reaction for the initial steps in the pyrolysis of lignin have been evaluated at the CBS-4M level of theory using fully substituted β-O-4 dilignols. Values for competing unimolecular decomposition reactions are consistent with results previously published for phenethyl phenyl ether models, but with lowered selectivity. Chain-propagating reactions of free...

In mobile radio systems, path loss models are necessary for proper planning, interference estimation, frequency assignment and cell parameter settings, which are basic to the network planning process as well as to Location Based Services (LBS) techniques that are not based on the GPS system. Empirical models are the most adjustable models and can be tuned to different types of environments. In this paper, the Lee path loss model has been tuned using the Least Squares (LS) algorithm to fit measured data for a TETRA system operating at 400 MHz in Riyadh's urban areas and suburbs. Consequently, the Lee model's parameters (L0, γ) are obtained for the targeted areas. The performance of the tuned Lee model is then compared to the three most widely used empirical path loss models: the Hata, ITU-R and COST 231 Walfisch-Ikegami non-line-of-sight (CWI-NLOS) path loss models. The performance criteria selected for the comparison of the various empirical path loss models are the Root Mean Square Error (RMSE) and the goodness of fit (R2). The RMSE and R2 between the actual and predicted data are calculated for the various path loss models. It turned out that the tuned Lee model outperforms the other empirical models. (author)
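The tuning step can be sketched concretely: a log-distance path loss model of the form PL(d) = L0 + 10·γ·log10(d/d0) is linear in its parameters, so an intercept L0 and slope γ can be recovered by ordinary least squares, and RMSE and R2 then quantify the fit. The sketch below is a generic illustration, not the authors' code; the function name and the single-slope model form are assumptions.

```python
import numpy as np

def tune_path_loss(d_m, pl_db, d0=1.0):
    """Least-squares fit of a log-distance path loss model
    PL(d) = L0 + 10*gamma*log10(d/d0) to measured data.
    Returns (L0, gamma, RMSE, R^2)."""
    x = 10.0 * np.log10(np.asarray(d_m, dtype=float) / d0)
    y = np.asarray(pl_db, dtype=float)
    # Design matrix [1, 10*log10(d/d0)] -> solve for [L0, gamma].
    A = np.column_stack([np.ones_like(x), x])
    (L0, gamma), *_ = np.linalg.lstsq(A, y, rcond=None)
    pred = L0 + gamma * x
    rmse = float(np.sqrt(np.mean((y - pred) ** 2)))
    ss_res = float(np.sum((y - pred) ** 2))
    ss_tot = float(np.sum((y - y.mean()) ** 2))
    r2 = 1.0 - ss_res / ss_tot
    return float(L0), float(gamma), rmse, r2
```

With real drive-test data the fit would be repeated per environment (urban vs. suburban), as the abstract describes, and the resulting RMSE/R2 compared against the other empirical models.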

Irregular surface topography has revolutionized how seismic traveltime is calculated and the data are processed. There are two main schemes for dealing with an irregular surface in the seismic first-arrival traveltime calculation: (1) expanding the model and (2) flattening the surface irregularities. In the first scheme, a notional infill medium is added above the surface to expand the physical space into a regular space, as required by the eikonal equation solver. Here, we evaluate the chosen propagation velocity in the infill medium through ray path tracking with the eikonal equation-solved traveltime field, and observe that the ray paths will be physically unrealistic for some values of this propagation velocity. The choice of a suitable propagation velocity in the infill medium is crucial for seismic processing of irregular topography. Our model expansion criterion for dealing with surface topography in the calculation of traveltime and ray paths using the eikonal equation highlights the importance of both the propagation velocity of the infill physical medium and the topography gradient. (paper)

Recently, wireless network technologies have been designed for most applications. Congestion arising in a wireless network degrades performance and reduces throughput. A congestion-free network is quite essential at the transport layer to prevent performance degradation in a wireless network. Game theory is a branch of applied mathematics and applied science used in wireless networking, political science, biology, computer science, philosophy and economics. A great challenge of wireless networks is congestion caused by various factors. Effective congestion-free alternate path routing is essential to increase network performance. The Stackelberg game theory model is currently employed as an effective tool to design and formulate congestion issues in wireless networks. This work uses a Stackelberg game to design an alternate path model to avoid congestion. In this game, leaders and followers are selected to choose an alternate routing path. The correlated equilibrium is used in the Stackelberg game for making better decisions between non-cooperation and cooperation. Congestion is continuously monitored to increase the throughput in the network. Simulation results show that the proposed scheme can extensively improve network performance by reducing congestion with the help of the Stackelberg game and thereby enhance throughput.
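The leader-follower structure of a Stackelberg routing game can be illustrated with a toy two-link example (purely illustrative; the linear latency functions, unit loads, and function names are assumptions, not the paper's formulation): the leader commits to a link first, the follower best-responds to that commitment, and the leader anticipates the response when choosing.

```python
# Toy Stackelberg routing game: two links with linear latency
# l(x) = a*x + b, one leader and one follower, each placing one
# unit of load. The leader moves first and anticipates the
# follower's best response.

def latency(a, b, load):
    return a * load + b

def follower_best_response(links, leader_choice):
    # The follower joins the link with the lowest latency given
    # where the leader already placed its unit of load.
    costs = []
    for i, (a, b) in enumerate(links):
        load = 2 if i == leader_choice else 1
        costs.append(latency(a, b, load))
    return min(range(len(links)), key=lambda i: costs[i])

def stackelberg_equilibrium(links):
    # The leader enumerates its options, simulating the follower's
    # best response for each, and picks the cheapest outcome.
    best = None
    for i in range(len(links)):
        j = follower_best_response(links, i)
        load_i = 2 if j == i else 1
        cost_i = latency(*links[i], load_i)
        if best is None or cost_i < best[2]:
            best = (i, j, cost_i)
    return best  # (leader link, follower link, leader's latency)
```

In this sketch the leader may deliberately take a link the follower will avoid, which is the congestion-avoiding behavior the abstract attributes to the leader/follower roles.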

We present the path-relationship model that supports multi-dimensional data modeling and querying. A path-relationship database is composed of sets of paths and sets of relationships. A path is a sequence of related elements (atoms, paths, and sets of paths). A relationship is a binary path ... to create nested path structures. We present an SQL-like query language that is based on path expressions and we show how to use it to express multi-dimensional path queries that are suited for advanced data analysis in decision support environments like data warehousing environments...

The present paper presents an extension of the constrained-path quantum Monte Carlo approach that allows one to reconstruct non-yrast states, in order to reach the complete spectroscopy of nuclei within the interacting shell model. As in the yrast case studied in a previous work, the formalism involves a variational symmetry-restored wave function assuming two central roles. First, it guides the underlying Brownian motion to improve the efficiency of the sampling. Second, it constrains the stochastic paths according to the phaseless approximation to control the sign or phase problems that usually plague fermionic QMC simulations. Proof-of-principle results in the sd valence space are reported. They prove the ability of the scheme to offer remarkably accurate binding energies for both even- and odd-mass nuclei, irrespective of the considered interaction. (orig.)

This paper presents the development of a PLS-SEM path model of the delay factors of the Saudi Arabian construction industry, focussing on Mecca City. The model was developed and assessed using SmartPLS v3.0 software and consists of 37 factors/manifests in 7 groups/independent variables and one dependent variable, which is the delay of construction projects. The model was rigorously assessed at the measurement and structural components, and the outcomes found that the model achieved the required threshold values. At the structural level of the model, among the seven groups, the client and consultant group has the highest impact on construction delay, with a path coefficient β-value of 0.452, and the project management and contract administration group has the least impact on construction delay, with a β-value of 0.016. The overall model has moderate explanatory power, with an R2 value of 0.197, for the Saudi Arabian construction industry. This model will be able to assist practitioners in Mecca City to pay more attention to risk analysis for potential construction delay.

In many catalytic reactions lateral interactions between adsorbates are believed to have a strong influence on the reaction rates. We apply a microkinetic model to explore the effect of lateral interactions and how to efficiently take them into account in a simple catalytic reaction. Three different approximations are investigated: site, mean-field, and quasichemical approximations. The obtained results are compared to accurate Monte Carlo numbers. In the end, we apply the approximations to a real catalytic reaction, namely, ammonia synthesis.

Antenatal maternal mental health problems have numerous consequences for the well-being of both mother and child. This study aimed to test and construct a pertinent model of antenatal depressive symptoms within the conceptual framework of a stress process model. This study utilized a cross-sectional design. Participants were adult women (18 years or older) having a healthy pregnancy, in their third trimester (the mean gestation was 34.71 weeks). Depressive and anxiety symptoms were measured by Zung's Self-rating Depressive and Anxiety Scales, stress was measured by the Pregnancy-related Pressure Scale, and social support and coping strategies were measured by the Social Support Rating Scale and the Simplified Coping Style Questionnaire, respectively. Path analysis was applied to examine the hypothesized causal paths between study variables. A total of 292 subjects were enrolled. The final testing model showed good fit, with normed χ² = 32.317, p = 0.061, CFI = 0.961, TLI = 0.917, IFI = 0.964, NFI = 0.900, RMSEA = 0.042. This path model supported the proposed model within the theoretical framework of the stress process model. Pregnancy-related stress, financial strain and active coping have both direct and indirect effects on depressive symptoms. Psychological preparedness for delivery, social support and anxiety levels have direct effects on antenatal depressive symptoms. Good preparedness for delivery could reduce depressive symptoms, while higher levels of anxiety could significantly increase depressive symptoms. Additionally, there were indirect effects of miscarriage history, irregular menstruation, partner relationship and passive coping on depressive symptoms. The empirical support from this study has enriched theories on the determinants of depressive symptoms among Chinese primipara, and could facilitate the formulation of appropriate interventions for reducing antenatal depressive symptoms, and enhancing the mental health of

Fewest-switches surface hopping (FSSH) is combined with transition path sampling (TPS) to produce a new method called nonadiabatic path sampling (NAPS). The NAPS method is validated on a model electron transfer system coupled to a Langevin bath. Numerically exact rate constants are computed using the reactive flux (RF) method over a broad range of solvent frictions that span from the energy diffusion (low friction) regime to the spatial diffusion (high friction) regime. The NAPS method is shown to quantitatively reproduce the RF benchmark rate constants over the full range of solvent friction. Integrating FSSH within the TPS framework expands the applicability of both approaches and creates a new method that will be helpful in determining detailed mechanisms for nonadiabatic reactions in the condensed-phase.

The intelligent CAD/CAM system named VIRTUAL MANUFACTURE has been created. It consists of four intelligent software modules: the module for virtual NC machine creation, the module for geometric product modeling and automatic NC path generation, the module for virtual NC machining, and the module for virtual product evaluation. In this paper the second intelligent software module is presented. This module enables feature-based product modeling, carried out via automatic saving of the designed product's geometric features as knowledge data. The knowledge data are afterwards applied for automatic NC program generation for NC machining of the designed product. (Author)

The working principle of the refractive-type fiber optic liquid level sensor is analyzed in detail based on the light refraction principle. The optic path models are developed in consideration of the common simplification and of the residual liquid film on the glass tube wall. The calculating formulae for the models are derived, constraint conditions are obtained, influencing factors are discussed, and the scopes and skills of application are analyzed through instance simulations. The research results are useful in directing the correct usage of the fiber optic liquid level sensor, especially in special cases, such as those involving viscous liquid in the glass tube monitoring.

We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shape region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the smoothness of the shape, a smooth constraint is employed to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 digitized lung radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.
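The minimal-path search at the core of such methods can be illustrated with a generic Dijkstra shortest path on a 2-D cost grid (an illustrative sketch only: the actual ASM-MP method searches a fan-shaped region with a deformable model and a statistical shape prior, which this toy omits):

```python
import heapq

def minimal_path(cost, start, goal):
    """Dijkstra minimal-cost path on a 2-D cost grid (4-connected).
    The path cost is the sum of cell costs, including the start cell.
    Returns the list of (row, col) cells from start to goal."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Walk the predecessor chain back from the goal.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```

In a boundary-tracking setting, the grid costs would be derived from image features (e.g., low cost along strong edges), so the minimal path follows the boundary.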

This book describes and characterizes an extension to the classical path coupling method applied to statistical mechanical models, referred to as aggregate path coupling. In conjunction with large deviations estimates, the aggregate path coupling method is used to prove rapid mixing of Glauber dynamics for a large class of statistical mechanical models, including models that exhibit discontinuous phase transitions which have traditionally been more difficult to analyze rigorously. The book shows how the parameter regions for rapid mixing for several classes of statistical mechanical models are derived using the aggregate path coupling method.

This paper describes the framework of a real-time simulation system to model human behavior and reactions in dangerous environments. The system utilizes the latest 3D computer animation techniques, combined with artificial intelligence, robotics and psychology, to model human behavior, reactions and decision making under expected/unexpected dangers in real time in virtual environments. The development of the system includes: classification of conscious/subconscious behaviors and reactions...

Profiling of complex proteins by means of mass spectrometry (MS) frequently requires that certain chemical modifications of their covalent structure (e.g., reduction of disulfide bonds) be carried out prior to the MS or MS/MS analysis. Traditionally, these chemical reactions take place in the off-line mode to allow the excess reagents (the majority of which interfere with the MS measurements and degrade the analytical signal) to be removed from the protein solution prior to MS measurements. In addition to a significant increase in the analysis time, chemical reactions may result in a partial or full loss of the protein if the modifications adversely affect its stability, e.g., making it prone to aggregation. In this work we present a new approach to solving this problem by carrying out the chemical reactions online, using the reactive chromatography scheme on a size exclusion chromatography (SEC) platform with MS detection. This is achieved by using a cross-path reaction scheme, i.e., by delaying the protein injection onto the SEC column (with respect to the injection of the reagent plug containing a disulfide-reducing agent), which allows the chemical reactions to be carried out inside the column for a limited (and precisely controlled) period of time, while the two plugs overlap inside the column. The reduced protein elutes separately from the unconsumed reagents, allowing the signal suppression in ESI to be avoided and enabling sensitive MS detection. The new method is used to measure fucosylation levels of the plasma protein haptoglobin at the whole-protein level following online reduction of disulfide-linked tetrameric species to monomeric units. The feasibility of top-down fragmentation of disulfide-containing proteins is also demonstrated using β2-microglobulin and a monoclonal antibody (mAb). The new online technique is both robust and versatile, as the cross-path scheme can be readily expanded to include multiple reactions in a single experiment (as

A contaminant, which also contains a polymer is in the form of droplets on a solid surface. It is to be removed by the action of a decontaminant, which is applied in aqueous solution. The contaminant is only sparingly soluble in water, so the reaction mechanism is that it slowly dissolves...

While heterosexist family undermining has been demonstrated to be a developmental risk factor in the life of persons with same-gender orientation, the issue of protective family factors is both controversial and relatively neglected. In this study of Israeli gay males (N = 114), we focused on the interrelations of family support, family acceptance and family knowledge of gay orientation, and gay male identity formation, and their effects on mental health and self-esteem. A path model was proposed based on the hypotheses that family support, family acceptance, family knowledge, and gay identity formation have an impact on psychological adjustment, and that family support has an effect on gay identity formation that is mediated by family acceptance. The assessment of gay identity formation was based on an established stage model that was streamlined for cross-cultural practice by defining three basic processes of same-gender identity formation: self-definition, self-acceptance, and disclosure (Elizur & Mintzer, 2001). The testing of our conceptual path model demonstrated an excellent fit with the data. An alternative model that hypothesized effects of gay male identity on family acceptance and family knowledge did not fit the data. Interpreting these results, we propose that the main effect of family support/acceptance on gay identity is related to the process of disclosure, and that both general family support and family acceptance of same-gender orientation play a significant role in the psychological adjustment of gay men.

This paper presents an approach to a comparative evaluation of the predictive ability of spallation reaction models based on widely used, well-proven multiple-criteria decision analysis methods (MAVT/MAUT, AHP, TOPSIS, PROMETHEE), and the results of such a comparison for 17 spallation reaction models for the interaction of high-energy protons with natPb.

Grid cells in the rat entorhinal cortex display strikingly regular firing responses to the animal's position in 2-D space and have been hypothesized to form the neural substrate for dead-reckoning. However, errors accumulate rapidly when velocity inputs are integrated in existing models of grid cell activity. To produce grid-cell-like responses, these models would require frequent resets triggered by external sensory cues. Such inadequacies, shared by various models, cast doubt on the dead-reckoning potential of the grid cell system. Here we focus on the question of accurate path integration, specifically in continuous attractor models of grid cell activity. We show, in contrast to previous models, that continuous attractor models can generate regular triangular grid responses, based on inputs that encode only the rat's velocity and heading direction. We consider the role of the network boundary in the integration performance of the network and show that both periodic and aperiodic networks are capable of accurate path integration, despite important differences in their attractor manifolds. We quantify the rate at which errors in the velocity integration accumulate as a function of network size and intrinsic noise within the network. With a plausible range of parameters and the inclusion of spike variability, our model networks can accurately integrate velocity inputs over a maximum of approximately 10-100 meters and approximately 1-10 minutes. These findings form a proof-of-concept that continuous attractor dynamics may underlie velocity integration in the dorsolateral medial entorhinal cortex. The simulations also generate pertinent upper bounds on the accuracy of integration that may be achieved by continuous attractor dynamics in the grid cell network. We suggest experiments to test the continuous attractor model and differentiate it from models in which single cells establish their responses independently of each other.
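The random-walk growth of dead-reckoning error described above can be demonstrated with a minimal sketch (illustrative assumptions: constant true velocity and heading, independent Gaussian velocity noise at each step; this is not the attractor-network model itself), in which the position error grows roughly as the square root of the number of integration steps:

```python
import math
import random

def integrate_path(speed, heading, steps, dt, noise_sd, seed=0):
    """Dead-reckon a 2-D position by integrating a constant
    velocity/heading signal corrupted by Gaussian noise; return the
    final distance between the noisy estimate and the true position."""
    rng = random.Random(seed)
    x = y = tx = ty = 0.0
    vx = speed * math.cos(heading)
    vy = speed * math.sin(heading)
    for _ in range(steps):
        tx += vx * dt                              # true position
        ty += vy * dt
        x += (vx + rng.gauss(0.0, noise_sd)) * dt  # noisy integration
        y += (vy + rng.gauss(0.0, noise_sd)) * dt
    return math.hypot(x - tx, y - ty)
```

Averaged over many noise realizations, the error after N steps scales like noise_sd·dt·sqrt(N), which is the accumulation behavior that limits integration to the finite distances and times quoted in the abstract unless sensory cues reset the estimate.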

The Commission of Inquiry on ''Future Nuclear Energy Policy'' of the 8th Deutscher Bundestag examined the question of the long-term exploitation of nuclear energy in the Federal Republic of Germany within a more general framework of energy policy and, for this purpose, created the concept of energy paths. To calculate these energy paths, the SOPKA-E simulation model was developed and applied at the Karlsruhe Nuclear Research Center. In Chapter 2, the central part of this report, the form and contents of the path modeling are described in detail. To help readers understand the energy paths concept, the general background of energy policy in the seventies, which gave rise to the contents of the energy paths, is outlined in a survey article in Chapter 1. Chapter 3 is a description of the energy projections contained in the joint expert opinion on the third updated version of the Energy Program in the light of the energy paths. In Chapter 4 some approaches - albeit fragmentary - are outlined which have been adopted by the Commission of Inquiry of the 9th Deutscher Bundestag in adapting energy paths to the present situation. The presentation in this report of the model computations with SOPKA-E is meant to be a documentation. (orig./UA)

Unsafe behavior is closely related to occupational accidents. Work pressure is one of the main factors affecting employees' behavior. The aim of the present study was to provide a path analysis model explaining how work pressure affects safety behavior. Using a self-administered questionnaire, six variables supposed to affect employees' safety behavior were measured. The path analysis model was constructed based on several hypotheses. The goodness of fit of the model was assessed using both absolute and comparative fit indices. Work pressure was determined not to influence safety behavior directly. However, it negatively influenced other variables. Group attitude and personal attitude toward safety were the main factors mediating the effect of work pressure on safety behavior. Among the variables investigated in the present study, group attitude, personal attitude and work pressure had the strongest effects on safety behavior. Managers should consider that in order to improve employees' safety behavior, work pressure should be reduced to a reasonable level, and concurrently a supportive environment, which ensures a positive group attitude toward safety, should be provided. Replication of the study is recommended.

its exact analytic integration to provide equally simple temperature dependent reaction rate constant. This is mostly due to the discrete internal... discrete rotational mode may be replaced by its continuous analog, the vibrational mode cannot be simplified this way due to large energy spacing...Rogasinsky, “Analysis of the numerical techniques of the direct simulation Monte Carlo method in the rarefied gas dynamics,” Russ. J. Numer. Anal. Math

A series of reaction centers of Rhodococcus capsulatus isolated from a set of mutated organisms modified by site-directed mutagenesis at residues M208 and L181 are described. Changes in the amino acid at these sites affect both the energetics of the systems as well as the chemical kinetics for the initial ET event. Two empirical relations among the different mutants for the reduction potential and the ET rate are presented.

Electronic structure methods based on density functional theory are used to construct a reaction-path Hamiltonian for CH4 dissociation on the Ni(100) and Ni(111) surfaces. Both quantum and quasi-classical trajectory approaches are used to compute dissociative sticking probabilities, including all molecular degrees of freedom and the effects of lattice motion. Both approaches show a large enhancement in sticking when the incident molecule is vibrationally excited, and both can reproduce the mode specificity observed in experiments. However, the quasi-classical calculations significantly overestimate the ground state dissociative sticking at all energies, and the magnitude of the enhancement in sticking with vibrational excitation is much smaller than that computed using the quantum approach or observed in the experiments. The origin of this behavior is an unphysical flow of zero-point energy from the nine normal vibrational modes into the reaction coordinate, giving large values for reaction at energies below the activation energy. Perturbative assumptions made in the quantum studies are shown to be accurate at all energies studied.

The present paper demonstrates that insights from the affordances perspective can contribute to developing a more comprehensive model of grammaticalization. The authors argue that the grammaticalization process is afforded differently depending on the values of three contributing parameters: the factor (schematized as a qualitative-quantitative map or a wave of a gram), the environment (understood as the structure of the stream along which the gram travels), and the actor (narrowed to certain cognitive-epistemological capacities of the users, in particular to the fact of being a native speaker). By relating grammaticalization to these three parameters and by connecting it to the theory of optimization, the proposed model offers a better approximation to realistic cases of grammaticalization: the actor and environment are overtly incorporated into the model, and divergences from canonical grammaticalization paths are both tolerated and explicable.

Liner shipping networks are the backbone of international trade, providing the low transportation cost that is a major driver of globalization. These networks are under constant pressure to deliver capacity, cost effectiveness, and environmentally conscious transport solutions. This article proposes a new path-based MIP model for the Liner Shipping Network Design Problem that minimizes the cost of vessels and their fuel consumption, facilitating a green network. The proposed model reduces problem size using a novel aggregation of demands. A decomposition method enabling delayed column generation is presented. The subproblems have a structure similar to Vehicle Routing Problems and can be solved using dynamic programming. An algorithm has been implemented for this model, unfortunately with discouraging results due to the structure of the subproblem and the lack of proper dominance criteria...

This paper aims to give the researcher a clear overview of Path Loss (PL). The relevant data have been extracted from the literature and presented in a clear and precise manner. Only a few studies address PL at FM frequencies; the majority consider telephony frequencies as the source. This paper reviews PL in urban and rural areas of different places due to factors such as buildings, trees, antenna height, and forests. Studies are segregated by common parameters such as frequency, model, and location, tabulated, and compared by frequency, location, and best-fit model. A scatter chart is drawn to make the findings clearer and more understandable. However, location-specific PL models are required to investigate RF propagation in a given terrain.
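As a concrete point of reference for the models such surveys compare, the widely used log-distance form can be sketched as follows; the reference distance, frequency, and path-loss exponent here are illustrative assumptions, not values taken from the surveyed papers.

```python
import math

def path_loss_db(d_m, f_mhz, d0_m=1.0, n=3.0):
    """Log-distance path loss: free-space loss at a reference distance d0,
    then decay with a terrain-dependent exponent n (n ~ 2 in free space,
    roughly 3-5 in built-up areas). All defaults are illustrative."""
    # Friis free-space loss at d0: 32.44 + 20 log10(d_km) + 20 log10(f_MHz)
    fspl_d0 = 32.44 + 20 * math.log10(d0_m / 1000.0) + 20 * math.log10(f_mhz)
    return fspl_d0 + 10 * n * math.log10(d_m / d0_m)

loss = path_loss_db(1000.0, 100.0)  # 1 km at an FM-band frequency
```

A "best-fit model" in such comparisons is then essentially a choice of exponent n (plus a shadowing term) that minimizes the error against measured losses.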

Most algorithms for least-cost path analysis calculate the slope gradient between the source cell and the adjacent cells to reflect terrain slope in the travel-cost weights. However, these algorithms share a limitation: they cannot find a least-cost path between two cells when obstacle cells with very high or very low elevation lie between the source cell and the target cell. This study presents a new algorithm for least-cost path analysis that corrects digital elevation models of natural landscapes to find feasible paths satisfying a constraint of maximum or minimum slope gradient. The new algorithm calculates the slope gradient between the center cell and non-adjacent cells using the concept of extended move-sets. If the algorithm finds feasible paths between the center cell and non-adjacent cells that satisfy the slope constraint, the elevations of obstacle cells lying between the two cells are corrected in the digital elevation model. After calculating the cumulative travel cost to the destination, weighted by the difference between the original and corrected elevations, the algorithm extracts the least-cost path. Applying the proposed algorithm to synthetic and real-world data sets shows that it provides more accurate least-cost paths than the conventional algorithms implemented in commercial GIS software such as ArcGIS.
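The effect of a slope constraint on least-cost routing can be illustrated with a plain Dijkstra search over a DEM grid in which moves whose gradient exceeds a threshold are blocked. This is a simplified stand-in for the paper's extended move-set and elevation-correction algorithm; all names and parameter values are hypothetical.

```python
import heapq, math

def least_cost_path(dem, cell_size, start, goal, max_slope=0.5):
    """Dijkstra over a DEM grid; moves whose slope gradient (rise/run)
    exceeds max_slope are treated as blocked. Cost = 3-D travel distance."""
    rows, cols = len(dem), len(dem[0])
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), math.inf):
            continue  # stale queue entry
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == dc == 0:
                    continue
                nr, nc = r + dr, c + dc
                if not (0 <= nr < rows and 0 <= nc < cols):
                    continue
                run = cell_size * math.hypot(dr, dc)
                rise = dem[nr][nc] - dem[r][c]
                if abs(rise) / run > max_slope:
                    continue  # slope constraint violated: move blocked
                nd = d + math.hypot(run, rise)
                if nd < dist.get((nr, nc), math.inf):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], goal
    while node in prev:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1], dist.get(goal)
```

On a flat grid the diagonal is taken directly; inserting a single high cell between source and target forces the path around it at a higher cumulative cost, which is exactly the situation the paper's elevation-correction step is designed to handle.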

events or by producing goods with the knowledge they already have. The existence of balanced growth path solutions implies exponential growth of the overall production in time. We prove existence of balanced growth path solutions if the initial

In this paper, explicit Model Predictive Control (MPC) is employed for automated lane-keeping systems. MPC has been regarded as the key to handling such constrained systems. However, the massive computational complexity of MPC, which employs online optimization, has been a major drawback that limits the range of its target applications to relatively small and/or slow problems. Explicit MPC can reduce this computational burden using a multi-parametric quadratic programming technique (mp-QP). The control objective is to derive an optimal front steering wheel angle at each sampling time so that autonomous vehicles travel along desired paths, including straight, circular, and clothoid parts, at high entry speeds. In terms of the design of the proposed controller, a method of choosing the weighting matrices in the optimization problem and the range of horizons for path-following control is described through simulations. For verification of the proposed controller, simulation results obtained using other control methods such as MPC, the Linear-Quadratic Regulator (LQR), and a driver model are employed, and CarSim, which reflects the features of a vehicle more realistically than MATLAB/Simulink, is used for reliable demonstration. PMID:29534080
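The receding-horizon mechanics behind MPC can be sketched for a toy lateral-error model. With no constraints the quadratic program collapses to one linear solve per step, unlike the constrained mp-QP formulation discussed in the paper; the dynamics, horizon, and weights below are invented for illustration.

```python
import numpy as np

# Toy lateral dynamics: state [lateral error, error rate], steering input u.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
N, Q, R = 20, np.diag([1.0, 0.1]), 0.01 * np.eye(1)

def mpc_step(x):
    """One receding-horizon step: minimize sum of x_k'Qx_k + u_k'Ru_k over
    N steps; unconstrained, so the QP reduces to a least-squares solve."""
    n, m = A.shape[0], B.shape[1]
    # Stacked predictions: X = F x + G U
    F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    G = np.zeros((N * n, N * m))
    for i in range(N):
        for j in range(i + 1):
            G[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    Qbar, Rbar = np.kron(np.eye(N), Q), np.kron(np.eye(N), R)
    H = G.T @ Qbar @ G + Rbar
    U = np.linalg.solve(H, -G.T @ Qbar @ F @ x)
    return U[:m]  # apply only the first input (receding horizon)

x = np.array([1.0, 0.0])  # start 1 m off the lane center
for _ in range(100):
    x = A @ x + B @ mpc_step(x)
```

Explicit MPC precomputes this control law offline as a piecewise-affine function of x, so the online work reduces to a table lookup; the sketch above solves it online instead.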

A new path integral representation of Lorentzian Engle–Pereira–Rovelli–Livine spinfoam model is derived by employing the theory of unitary representation of SL(2,C). The path integral representation is taken as a starting point of semiclassical analysis. The relation between the spinfoam model and classical simplicial geometry is studied via the large-spin asymptotic expansion of the spinfoam amplitude with all spins uniformly large. More precisely, in the large-spin regime, there is an equivalence between the spinfoam critical configuration (with certain nondegeneracy assumption) and a classical Lorentzian simplicial geometry. Such an equivalence relation allows us to classify the spinfoam critical configurations by their geometrical interpretations, via two types of solution-generating maps. The equivalence between spinfoam critical configuration and simplicial geometry also allows us to define the notion of globally oriented and time-oriented spinfoam critical configuration. It is shown that only at the globally oriented and time-oriented spinfoam critical configuration, the leading-order contribution of spinfoam large-spin asymptotics gives precisely an exponential of Lorentzian Regge action of General Relativity. At all other (unphysical) critical configurations, spinfoam large-spin asymptotics modifies the Regge action at the leading-order approximation. (paper)

PURPOSE: The objective of this study was to examine the interrelationships among individualism, collectivism, homosexuality-related stigma, social support, and condom use among Chinese homosexual men. METHODS: A cross-sectional study using the respondent-driven sampling approach was conducted among 351 participants in Shenzhen, China. Path analytic modeling was used to analyze the interrelationships. RESULTS: The results of path analytic modeling document the following statistically significant associations with regard to homosexuality: (1) higher levels of vertical collectivism were associated with higher levels of public stigma [β (standardized coefficient) = 0.12] and self stigma (β = 0.12); (2) higher levels of vertical individualism were associated with higher levels of self stigma (β = 0.18); (3) higher levels of horizontal individualism were associated with higher levels of public stigma (β = 0.12); (4) higher levels of self stigma were associated with higher levels of social support from sexual partners (β = 0.12); and (5) lower levels of public stigma were associated with consistent condom use (β = -0.19). CONCLUSIONS: The findings enhance our understanding of how individualist and collectivist cultures influence the development of homosexuality-related stigma, which in turn may affect individuals' decisions to engage in HIV-protective practices and seek social support. Accordingly, the development of HIV interventions for homosexual men in China should take the characteristics of Chinese culture into consideration. PMID:21731850

This study examined the fit of a path model of the relationships among stress, self-esteem, aggression, depression, suicidal ideation, and violent behavior in adolescents. The subjects were 1,177 adolescents. Data were collected through self-report questionnaires and analyzed with the SPSS and AMOS programs. Stress, self-esteem, aggression, and depression showed direct effects on suicidal ideation, while stress, self-esteem, and aggression also showed indirect effects. Stress, self-esteem, aggression, and suicidal ideation showed direct effects on violent behavior, while stress, self-esteem, aggression, and depression showed indirect effects. The modified path model of adolescents' suicidal ideation and violent behavior fit the data well. These results suggest that adolescents' suicidal ideation and violent behavior can be decreased by reducing stress, aggression, and depression and by increasing self-esteem. Based on these outcomes, it is necessary to design an intervention program that emphasizes reducing stress, aggression, and depression and increasing self-esteem in order to decrease adolescents' suicidal ideation and violence.
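The arithmetic of direct and indirect effects in such a path model can be sketched with ordinary least squares on synthetic data. The three-variable structure and its coefficients below are invented for illustration and are not the study's model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
stress = rng.normal(size=n)
# Hypothetical structural model (illustration only, not the study's data):
# stress -> depression -> ideation, plus a direct stress -> ideation path.
depression = 0.5 * stress + rng.normal(scale=0.5, size=n)
ideation = 0.3 * stress + 0.6 * depression + rng.normal(scale=0.5, size=n)

def ols(y, *xs):
    """Return OLS slope coefficients (intercept dropped)."""
    X = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

(a,) = ols(depression, stress)               # stress -> depression
b_stress, b_dep = ols(ideation, stress, depression)
direct = b_stress                            # direct effect of stress
indirect = a * b_dep                         # mediated via depression
total = direct + indirect
```

The "indirect effect" reported by path-analysis software is exactly this product of path coefficients along the mediated route; programs such as AMOS additionally estimate all equations simultaneously and report fit indices.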

Iron-ore reduction has attracted much interest over the last three decades, since it is a core process in the steel industry. The iron ore is reduced to iron using blast furnace and fluidized bed technologies. To investigate the harsh conditions inside fluidized bed reactors, computational tools can be utilized. One such tool is the CFD-DEM method, in which the gas-phase reactions and governing equations are calculated on the Eulerian (CFD) side, whereas the particle reac...

Objective Fruit and vegetable intake (F&V) is influenced by behavioral and environmental factors, but these have rarely been assessed simultaneously. We aimed to quantify the relative influence of supermarket availability, perceptions of the food environment, and shopping behavior on F&V intake. Design A cross-sectional study. Setting Eight counties in South Carolina, USA, with verified locations of all supermarkets. Subjects A telephone survey of 831 household food shoppers ascertained F&V intake with a 17-item screener, primary food store location, shopping frequency, and perceptions of healthy food availability, and calculated GIS-based supermarket availability. Path analysis was conducted. We report standardized beta coefficients on paths significant at the 0.05 level. Results Frequency of grocery shopping at the primary food store (β=0.11) was the only factor exerting an independent, statistically significant direct effect on F&V intake. Supermarket availability was significantly associated with distance to food store (β=-0.24) and shopping frequency (β=0.10). Increased supermarket availability was significantly and positively related to perceived healthy food availability in the neighborhood (β=0.18) and ease of shopping access (β=0.09). Collectively considering all model paths linked to perceived availability of healthy foods, this measure was the only other factor to have a significant total effect on F&V intake. Conclusions While the majority of literature to date has suggested an independent and important role of supermarket availability for F&V intake, our study found only indirect effects of supermarket availability and suggests that food shopping frequency and perceptions of healthy food availability are two integral components of a network of influences on F&V intake. PMID:24192274

Most industrial metal forming processes are characterised by a complex strain path history. A change in strain path may have a significant effect on the mechanical response of metals. This paper concentrates on the role of plastic slip anisotropy in the strain path dependency of materials.

A structurally highly simplified, globally integrated coupled climate-economic costs model SIAM (Structural Integrated Assessment Model) is used to compute optimal paths of global CO2 emissions that minimize the net sum of climate damage and mitigation costs, and to study the sensitivity of the computed optimal emission paths. The climate module is represented by a linearized impulse-response model calibrated against a coupled ocean-atmosphere general circulation climate model and a three-dimensional global carbon-cycle model. The cost terms are represented by expressions designed to make the input assumptions explicit. These include the discount rates for mitigation and damage costs, the inertia of the socio-economic system, and the dependence of climate damages on the changes in temperature and the rate of change of temperature. Different assumptions regarding these parameters are believed to cause the marked divergences of existing cost-benefit analyses. The long memory of the climate system implies that very long time horizons of several hundred years need to be considered to optimize CO2 emissions on time scales relevant for a policy of sustainable development. Cost-benefit analyses over shorter time scales of a century or two can lead to dangerous underestimates of the long-term climate impact of increasing greenhouse-gas emissions. To avert a major long-term global warming, CO2 emissions need to be reduced ultimately to very low levels. This may be done slowly but should not be interpreted as providing a time cushion for inaction: the transition becomes more costly the longer the necessary mitigation policies are delayed. However, the long time horizon provides adequate flexibility for later adjustments. Short-term energy conservation alone is insufficient and can be viewed only as a useful measure in support of the necessary long-term transition to carbon-free energy technologies. 46 refs., 9 figs., 2 tabs
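The impulse-response idea, that the temperature path is a convolution of the emission path with a slowly decaying response kernel, can be sketched as follows. The kernel shape, sensitivity constant, and emission scenarios are toy assumptions, not SIAM's calibrated values.

```python
import numpy as np

years = np.arange(2000, 2301)
tau, sens = 400.0, 0.002  # response timescale (yr), degC per GtC: toy values
kernel = np.exp(-np.arange(len(years)) / tau)  # single-exponential impulse response

def temperature(emissions):
    """Temperature path as a discrete convolution of emissions with the kernel."""
    return sens * np.convolve(emissions, kernel)[: len(emissions)]

early = np.where(years < 2050, 10.0, 1.0)  # GtC/yr, mitigation from 2050
late = np.where(years < 2100, 10.0, 1.0)   # same cuts delayed to 2100
peak_early, peak_late = temperature(early).max(), temperature(late).max()
```

Even in this caricature, delaying the same emission cuts by fifty years raises the peak response, which is the qualitative point about delayed mitigation made above.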

Describing the reactions that occur at the glass-water interface and control the development of the altered layer constitutes one of the main scientific challenges impeding existing models from providing accurate radionuclide release estimates. Radionuclide release estimates are a critical component of the safety basis for geologic repositories. The altered layer (i.e., amorphous hydrated surface layer and crystalline reaction products) represents a complex region, both physically and chemically, sandwiched between two distinct boundaries: the pristine glass surface at the innermost interface and the aqueous solution at the outermost interface. Computational models, spanning different length and timescales, are currently being developed to improve our understanding of this complex and dynamic process with the goal of accurately describing the mesoscale changes that occur as the system evolves. These modeling approaches include geochemical simulations (i.e., classical reaction-path simulations and glass reactivity with allowance for the alteration layer simulations), Monte Carlo simulations, and molecular dynamics methods. Discussed in this manuscript are the advances and limitations of each modeling approach, placed in the context of the glass-water reaction, and how collectively these approaches provide insights into the mechanisms that control the formation and evolution of altered layers. New results are presented as examples of each approach. (authors)

We consider a class of analytically solvable models of reaction-diffusion systems. An analytical treatment is possible because the nonlinear reaction term is approximated by a piecewise linear function. As particular examples we choose front and pulse solutions to illustrate the matching procedure in the one-dimensional case.
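A classic member of this class is McKean's piecewise-linear caricature of the Nagumo equation, u_t = u_xx - u + H(u - a) with Heaviside H and threshold a < 1/2, whose traveling front is obtained by matching linear solutions across the threshold and whose speed is known in closed form, c = (1 - 2a)/sqrt(a(1 - a)). The finite-difference check below, with illustrative grid parameters, recovers this speed approximately.

```python
import numpy as np

# McKean's piecewise-linear bistable model: u_t = u_xx - u + H(u - a), a < 1/2.
a, L, nx, dt = 0.3, 100.0, 500, 0.01
dx = L / nx
x = np.linspace(0.0, L, nx)
u = (x < L / 4).astype(float)        # step initial condition

def front_position(u):
    return x[np.argmax(u < 0.5)]     # leftmost point below 1/2

def step(u, nsteps):
    for _ in range(nsteps):
        lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
        lap[0], lap[-1] = lap[1], lap[-2]   # crude no-flux boundaries
        u = u + dt * (lap - u + (u > a))    # (u > a) is the Heaviside term
    return u

u = step(u, 1000)                    # let the front profile settle
p0 = front_position(u)
u = step(u, 4000)
speed = (front_position(u) - p0) / (4000 * dt)
exact = (1 - 2 * a) / np.sqrt(a * (1 - a))  # matching-condition result
```

The piecewise-linear reaction term is what makes the matched analytical solution possible; the simulation merely confirms that the invading u = 1 state advances at the predicted speed.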

In Brazil, one of the most important telecommunications systems is broadcast television. Such relevance demands extensive analysis in pursuit of technical excellence, in order to offer better digital transmission to the user. It is therefore essential to evaluate the quality and strength of the digital TV signal through studies of coverage prediction models, allowing stations to be planned so that their signals are harmoniously distributed. The purpose of this study is to appraise field measurements of the digital television signal and to compare them with numerical results from simulation of the Dominant Path Model. The outcomes indicate possible blocking zones and a low accumulated probability index above the reception threshold, and also characterise the gain level of the receiving antenna, which would prevent signal blocking.

In this chapter, two models based on the constituent rearrangement picture for large-p_T phenomena are summarized. One is the quark-junction model, and the other is the correlating quark rearrangement model. The counting rules of the models apply to both two-body reactions and hadron production. (author)

Objective. We investigated the feasibility of a novel, customizable, simplified EMG-driven musculoskeletal model for estimating coordinated hand and wrist motions during a real-time path tracing task. Approach. A two-degree-of-freedom computational musculoskeletal model was implemented for real-time EMG-driven control of a stick figure hand displayed on a computer screen. After 5-10 minutes of undirected practice, subjects were given three attempts to trace 10 straight paths, one at a time, with the fingertip of the virtual hand. Able-bodied subjects completed the task on two separate test days. Main results. Across subjects and test days, there was a significant linear relationship between log-transformed measures of accuracy and speed (Pearson’s r = 0.25, p bodied subjects in 8 of 10 trials. For able-bodied subjects, tracing accuracy was lower at the extremes of the model’s range of motion, though there was no apparent relationship between tracing accuracy and fingertip location for the amputee. Our result suggests that, unlike able-bodied subjects, the amputee’s motor control patterns were not accustomed to the multi-joint dynamics of the wrist and hand, possibly as a result of post-amputation cortical plasticity, disuse, or sensory deficits. Significance. To our knowledge, our study is one of very few that have demonstrated the real-time simultaneous control of multi-joint movements, especially wrist and finger movements, using an EMG-driven musculoskeletal model, which differs from the many data-driven algorithms that dominate the literature on EMG-driven prosthesis control. Real-time control was achieved with very little training and simple, quick (~15 s) calibration. Thus, our model is potentially a practical and effective control platform for multifunctional myoelectric prostheses that could restore more life-like hand function for individuals with upper limb amputation.

Path dependency is defined, and three specific concepts of path dependency – cumulative causation, lock-in, and hysteresis – are analyzed. The relationships between path dependency and equilibrium, and between path dependency and fundamental uncertainty, are also discussed. Finally, a typology of dynamical systems is developed to clarify these relationships.

...) simulations to determine rotational motion of the spacecraft. The main objective of this study was to assess the reaction control system models and their effects on the atmospheric flight of Odyssey...

This paper presents a linear, asymptotic stability analysis for a reaction-diffusion-convection system modeling atherogenesis, the initiation of atherosclerosis, as an inflammatory instability. Motivated by the disease paradigm articulated by Ross

The application of the hierarchy model of nuclear reactions is discussed; in the hierarchy model, the compound nucleus state is formed after several steps, or at least one step, of reaction. This model was applied to the analysis of the observed cross sections of 235U and some other elements. Neglecting the exchange scattering effect, equations for the total neutron cross section of 235U were obtained. One of these equations describes explicitly the hierarchy of the transition from the intermediate reaction state Xm into the compound nucleus state Xs, and another describes the cross section averaged over an energy interval larger than the average level spacing of the compound nucleus eigenvalues. The hierarchy of the reaction mechanism was investigated in more detail, and the hierarchy model was applied to the unresolved energy region. No attempt was made to evaluate the strength function in the mass region (A>140), since the effect of nuclear deformation was neglected in this work. (Iwase, T.)

To help minimize risk of high sinkage and slippage during drives and to better understand soil properties and rover terramechanics from drive data, a multidisciplinary team was formed under the Mars Exploration Rover (MER) project to develop and utilize dynamic computer-based models for rover drives over realistic terrains. The resulting tool, named ARTEMIS (Adams-based Rover Terramechanics and Mobility Interaction Simulator), consists of the dynamic model, a library of terramechanics subroutines, and the high-resolution digital elevation maps of the Mars surface. A 200-element model of the rovers was developed and validated for drop tests before launch, using MSC-Adams dynamic modeling software. Newly modeled terrain-rover interactions include the rut-formation effect of deformable soils, using the classical Bekker-Wong implementation of compaction resistances and bull-dozing effects. The paper presents the details and implementation of the model with two case studies based on actual MER telemetry data. In its final form, ARTEMIS will be used in a predictive manner to assess terrain navigability and will become part of the overall effort in path planning and navigation for both Martian and lunar rovers.
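The Bekker-Wong ingredients mentioned above can be sketched in one common textbook form: the pressure-sinkage relation p = (k_c/b + k_phi) z^n yields a static sinkage for a rigid wheel and, from it, a compaction resistance. The soil parameters below are illustrative dry-sand-like values, not those used in ARTEMIS.

```python
import math

def bekker_sinkage(W, b, D, n, k_c, k_phi):
    """Static sinkage of a rigid wheel (one common form of Bekker's formula).
    W: wheel load (N), b: wheel width (m), D: wheel diameter (m)."""
    k = k_c / b + k_phi
    return (3 * W / ((3 - n) * b * k * math.sqrt(D))) ** (2 / (2 * n + 1))

def compaction_resistance(b, n, k_c, k_phi, z0):
    """Work per unit travel spent compacting soil to sinkage depth z0 (N)."""
    k = k_c / b + k_phi
    return b * k * z0 ** (n + 1) / (n + 1)

# Illustrative dry-sand-like parameters (SI units); wheel load is hypothetical
z0 = bekker_sinkage(W=400.0, b=0.16, D=0.25, n=1.1,
                    k_c=0.99e3, k_phi=1528.43e3)
Rc = compaction_resistance(0.16, 1.1, 0.99e3, 1528.43e3, z0)
```

Terms of this kind, evaluated wheel by wheel along the rut, are the kind of resistive forces a terramechanics subroutine library feeds back into the multibody dynamics model.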

We describe a novel formalism representing a system of chemical reactions, with imprecise rates of reactions and concentrations of chemicals, and describe a model reduction method, pruning, based on the chemical properties. We present two algorithms, midpoint approximation and interval approximation, for construction of efficient model abstractions with uncertainty in data. We evaluate computational feasibility by posing queries in computation tree logic (CTL) on a prototype of extracellular-signal-regulated kinase (ERK) pathway.
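The contrast between midpoint and interval approximation can be sketched on the simplest imprecise system, a single decay reaction whose rate constant is known only as an interval; the numbers are illustrative and not from the ERK pathway model.

```python
# d[A]/dt = -k[A] with k known only as an interval [k_lo, k_hi].
k_lo, k_hi = 0.9, 1.1
dt, steps = 0.01, 100

a_lo = a_hi = 1.0            # interval bounds on [A]
a_mid = 1.0                  # midpoint approximation
k_mid = (k_lo + k_hi) / 2
for _ in range(steps):
    # Decay is monotone in k: the fastest decay gives the lower bound
    # on [A], the slowest decay gives the upper bound.
    a_lo, a_hi = a_lo * (1 - k_hi * dt), a_hi * (1 - k_lo * dt)
    a_mid *= 1 - k_mid * dt
```

The midpoint trajectory always lies inside the interval envelope here because the dynamics are monotone in k; for non-monotone networks, interval propagation must bound over both endpoints at each step, which is where the abstraction pays off when checking CTL queries against uncertain data.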

The formulations used for precompound decay models are presented and explained in terms of the physics of the intranuclear cascade model. Several features of spectra of medium-energy (10--1000 MeV) reactions are summarized. Results of precompound-plus-evaporation calculations from the code ALICE are compared with a wide body of proton-, alpha-, and heavy-ion-induced reaction data to illustrate both the power and the deficiencies of predicting the yields of these reactions in the medium-energy regime. 23 refs., 13 figs

The search for materials that enhance the oxygen reduction reaction (ORR) rate is a highly relevant topic due to its implications for fuel cell devices. Herein, the ORR on bimetallic electrocatalysts based on Au-M (M = Pt, Pd) has been studied computationally by performing density functional theory calculations. Bimetallic (1 0 0) electrode surfaces with two different Au:M ratios were proposed, and two possible pathways, associative and dissociative, were considered for the ORR. Changes in the electronic properties of these materials with respect to the pure metals were examined to gain understanding of the overall reactivity trend. The effect of the bimetallic junction on the stability of the intermediates O2 and OOH was also evaluated by means of geometrical and energetic parameters; the intermediates are preferentially adsorbed on Pt/Pd atoms, but in some cases present higher adsorption energies than on the bare metals. Finally, the kinetics of the O-O bond breaking in the adsorbed O2* and OOH* intermediates on the bimetallic materials, and the influence of the Au-M junction, were studied by means of the nudged elastic band method. A barrierless process for the scission of O2* was found in Au-M for the higher M ratios. Surprisingly, for Au-M with lower M ratios, the barriers were much lower than for pure Au surfaces, suggesting a highly reactive surface towards the ORR. The O-O scission of OOH* was found to be a barrierless process in Au-Pt systems and nearly barrierless in all Au-Pd systems, implying that the reduction of O2 in these systems proceeds via the full reduction of O2 to H2O, avoiding H2O2 formation.

A hard real-time system, such as a fly-by-wire system, fails catastrophically (e.g., by losing stability) if its control inputs are not updated by its digital controller computer within a certain timing constraint called the hard deadline. To assess and validate such systems' reliabilities using a semi-Markov model that explicitly contains the deadline information, we propose a path-space approach deriving upper and lower bounds on the probability of system failure. These bounds are derived using only simple parameters, and they are especially suitable for highly reliable systems that must recover quickly. Analytical bounds are derived for the commonly encountered exponential and Weibull failure distributions, and they have proven effective through numerical examples, while considering three repair strategies: repair-as-good-as-new, repair-as-good-as-old, and repair-better-than-old

Multigrid algorithms are presented which, in addition to eliminating the critical slowing down, can also eliminate the "volume factor". The elimination of the volume factor removes the need to produce many independent fine-grid configurations for averaging out their statistical deviations, by averaging over the many samples produced on coarse grids during the multigrid cycle. Thermodynamic limits of observables can be calculated to relative accuracy ε_r in just O(ε_r^-2) computer operations, where ε_r is the error relative to the standard deviation of the observable. In this paper, we describe in detail the calculation of the susceptibility in the one-dimensional massive Gaussian model, which is also a simple example of path integrals. Numerical experiments show that the susceptibility can be calculated to relative accuracy ε_r in about 8 ε_r^-2 random number generations, independent of the mass size

Perturbation of natural subsurface systems by fluid inputs may induce geochemical or microbiological reactions that change porosity and permeability, leading to complex coupled feedbacks between reaction and transport processes. Some examples are precipitation/dissolution processes associated with carbon capture and storage and biofilm growth associated with contaminant transport and remediation. We study biofilm growth due to mixing controlled reaction of multiple substrates. As biofilms grow, pore clogging occurs which alters pore-scale flow paths thus changing the mixing and reaction. These interactions are challenging to quantify using conventional continuum-scale porosity-permeability relations. Pore-scale models can accurately resolve coupled reaction, biofilm growth and transport processes, but modeling at this scale is not feasible for practical applications. There are two approaches to address this challenge. Results from pore-scale models in generic pore structures can be used to develop empirical relations between porosity and continuum-scale parameters, such as permeability and dispersion coefficients. The other approach is to develop a multiscale model of biofilm growth in which non-overlapping regions at pore and continuum spatial scales are coupled by a suitable method that ensures continuity of flux across the interface. Thus, regions of high reactivity where flow alteration occurs are resolved at the pore scale for accuracy while regions of low reactivity are resolved at the continuum scale for efficiency. This approach thus avoids the need for empirical upscaling relations in regions with strong feedbacks between reaction and porosity change. We explore and compare these approaches for several two-dimensional cases.

An ultraviolet (UV) signal undergoes rich scattering and strong absorption by atmospheric particulates during transmission. We develop a path loss model for a Non-Line-of-Sight (NLOS) link. The model is built upon probability theory governing the random migration of photons in free space, undergoing scattering, in terms of angular direction and distance. The model analytically captures the contributions of different scattering orders; it thus relaxes the assumptions of single-scattering theory and provides more realistic results. This allows us to assess the importance of high-order scattering, such as in a thick atmospheric environment, where short-range NLOS UV communication is enhanced by hazy or foggy weather. Simulation shows that the model coincides with a previously developed Monte Carlo model. Additional numerical examples are presented to demonstrate the effects of link geometry and atmospheric conditions. The results indicate the inherent tradeoffs in beamwidth, pointing angles, range, absorption, and scattering, and so are valuable for NLOS communication system design.

The basic features of low- to intermediate-energy nucleon-induced reactions are discussed within the contexts of the optical model, the statistical model, and preequilibrium and intranuclear cascade models. The calculation of cross sections and other scattering quantities is described. (author)

Nuclear quantum effects (NQE), which include both zero-point motion and tunneling, exhibit quite an impressive range of influence over the equilibrium and dynamical properties of molecules and materials. In this work, we extend our recently proposed perturbed path-integral (PPI) approach for modeling NQE in molecular systems [I. Poltavsky and A. Tkatchenko, Chem. Sci. 7, 1368 (2016)], which successfully combines the advantages of thermodynamic perturbation theory with path-integral molecular dynamics (PIMD), in a number of important directions. First, we demonstrate the accuracy, performance, and general applicability of the PPI approach to both molecules and extended (condensed-phase) materials. Second, we derive a series of estimators within the PPI approach to enable calculations of structural properties such as radial distribution functions (RDFs) that exhibit rapid convergence with respect to the number of beads in the PIMD simulation. Finally, we introduce an effective nuclear temperature formalism within the framework of the PPI approach and demonstrate that such effective temperatures can be an extremely useful tool in quantitatively estimating the "quantumness" associated with different degrees of freedom in the system as well as providing a reliable quantitative assessment of the convergence of PIMD simulations. Since the PPI approach only requires the use of standard second-order imaginary-time PIMD simulations, these developments enable one to include a treatment of NQE in equilibrium thermodynamic properties (such as energies, heat capacities, and RDFs) with the accuracy of higher-order methods but at a fraction of the computational cost, thereby enabling first-principles modeling that simultaneously accounts for the quantum mechanical nature of both electrons and nuclei in large-scale molecules and materials.

With the intensification of global warming and continued growth in energy consumption, China faces increasing pressure to cut its CO2 (carbon dioxide) emissions. This paper discusses the driving forces influencing China's CO2 emissions based on the Path-STIRPAT model, a method combining path analysis with the STIRPAT (stochastic impacts by regression on population, affluence and technology) model. The analysis shows that GDP per capita (A), industrial structure (IS), population (P), urbanization level (R) and technology level (T) are the main factors influencing China's CO2 emissions, and that they exert their influence interactively and collaboratively. Ranked by the size of their direct influence on China's CO2 emissions, the factors order as A>T>P>R>IS, while ranked by total influence the order is A>R>P>T>IS. A one percent increase in A, IS, P, R and T leads to a total change of 0.44, 1.58, 1.31, 1.12 and -1.09 percent in CO2 emissions, respectively, of which the direct contributions are 0.45, 0.07, 0.63, 0.08 and 0.92. Improving T is the most important way to reduce CO2 emissions in China. - Highlights: → We analyze the driving forces influencing China's CO2 emissions. → Five macro factors, such as per capita GDP, are the main influencing factors. → These factors exert an influence interactively and collaboratively. → Factors differ in their direct and total influence on China's CO2 emissions. → Improving the technology level is the most important way to reduce CO2 emissions in China.
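
The STIRPAT regression underlying this kind of analysis is an ordinary least-squares fit in log space, ln(I) = a + b·ln(P) + c·ln(A) + d·ln(T). A minimal sketch follows; the data and elasticities are synthetic and illustrative, not the study's values:

```python
import numpy as np

# Sketch of a STIRPAT fit: ln(I) = a + b*ln(P) + c*ln(A) + d*ln(T).
# All data below are synthetic; the "true" elasticities are illustrative.
rng = np.random.default_rng(0)
n = 200
lnP = rng.normal(7.0, 0.3, n)   # log population
lnA = rng.normal(9.0, 0.5, n)   # log GDP per capita (affluence)
lnT = rng.normal(2.0, 0.4, n)   # log technology proxy (e.g. energy intensity)
true = np.array([1.0, 1.3, 0.44, -1.1])          # a, b, c, d (illustrative)
lnI = true[0] + true[1]*lnP + true[2]*lnA + true[3]*lnT + rng.normal(0, 0.01, n)

# OLS estimate of the intercept and the three elasticities
X = np.column_stack([np.ones(n), lnP, lnA, lnT])
coef, *_ = np.linalg.lstsq(X, lnI, rcond=None)
print(np.round(coef, 2))
```

With clean synthetic data the fitted coefficients recover the assumed elasticities; on real national time series, multicollinearity among P, A and T is the usual complication, which is one motivation for combining STIRPAT with path analysis.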

This paper presents key parameters, including line-of-sight (LOS) probability, large-scale path loss, and shadow fading models, for the design of future fifth generation (5G) wireless communication systems in urban macro-cellular (UMa) scenarios, using data obtained from propagation measurements in Austin, US, and Aalborg, Denmark, at 2, 10, 18, and 38 GHz. A comparison of different LOS probability models is performed for the Aalborg environment. Both single-slope and dual-slope omnidirectional path loss models are investigated to analyze and contrast their root-mean-square (RMS) errors.

The reaction furnace is the most important part of the Claus sulfur recovery unit, and its performance has a significant impact on process efficiency. Many reactions occur in the furnace, and their kinetics and mechanisms are not completely understood; modeling the reaction furnace is therefore difficult, and several works have been carried out in this regard so far. Equilibrium models are commonly used to simulate the furnace, but the literature indicates that the furnace outlet is not in equilibrium and that the furnace reactions are controlled by kinetics; therefore, in this study, the reaction furnace is simulated with a kinetic model. The outlet temperature and concentrations predicted by this model are compared with experimental data published in the literature and with data obtained from the PROMAX V2.0 simulator. The results show that the accuracy of the proposed kinetic model and of the PROMAX simulator is almost identical, but the kinetic model used in this paper has two important capabilities. First, it is a distributed model and can be used to obtain the temperature and concentration profiles along the furnace. Second, it is a dynamic model and can be used for analyzing transient behavior and designing the control system.

Mass transport, coupled with chemical reactions, is modelled as a cellular automaton in which solute molecules perform a random walk on a lattice and react according to a local probabilistic rule. Assuming molecular chaos and a smooth density function, we obtain the standard reaction-transport equations in the continuum limit. The model is applied to the reactions a + b ↔ c and a + b → c, where we observe interesting macroscopic effects resulting from microscopic fluctuations and spatial correlations between molecules. We also simulate autocatalytic reaction schemes displaying spontaneous formation of spatial concentration patterns. Finally, we propose a simple model for mineral-solute interaction and discuss its limitations. (author) 5 figs., 20 refs

Depressive symptoms are a common problem among family caregivers of stroke survivors. The purpose of this study was to examine the association between the care recipient's impairment and caregiver depression, and to determine the possible mediating effects of caregiver negative problem orientation, mastery, and leisure time satisfaction. The evaluated model was derived from Pearlin's stress process model of caregiver adjustment. We analyzed baseline data from 122 strained family members who had been assisting stroke survivors in Germany for a minimum of 6 months and who consented to participate in a randomized clinical trial. Depressive symptoms were measured with the Center for Epidemiological Studies Depression Scale. The cross-sectional data were analyzed using path analysis. The results show an adequate fit of the model to the data, χ2(1, N = 122) = 0.17, p = .68, with a comparative fit index of 1.00 and an acceptable root mean square error of approximation. Results indicate that caregivers at risk for depression reported a negative problem orientation, low caregiving mastery, and low leisure time satisfaction. Depressive symptoms are particularly affected by the frequency of the stroke survivor's problematic behavior and by the degree of impairment in activities of daily living. The findings provide empirical support for Pearlin's stress model and emphasize how important it is to target these mediators in health promotion interventions for family caregivers of stroke survivors. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

A simple kinematic model based on the concept of an orientation-dependent critical configuration for reaction is introduced and applied. The model serves two complementary purposes. In the predictive mode the model provides an easily implemented procedure for computing the reactivity of oriented reagents (including those actually amenable to measurement) from a given potential energy surface. The predictions of the model are compared against classical trajectory results for the H + D2 reaction. Using realistic potential energy surfaces, the model is applied to the Li + HF and O + HCl reactions, where the HX molecules are pumped by a polarized laser. A given classical trajectory is deemed reactive or not according to whether it can surmount the barrier at that particular orientation. The essential difference from the model of Levine and Bernstein is that the averaging over initial conditions is performed using a Monte Carlo integration. One can therefore use the correct orientation-dependent shape (and not only height) of the barrier to reaction and, furthermore, use oriented or aligned reagents. Since the only numerical step is a Monte Carlo sampling of initial conditions, very many trajectories can be run. This suffices to determine the reaction cross section for different initial conditions. To probe the products, the kinematic approach of Elsum and Gordon is employed. The result is a model in which, under varying initial conditions, examining final-state distributions or screening different potential energy surfaces can be carried out efficiently.
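
The Monte Carlo step at the heart of such a model can be sketched in a few lines: sample initial conditions, then deem each trajectory reactive if its energy exceeds the orientation-dependent barrier. The barrier function, the isotropic orientation sampling, and the energy range below are illustrative stand-ins, not the Li + HF or O + HCl surfaces:

```python
import numpy as np

# Sketch of the orientation-dependent-barrier Monte Carlo step.
# The barrier form and all energies are illustrative assumptions.
rng = np.random.default_rng(3)
n_traj = 100_000
cos_gamma = rng.uniform(-1.0, 1.0, n_traj)   # attack angle, isotropic sampling
E_coll = rng.uniform(0.0, 2.0, n_traj)       # collision energy [eV]

def barrier(cg):
    """Assumed barrier height: lowest for collinear attack (cos(gamma) = 1)."""
    return 0.3 + 0.7 * (1.0 - cg)            # [eV]

# A trajectory is reactive iff it can surmount the barrier at its orientation.
reactive = E_coll > barrier(cos_gamma)
p_react = reactive.mean()                    # reaction probability estimate
print(round(p_react, 3))
```

Restricting `cos_gamma` to a narrow range mimics oriented or aligned reagents, which is exactly the advantage the abstract claims over orientation-averaged treatments.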

We propose a kinetic mechanism of electrochemical interactions. We assume fast formation and recombination of electron donors D- and acceptors A+ on electrode surfaces. These mediators are continuously formed in the electrode matter by thermal fluctuations. The mediators D- and A+, chemically equivalent to the electrode metal, enter electrochemical interactions on the electrode surfaces. Electrochemical dynamics and current-voltage characteristics of a selected electrochemical system are studied. Our results are in good qualitative agreement with those given by the classical Butler-Volmer kinetics. The proposed model can be used to study fast electrochemical processes in microsystems and nanosystems that are often out of the thermal equilibrium. Moreover, the kinetic mechanism operates only with the surface concentrations of chemical reactants and local electric potentials, which facilitates the study of electrochemical systems with indefinable bulk.
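
For reference, the classical Butler-Volmer current-voltage relation against which the proposed kinetics are compared can be evaluated directly; the exchange current density and transfer coefficient below are illustrative choices, not fitted values:

```python
import math

# Classical Butler-Volmer relation:
# i = i0 * (exp(alpha*f*eta) - exp(-(1-alpha)*f*eta)), with f = F/(R*T).
F, R, T = 96485.0, 8.314, 298.15   # Faraday constant, gas constant, temperature
f = F / (R * T)
i0, alpha = 1e-3, 0.5              # exchange current density [A/cm^2], transfer coeff.

def butler_volmer(eta):
    """Net current density at overpotential eta [V]."""
    return i0 * (math.exp(alpha * f * eta) - math.exp(-(1 - alpha) * f * eta))

for eta in (-0.1, 0.0, 0.1):
    print(f"eta = {eta:+.2f} V -> i = {butler_volmer(eta):+.4e} A/cm^2")
```

The symmetric case alpha = 0.5 gives an antisymmetric current-overpotential curve, the qualitative shape the abstract says the donor/acceptor mediator kinetics reproduce.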

One of the central issues of hadron physics is how to interpret the properties and origin of the nuclear force. The nuclear force is in principle a manifestation of the dynamics of quarks and gluons, but no attempt has yet succeeded in describing it using QCD, the fundamental theory of the strong interactions. Phenomena related to chiral symmetry and its spontaneous breaking are among the important phenomena for the understanding of hadron physics. The Nambu-Jona-Lasinio (NJL) model is one of the quark models used to explain phenomena concerning chiral symmetry. Although the method of deducing the Lagrangian describing mesons by applying the path integral to the NJL model is well known as bosonization, it has been difficult to extend it to baryons because baryons are three-body systems. In this paper, a method is reported to deduce a Lagrangian which describes baryons and mesons from a quark-diquark Lagrangian by assuming that baryons are bound states of a quark and a diquark. (S. Funahashi)

In the phase-field description of brittle fracture, the fracture-surface area can be expressed as a functional of the phase field (or damage field). In this work we study the applicability of this explicit expression as a (non-linear) path-following constraint to robustly track the equilibrium path.

The general path model (GPM) is one approach for performing degradation-based, or Type III, prognostics. The GPM fits a parametric function to the collected observations of a prognostic parameter and extrapolates the fit to a failure threshold. This approach has been successfully applied to a variety of systems when a sufficient number of prognostic parameter observations are available. However, the parametric fit can suffer significantly when few data are available or the data are very noisy. In these instances, it is beneficial to include additional information to make the fit conform to a prior belief about the evolution of system degradation. Bayesian statistical approaches have been proposed to include prior information in the form of distributions of expected model parameters. This requires a number of run-to-failure cases with tracked prognostic parameters; such data may not be readily available for many systems. Reliability information and stressor-based (Type I and Type II, respectively) prognostic estimates can provide the necessary prior belief for the GPM. This article presents the Bayesian updating framework to include prior information in the GPM and compares the efficacy of including different information sources on two data sets.
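
The core GPM procedure (fit a parametric curve, then extrapolate it to the failure threshold) can be sketched in a few lines. The quadratic degradation path, noise level, and threshold below are illustrative assumptions, not taken from the article's data sets:

```python
import numpy as np

# Sketch of the general path model: fit a parametric function to noisy
# observations of a prognostic parameter, then extrapolate the fit to a
# failure threshold to estimate the remaining useful life (RUL).
rng = np.random.default_rng(2)
t = np.arange(0.0, 50.0)                      # observation times
true_path = 0.002 * t**2 + 0.01 * t           # assumed degradation path
y = true_path + rng.normal(0, 0.05, t.size)   # noisy prognostic parameter
threshold = 12.0                              # failure threshold (assumed)

coef = np.polyfit(t, y, 2)                    # fit the parametric model
# Solve fitted_path(t) = threshold for the future failure time.
roots = np.roots(coef - np.array([0.0, 0.0, threshold]))
t_fail = max(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > t[-1])
rul = t_fail - t[-1]                          # remaining useful life
print(round(t_fail, 1), round(rul, 1))
```

With few or very noisy observations, `np.polyfit` here would be replaced by a Bayesian regression whose parameter priors encode the Type I/II information, which is the extension the article develops.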

Aging is associated with decreases in muscle mass, strength, and power (sarcopenia) and in bone mineral density (BMD). The aim of this study was to investigate the role of sarcopenia in BMD loss in the elderly using a path model that includes adiposity, inflammation, and malnutrition. Body composition and BMD were measured by dual X-ray absorptiometry in 159 elderly subjects (52 male/107 female; mean age 80.3 yrs). Muscle strength was determined with a dynamometer. Serum albumin and PCR were also assessed. Structural equations examined the effect of sarcopenia (measured by relative skeletal muscle mass, total muscle mass, handgrip, and muscle quality score) on osteoporosis (measured by vertebral and femoral T-scores) in a latent variable model including adiposity (measured by total fat mass, BMI, and gynoid/android fat), inflammation (PCR), and malnutrition (serum albumin). Sarcopenia assumed the role of moderator in the adiposity-osteoporosis relationship: as sarcopenia increases, the adiposity-osteoporosis relationship (β: -0.58) decreases in intensity. Adiposity also influences sarcopenia (β: -0.18). Malnutrition affects the inflammatory and adiposity states (β: +0.61 and β: -0.30, respectively), while not influencing sarcopenia. Thus, adiposity acts as a mediator of the effect of malnutrition on both sarcopenia and osteoporosis: malnutrition decreases adiposity, and decreasing adiposity in turn increases sarcopenia and osteoporosis. This study suggests that, in a group of elderly subjects, sarcopenia affects the link between adiposity and BMD but does not have a purely independent effect on osteoporosis.

Highlights: ► Two numerical codes for the evaluation of halo currents in 3D structures are presented. ► A simplified plasma model provides the input (halo current injected into the FW). ► Two representative test cases of ITER symmetric and asymmetric VDEs have been analyzed. ► The proposed approaches give results in excellent agreement for both cases. -- Abstract: Disruptions represent one of the main concerns for Tokamak operation, especially in view of fusion reactors or experimental test reactors, due to the electro-mechanical loads induced by halo and eddy currents. The development of a predictive tool for estimating the magnitude and spatial distribution of the halo current forces is of paramount importance in order to ensure robust vessel and in-vessel component design. With this aim, two numerical codes (CARIDDI, CAFE) have been developed which calculate the halo current path (resistive distribution) in the passive structures surrounding the plasma. The former is based on an integral formulation of the eddy-current problem particularized to the static case; the latter implements a pair of 3D FEM complementary formulations for the solution of the steady-state current conduction problem. A simplified plasma model provides the input (halo current injected into the first wall). Two representative test cases (ITER symmetric and asymmetric VDEs) have been selected to cross-check the results of the proposed approaches.

The transport and chemical reactions of solutes are modelled as a cellular automaton in which molecules of different species perform a random walk on a regular lattice and react according to a local probabilistic rule. The model describes advection and diffusion in a simple way, and as no restriction is placed on the number of particles at a lattice site, it is also able to describe a wide variety of chemical reactions. Assuming molecular chaos and a smooth density function, we obtain the standard reaction-transport equations in the continuum limit. Simulations on one- and two-dimensional lattices show that the discrete model can be used to approximate the solutions of the continuum equations. We discuss discrepancies which arise from correlations between molecules and how these discrepancies disappear as the continuum limit is approached. Of particular interest are simulations displaying long-time behaviour which depends on long-wavelength statistical fluctuations not accounted for by the standard equations. The model is applied to the reactions a + b ↔ c and a + b → c with homogeneous and inhomogeneous initial conditions as well as to systems subject to autocatalytic reactions and displaying spontaneous formation of spatial concentration patterns. (author) 9 figs., 34 refs

A physicochemical and numerical model for the transient formation of an electric double layer between an electrolyte and a chemically active flat surface is presented, based on a finite element integration of the nonlinear Nernst-Planck-Poisson model including chemical reactions. The model works for symmetric and asymmetric multi-species electrolytes and is not limited to a range of surface potentials. Numerical simulations are presented for the case of a CaCO3 electrolyte solution in contact with a surface with rate-controlled protonation/deprotonation reactions. The surface charge and potential are determined by the surface reactions, and therefore they depend on the bulk solution composition and concentration.

A new procedure closely linking dissociation and exchange reactions in air to the vibrational levels of the diatomic molecules has been implemented in both one- and two-dimensional versions of Direct Simulation Monte Carlo (DSMC) programs. The previous modeling of chemical reactions with DSMC was based on the continuum reaction rates for the various possible reactions. The new method is more closely related to the actual physics of dissociation and is more appropriate to the particle nature of DSMC. Two cases are presented: the relaxation to equilibrium of undissociated air initially at 10,000 K, and the axisymmetric calculation of shuttle forebody heating during reentry at 92.35 km and 7500 m/s. Although reaction rates are not used in determining the dissociations or exchange reactions, the new method produces rates which agree astonishingly well with the published rates derived from experiment. The results for gas properties and surface properties also agree well with the results produced by earlier DSMC models, equilibrium air calculations, and experiment.

The ionospheric phenomena which significantly influenced radio propagation during March 17-19, 2015 are considered in this study. Oblique ionospheric sounding (OIS) data were analyzed for six radio paths. These paths are located in the North Siberia zone of Russia and have lengths ranging from 1000 to 5000 km. The results are the following. The magnetic storm drastically changed the character of radio propagation on all the considered paths: in most cases, reflections from the ionospheric F2-layer were replaced by reflections only from the sporadic Es-layer. The parameters of the motion of the disturbance front were estimated from the OIS data of the paths. The average velocity of the front moving from east to west was about V = 440 m/s. Even moderate growth of riometer absorption within the region of the radio paths resulted in the loss of multihop modes in the signal reflections from sporadic layers. It also resulted in a sharp decrease of signal strength on the paths. Real distance-frequency characteristics (DFC) of the paths were compared to DFC calculated on the basis of the International Reference Ionosphere (IRI) model. On the quiet day of March 15, the real and calculated DFC were similar or coincided in the majority of cases. During the disturbed days of March 17-19, significant differences between the calculated and experimental data were most commonly observed. The most pronounced difference was revealed when estimating the character of OIS signal reflections from Es-layers.

The climate models used in the IPCC AR4 show large differences in monthly mean cloud ice. The most valuable source of information that can be used to potentially constrain the models is global satellite data. For this, the data sets must be long enough to capture the inter-annual variability of Ice Water Path (IWP). PATMOS-x was used together with ISCCP for the annual cycle evaluation in Fig. 7 while ECHAM-5 was used for the correlation with other models in Table 3. A clear distinction between ice categories in satellite retrievals, as desired from a model point of view, is currently impossible. However, long-term satellite data sets may still be used to indicate the climatology of IWP spatial distribution. We evaluated satellite data sets from CloudSat, PATMOS-x, ISCCP, MODIS and MSPPS in terms of monthly mean IWP, to determine which data sets can be used to evaluate the climate models. IWP data from CloudSat cloud profiling radar provides the most advanced data set on clouds. As CloudSat data are too short to evaluate the model data directly, it was mainly used here to evaluate IWP from the other satellite data sets. ISCCP and MSPPS were shown to have comparatively low IWP values. ISCCP shows particularly low values in the tropics, while MSPPS has particularly low values outside the tropics. MODIS and PATMOS-x were in closest agreement with CloudSat in terms of magnitude and spatial distribution, with MODIS being the best of the two. As PATMOS-x extends over more than 25 years and is in fairly close agreement with CloudSat, it was chosen as the reference data set for the model evaluation. In general there are large discrepancies between the individual climate models, and all of the models show problems in reproducing the observed spatial distribution of cloud-ice. Comparisons consistently showed that ECHAM-5 is the GCM from IPCC AR4 closest to satellite observations.

Actinide-doped SRL 165 type glass was reacted in J-13 groundwater at 90°C for times up to 278 days. The reaction was characterized by both solution and solid analyses. The glass was seen to react nonstoichiometrically, with preferred leaching of alkali metals and boron. High resolution electron microscopy revealed the formation of a complex layer structure which became separated from the underlying glass as the reaction progressed. The formation of the layer and its effect on continued glass reaction are discussed with respect to the current model for glass reaction used in the EQ3/6 computer simulation. It is concluded that the layer formed after 278 days is not protective and may eventually become fractured and generate particulates that may be transported by liquid water. 5 refs., 5 figs., 3 tabs

The oscillation susceptibility of the ADMIRE aircraft along the path of longitudinal flight equilibria is analyzed numerically in a general and in a simplified flight model. More precisely, the longitudinal flight equilibria, their stability, and the existence of bifurcations along the path of these equilibria are investigated in both models. Maneuvers and appropriate piloting tasks for the touch-down moment are simulated in both models. The computed results obtained in the two models are compared in order to see whether the movement in the landing phase computed with the simplified model is similar to that computed with the general model. The similarity we find is not a proof of the structural stability of the simplified system, which as far as we know has never been established, but it can increase confidence that the simplified system correctly describes the real phenomenon.

A previously reported mathematical model for the initial chemical reaction fouling of a heated tube is critically examined in the light of the experimental data for which it was developed. A regression analysis of the model with respect to that data shows that the reference point upon which the two adjustable parameters of the model were originally based was well chosen, albeit fortuitously. (author). 3 refs., 2 tabs., 2 figs

The main objective of this study is to identify and develop a comprehensive model which estimates and evaluates the overall relations among the factors that lead to weight gain in children, using structural equation modeling. The models proposed in this study explore the connections among the socioeconomic status of the family, parental feeding practice, and physical activity. Six structural models were tested to identify the direct and indirect relationships between socioeconomic status, parental feeding practice, general level of physical activity, and the weight status of children. Finally, a comprehensive model was devised to show how these factors relate to each other as well as to the body mass index (BMI) of the children simultaneously. Concerning methodology, confirmatory factor analysis (CFA) was applied to reveal the hidden (secondary) effect of socioeconomic factors on feeding practice and ultimately on the weight status of the children, and also to determine the degree of model fit. The comprehensive structural model tested in this study suggested that there are significant direct and indirect relationships among the variables of interest. Moreover, the results suggest that parental feeding practice and physical activity are mediators in the structural model.

The study reported here tests a model that includes several factors thought to contribute to the comprehension of static multimedia learning materials (i.e. background knowledge, working memory, attention to components as measured with eye movement measures). The model examines the effects of working memory capacity, domain specific (biology) and…

A simple attenuation model (SAM) is presented for estimating rain-induced attenuation along an earth-space path. The rain model uses an effective spatial rain distribution which is uniform for low rain rates and which has an exponentially shaped horizontal rain profile for high rain rates. When compared to other models, the SAM performed well in the important region of low percentages of time, and had the lowest percent standard deviation of all percent time values tested.
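
A hedged sketch of this kind of path-integrated estimate is shown below, using the common power-law specific attenuation γ = k·R^α together with an exponential horizontal rain profile. The coefficients, path geometry, and rain-cell scale are illustrative assumptions, not the SAM's published values:

```python
import math

# Sketch of rain attenuation along a path: specific attenuation follows the
# usual power law gamma = k * R**alpha [dB/km], and for high rain rates the
# horizontal rain profile decays exponentially away from the station.
# All parameter values below are illustrative assumptions.
k, alpha = 0.0101, 1.276          # illustrative power-law coefficients
R0 = 50.0                         # point rain rate at the station [mm/h]
L = 8.0                           # horizontal projection of the path [km]
scale = 4.0                       # e-folding length of the rain cell [km]

# Midpoint-rule integration of gamma(x) = k * (R0*exp(-x/scale))**alpha.
n = 1000
dx = L / n
A = sum(k * (R0 * math.exp(-(i + 0.5) * dx / scale)) ** alpha * dx
        for i in range(n))
print(round(A, 2), "dB")
```

For a uniform profile (the low-rain-rate limit of such models), the integral reduces to γ·L; the exponential profile lowers the total attenuation relative to that limit.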

This article discusses the process of creating a mathematical model of a radio-frequency path for IEEE 802.11ah based wireless sensor networks using MATLAB Simulink CAD tools. In addition, it describes the perturbing effects that occur and the determination of the presence of a useful signal in the received mixture.

The major purpose of this study was to create a path analysis model of academic success in a group of university students, which included the variables of academic confidence and psychological capital with a mediator variable--academic coping. 400 undergraduates from Marmara University and Istanbul Commerce University who were in sophomore, junior…

This paper describes the framework of a real-time simulation system to model human behavior and reactions in dangerous environments. The system utilizes the latest 3D computer animation techniques, combined with artificial intelligence, robotics and psychology, to model human behavior, reactions and decision making under expected/unexpected dangers in real-time in virtual environments. The development of the system includes: classification on the conscious/subconscious behaviors and reactions of different people; capturing different motion postures by the Eagle Digital System; establishing 3D character animation models; establishing 3D models for the scene; planning the scenario and the contents; and programming within Virtools Dev. Programming within Virtools Dev is subdivided into modeling dangerous events, modeling character's perceptions, modeling character's decision making, modeling character's movements, modeling character's interaction with environment and setting up the virtual cameras. The real-time simulation of human reactions in hazardous environments is invaluable in military defense, fire escape, rescue operation planning, traffic safety studies, and safety planning in chemical factories, the design of buildings, airplanes, ships and trains. Currently, human motion modeling can be realized through established technology, whereas to integrate perception and intelligence into virtual human's motion is still a huge undertaking. The challenges here are the synchronization of motion and intelligence, the accurate modeling of human's vision, smell, touch and hearing, the diversity and effects of emotion and personality in decision making. There are three types of software platforms which could be employed to realize the motion and intelligence within one system, and their advantages and disadvantages are discussed.

The slime mold Dictyostelium discoideum is one of the model systems of biological pattern formation. One of the most successful answers to the challenge of establishing a spiral wave pattern in a colony of homogeneously distributed D. discoideum cells has been the suggestion of a developmental path the cells follow (Lauzeral and coworkers). This is a well-defined change in properties each cell undergoes on a longer time scale than the typical dynamics of the cell. Here we show that this concept leads to an inhomogeneous and systematic spatial distribution of spiral waves, which can be predicted from the distribution of cells on the developmental path. We propose specific experiments for checking whether such systematics are also found in data and thus, indirectly, provide evidence of a developmental path.

In recent years, job stress has been cited as a risk factor for some diseases. Given the importance of this subject, we established a new model for classifying job stress among Iranian male staff using path analysis. This cross-sectional study was carried out on male staff in Tehran, Iran, in 2013. The participants were selected using a proportional stratum sampling method. The tools used included nine questionnaires (1- the HSE questionnaire; 2- the GHQ questionnaire; 3- the Beck depression inventory; 4- the Framingham personality type questionnaire; 5- Azad-Fesharaki's physical activity questionnaire; 6- the adult attachment style questionnaire; 7- the Azad socioeconomic questionnaire; 8- the job satisfaction survey; and 9- a demographic questionnaire). A total of 575 individuals (all male) were recruited for the study. Their mean (±SD) age was 33.49 (±8.9) years and their mean job experience was 12.79 (±8.98) years. The path model of job stress among Iranian male staff showed an adequate fit (RMSEA=0.021, GFI=0.99, AGFI=0.97, P=0.136). In addition, the total effects of variables such as personality type (β=0.283), job satisfaction (β=0.287), and age (β=0.108) showed a positive relationship with job stress, while variables such as general health (β=-0.151) and depression (β=-0.242) showed the reverse effect on job stress. According to the results of this study, we conclude that the suggested model is suited to explaining the pathways of stress among Iranian male staff.

The goal of this study is to propose a model for the development of employees' learning (career path) in industrial enterprises. A descriptive survey method along with field research was used. The statistical population consists of all 110 employees of Dana Baspar's enterprises. A questionnaire whose content was verified by 30 experts, together with the supervisor and the advisor, was employed. The reliability of the questionnaire, estimated by Cronbach's alpha, was 70%, indicating a good level of stability. Descriptive statistics for demographic data (frequency, standard deviation and mean) and inferential statistics were used. The research findings show that the influence on organizational productivity of workshop and experimental skills gained through apprenticeship, of training through meetings and seminars with experts, and of experience acquired during work is significant and positive when the mediating variable of professional skills is present, while the influence of classic academic training on organizational productivity is not positive and significant even considering the mediating variables.

Recreation satisfaction is a complex psychological construct that is difficult to define and measure. Recent approaches suggest that overall satisfaction may be a function of multiple satisfactions derived from specific elements of a recreation experience, such as the situational characteristics of a recreation setting or activity and the recreationist's subjective evaluations of the experience. In this paper, a path model of whitewater boating satisfaction was tested using data from a survey of 1210 commercial and 111 private boaters on the Cheat River of West Virginia. The path model included the direct and mediating effects of situational variables and the subjective evaluations of boaters and explained 52% and 54% of the variation in satisfaction of commercial and private boaters, respectively. Factors related to the satisfaction of both groups included a composite variable representing opportunities for challenge, excitement, and skill testing on the river trip; water flow levels; and crowding perceptions. In combination, water flow level and boaters' perceptions of opportunities to experience challenge, excitement, and test boating skills were the most important variables for explaining satisfaction of both groups. Additional factors affecting commercial, but not private, boater satisfaction included the motive of escaping the usual demands of life and a social interaction variable. Among private boaters, perceptions of the environmental conditions also contributed to overall satisfaction. The results support the multiple satisfaction approach of previous research. River management implications are discussed. KEY WORDS: Whitewater; River recreation; Satisfaction

We characterize the nature of the time dispersion in scintillation detectors caused by path length differences of the scintillation photons as they travel from their generation point to the photodetector. Using Monte Carlo simulation, we find that the initial portion of the distribution (which is the only portion that affects the timing resolution) can usually be modeled by an exponential decay. The peak amplitude and decay time depend on the geometry of the crystal, the position within the crystal where the scintillation light originates, and the surface finish. In a rectangular parallelepiped LSO crystal with 3 mm × 3 mm cross section and polished surfaces, the decay time ranges from 10 ps (for interactions 1 mm from the photodetector) up to 80 ps (for interactions 50 mm from the photodetector). Over that same range of distances, the peak amplitude ranges from 100% (defined as the peak amplitude for interactions 1 mm from the photodetector) down to 4% for interactions 50 mm from the photodetector. Higher values for the decay time are obtained for rough surfaces, but the exact value depends on the simulation details. Estimates for the decay time and peak amplitude can be made for different cross section sizes via simple scaling arguments.
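As a rough illustration of this exponential time-dispersion model, the sketch below interpolates the decay time between the two quoted endpoints (10 ps at 1 mm, 80 ps at 50 mm). The log-linear interpolation form and the function names are our own assumptions for illustration; the actual depth dependence comes from the Monte Carlo simulation.

```python
import math

def decay_time_ps(d_mm, tau_near=10.0, tau_far=80.0, d_near=1.0, d_far=50.0):
    """Illustrative log-linear interpolation of the exponential decay time
    between the two quoted endpoints (10 ps at 1 mm, 80 ps at 50 mm, for a
    polished 3 mm x 3 mm LSO crystal). The interpolation form is assumed."""
    frac = (math.log(d_mm) - math.log(d_near)) / (math.log(d_far) - math.log(d_near))
    return tau_near * (tau_far / tau_near) ** frac

def arrival_density(t_ps, d_mm, peak=1.0):
    """Initial portion of the photon arrival-time distribution, modeled as a
    single exponential decay with depth-dependent decay time."""
    return peak * math.exp(-t_ps / decay_time_ps(d_mm))
```

The interpolation reproduces the two endpoints exactly and decays monotonically in time for any fixed interaction depth.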

In this paper the models of pulled Dyck paths in Janse van Rensburg (2010 J. Phys. A: Math. Theor. 43 215001) are generalized to pulled Motzkin path models. The generating functions of pulled Motzkin paths are determined in terms of series over trinomial coefficients and the elastic response of a Motzkin path pulled at its endpoint (see Orlandini and Whittington (2004 J. Phys. A: Math. Gen. 37 5305-14)) is shown to be R(f) = 0 for forces pushing the endpoint toward the adsorbing line and R(f) = f(1 + 2cosh f)/(2sinh f) → f as f → ∞, for forces pulling the path away from the X-axis. In addition, the elastic response of a Motzkin path pulled at its midpoint is shown to be R(f) = 0 for forces pushing the midpoint toward the adsorbing line and R(f) = f(1 + 2cosh (f/2))/sinh (f/2) → 2f as f → ∞, for forces pulling the path away from the X-axis. Formal combinatorial identities arising from pulled Motzkin path models are also presented. These identities are the generalization of combinatorial identities obtained in directed path models to their natural trinomial counterparts.
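The two elastic-response formulas quoted in the abstract are easy to check numerically; this short sketch evaluates them and confirms the stated large-force limits R(f) → f (endpoint) and R(f) → 2f (midpoint). The function names are ours.

```python
import math

def R_endpoint(f):
    """Elastic response of a Motzkin path pulled at its endpoint (f > 0):
    R(f) = f (1 + 2 cosh f) / (2 sinh f)."""
    return f * (1 + 2 * math.cosh(f)) / (2 * math.sinh(f))

def R_midpoint(f):
    """Elastic response of a Motzkin path pulled at its midpoint (f > 0):
    R(f) = f (1 + 2 cosh(f/2)) / sinh(f/2)."""
    return f * (1 + 2 * math.cosh(f / 2)) / math.sinh(f / 2)
```

For large f the hyperbolic ratios tend to 1 and 2 respectively, recovering the asymptotes quoted in the abstract.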

The Percentage of Enrollment in Physics (PEP) at the secondary level nationally has been approximately 20% for the past few decades. For a more scientifically literate citizenry, as well as specialists to continue scientific research and development, it is desirable that more students enroll in physics. Predictor variables for physics enrollment and physics achievement that have been identified previously include a community's socioeconomic status, the availability of physics, the sex of the student, the curriculum, as well as teacher and student data. This study isolated and identified predictor variables for the PEP of secondary schools in New York. Data gathered by the State Education Department for the 1990-1991 school year were used. The sources of these data included surveys completed by teachers and administrators on student characteristics and school facilities. A data analysis similar to that done by Bryant (1974) was conducted to determine whether the relationships between a set of predictor variables related to physics enrollment had changed in the past 20 years. Variables which were isolated included: community, facilities, teacher experience, number and type of science courses, school size and school science facilities. When these variables were isolated, latent variable path diagrams were proposed and verified with the Linear Structural Relations computer modeling program (LISREL). These diagrams differed from those developed by Bryant in that more manifest variables were used, including achievement scores in the form of Regents exam results. Two criterion variables were used: the percentage of students enrolled in physics (PEP) and the percentage of enrolled students passing the Regents physics exam (PPP). The first model treated school and community level variables as exogenous, while the second model treated only the community level variables as exogenous. The goodness-of-fit indices were 0.77 for the first model and 0.83 for the second.

To account for turbulent temperature and species-concentration fluctuations, a model of their effects on chemical reaction rates in computer analyses of turbulent reacting flows is presented. The model results in two parameters which multiply the terms in the reaction-rate equations. For these two parameters, graphs are presented as functions of the mean values and the intensity of the turbulent fluctuations of the temperature and species concentrations. These graphs will facilitate incorporation of the model into existing computer programs which describe turbulent reacting flows. When the model was used in a two-dimensional parabolic-flow computer code to predict the behavior of an experimental supersonic hydrogen jet burning in air, some improvement in agreement with the experimental data was obtained in the far field in the region near the jet centerline. Recommendations are included for further improvement of the model and for additional comparisons with experimental data.

The shell model/R-matrix technique of calculating nuclear reaction cross sections for light projectiles incident on light nuclei is discussed, particularly in the application of the technique to thermonuclear reactions. Details are presented on the computational methods for the shell model which display how easily the calculations can be performed. Results of the shell model/R-matrix technique are discussed as are some of the problems encountered in picking an appropriate nucleon-nucleon interaction for the large model spaces which must be used for current problems. The status of our work on developing an effective nucleon-nucleon interaction for use in large-basis shell model calculations is presented. This new interaction is based on a combination of global constraints and microscopic nuclear data. 23 refs., 6 figs., 2 tabs

Sheet metal forming involves large strains and severe strain-path changes. In many metals, large plastic strains lead to the development of persistent dislocation structures resulting in strong flow anisotropy. This induced anisotropic behavior manifests itself, in the case of a strain-path change, through very different stress-strain responses depending on the type of the strain-path change. While many metals exhibit a drop of the yield stress (Bauschinger effect) after a load reversal, some metals show an increase of the yield stress after an orthogonal strain-path change (so-called cross hardening). To model the Bauschinger effect, kinematic hardening has been used successfully for years. However, kinematic hardening automatically produces a drop of the yield stress after an orthogonal strain-path change, contradicting tests that exhibit the cross hardening effect. Another effect not accounted for in classical elasto-plasticity is the difference between tensile and compressive strength exhibited, e.g., by some steel materials. In this work we present a phenomenological material model whose structure is motivated by polycrystalline modeling and that takes into account the evolution of polarized dislocation structures on the grain level, the main cause of the induced flow anisotropy on the macroscopic level. Besides the movement of the yield surface and its proportional expansion, as in conventional plasticity, the model also considers changes of the yield surface shape (distortional hardening) and accounts for the pressure dependence of the flow stress. All these additional attributes turn out to be essential for modeling the stress-strain response of dual-phase high-strength steels subjected to non-proportional loading.

The characteristics of a very low frequency (VLF) signal depend on the solar illumination across the propagation path. For a long path, the solar zenith angle varies widely over the path, and this has a significant influence on the propagation characteristics. To study the effect, the Indian Centre for Space Physics participated in the 27th and 35th Scientific Expeditions to Antarctica. VLF signals transmitted from the transmitters VTX (18.2 kHz), Vijayanarayanam, India, and NWC (19.8 kHz), North West Cape, Australia, were recorded simultaneously at the Indian permanent stations Maitri and Bharati, with respective geographic coordinates 70.75°S, 11.67°E and 69.4°S, 76.17°E. A very stable diurnal variation of the signal was obtained at both stations. We reproduced the variations of the VLF signal using a solar zenith angle model coupled with the Long Wavelength Propagation Capability (LWPC) code. We divided the whole path into several segments and computed the solar zenith angle (χ) profile. We assumed a linear relationship between Wait's exponential model parameters, the effective reflection height (h') and the steepness parameter (β), and the solar zenith angle. The h' and β values were then used in the LWPC code to obtain the VLF signal amplitude at a particular time. The same procedure was repeated to obtain the whole-day signal. The nature of the whole-day signal variation from the theoretical modeling is found to match our observations to some extent.
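A minimal sketch of the first two steps of the modeling chain described above: computing the solar zenith angle for a path segment, then mapping it linearly to Wait's parameters h' and β. The linear coefficients below are placeholders, not the values fitted in the study, and the LWPC propagation step itself is omitted.

```python
import math

def solar_zenith_deg(lat_deg, lon_deg, decl_deg, hour_utc):
    """Solar zenith angle chi from latitude, solar declination, and hour
    angle (standard spherical-astronomy formula; refraction ignored)."""
    h = math.radians(15.0 * (hour_utc - 12.0) + lon_deg)  # local hour angle
    lat, decl = math.radians(lat_deg), math.radians(decl_deg)
    cos_chi = (math.sin(lat) * math.sin(decl)
               + math.cos(lat) * math.cos(decl) * math.cos(h))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_chi))))

def wait_parameters(chi_deg, h0=71.0, h1=-0.1, b0=0.3, b1=0.0005):
    """Wait's effective reflection height h' (km) and steepness beta (1/km),
    assumed linear in the solar zenith angle chi, as in the abstract.
    The coefficients here are illustrative placeholders only."""
    elev = 90.0 - chi_deg  # solar elevation above the horizon
    return h0 + h1 * elev, b0 + b1 * elev
```

In the study, such h' and β values would be fed per path segment into the LWPC code to compute the signal amplitude at each time of day.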

The traditional grid/cell-based wavefront expansion algorithms, such as the shortest path algorithm, can only find the first arrivals or multiply reflected (or mode converted) waves transmitted from subsurface interfaces, but cannot calculate the other later reflections/conversions having a minimax time path. In order to overcome the above limitations, we introduce the concept of a stationary minimax time path of Fermat's Principle into the multistage irregular shortest path method. Here we extend it from Cartesian coordinates for a flat earth model to global ray tracing of multiple phases in a 3-D complex spherical earth model. The ray tracing results for 49 different kinds of crustal, mantle and core phases show that the maximum absolute traveltime error is less than 0.12 s and the average absolute traveltime error is within 0.09 s when compared with the AK135 theoretical traveltime tables for a 1-D reference model. Numerical tests in terms of computational accuracy and CPU time consumption indicate that the new scheme is an accurate, efficient and a practical way to perform 3-D multiphase arrival tracking in regional or global traveltime tomography.
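For reference, the baseline wavefront-expansion scheme that the multistage irregular shortest path method builds on is the textbook shortest path algorithm; a minimal sketch is below. The multistage/minimax machinery for later reflections and conversions is not reproduced here.

```python
import heapq

def dijkstra(adj, src):
    """Textbook shortest-path wavefront expansion (Dijkstra). The multistage
    method in the abstract restarts such a wavefront at each subsurface
    interface to pick up later reflected or mode-converted phases.

    adj maps node -> list of (neighbor, traveltime) edges."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

On a graph discretizing a velocity model, the returned values are first-arrival traveltimes from the source node, which is exactly the limitation the multistage extension is designed to overcome.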

The structure of MoS2/Al2O3 catalyst and the initial step of the hydrodesulfurization (HDS) reaction using an experimental model have been studied by in situ Raman-, infrared emission (IRE)-, inelastic electron tunneling (IET)-spectroscopy and thermal desorption measurements accompanied by molecular

The subsurface is an obscure but essential resource to life on Earth. It is an important region for carbon production and sequestration, and a source and reservoir for energy, minerals, metals and potable water. There is a growing need to better understand the subsurface processes that control the exploitation and security of these resources. Our best models often fail to predict these processes at the field scale because of limited understanding of 1) the processes and the controlling parameters, 2) how processes are coupled at the field scale, 3) the geological heterogeneities that control hydrological, geochemical and microbiological processes at the field scale, and 4) the lack of data sets to calibrate and validate numerical models. There is a need for experimental data obtained at scales larger than the laboratory bench that take into account the influence of hydrodynamics; geochemical reactions including complexation and chelation, adsorption, precipitation, ion exchange, oxidation-reduction, and colloid formation and dissolution; and reactions of microbial origin. Furthermore, the coupling of each of these processes and reactions needs to be evaluated experimentally at a scale that produces data that can be used to calibrate numerical models so that they accurately describe field-scale system behavior. Establishing the relevant experimental scale for collection of data from coupled processes remains a challenge; it will likely be process-dependent and involve iterations of experimentation and data collection at different intermediate scales until the models calibrated with the appropriate data sets achieve an acceptable level of performance. Assuming that geophysicists will soon develop technologies to define geological heterogeneities over a wide range of scales in the subsurface, geochemists need to continue to develop techniques to remotely measure abiotic reactions, while geomicrobiologists need to continue their development of complementary technologies.

We present analytical results for the distribution of shortest path lengths (DSPL) in a network growth model which evolves by node duplication (ND). The model captures essential properties of the structure and growth dynamics of social networks, acquaintance networks, and scientific citation networks, where duplication mechanisms play a major role. Starting from an initial seed network, at each time step a random node, referred to as a mother node, is selected for duplication. Its daughter node is added to the network, forming a link to the mother node, and with probability p to each one of its neighbors. The degree distribution of the resulting network turns out to follow a power-law distribution, thus the ND network is a scale-free network. To calculate the DSPL we derive a master equation for the time evolution of the probability Pt(L = ℓ), ℓ = 1, 2, …, where L is the distance between a pair of nodes and t is the time. Finding an exact analytical solution of the master equation, we obtain a closed form expression for Pt(L = ℓ). The mean distance ⟨L⟩t and the diameter Δt are found to scale like ln t, namely, the ND network is a small-world network. The variance of the DSPL is also found to scale like ln t. Interestingly, the mean distance and the diameter exhibit properties of a small-world network, rather than the ultrasmall-world network behavior observed in other scale-free networks, in which ⟨L⟩t ~ ln ln t.
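The ND growth rule described above is simple to simulate directly. The sketch below grows a network from a triangle seed (the seed choice is our assumption; the paper only specifies "an initial seed network") and measures the mean shortest-path length by breadth-first search.

```python
import random
from collections import deque

def grow_nd_network(t_steps, p, seed=0):
    """Node-duplication growth: each step picks a random mother node,
    adds a daughter linked to the mother and, with probability p, to each
    of the mother's neighbors. Returns an adjacency dict of sets."""
    rng = random.Random(seed)
    adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}  # triangle seed network
    for _ in range(t_steps):
        mother = rng.choice(list(adj))
        daughter = len(adj)
        links = {mother} | {v for v in adj[mother] if rng.random() < p}
        adj[daughter] = set(links)
        for v in links:
            adj[v].add(daughter)
    return adj

def mean_distance(adj):
    """Mean shortest-path length over all connected ordered pairs (BFS)."""
    total, count = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(d for n, d in dist.items() if n != s)
        count += len(dist) - 1
    return total / count
```

Because every daughter links to its mother, the grown network stays connected; averaging `mean_distance` over many realizations at increasing t is a direct empirical check of the predicted ⟨L⟩t ~ ln t scaling.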

Introduction & Objective: The most important indicator of population growth is fertility. Fertility is influenced by individual, social, economic, demographic, cultural and biological factors. The purpose of this study is to investigate the factors affecting fertility (number of live births). Materials & Methods: This was a cross-sectional study of the correlation matrix, based on a sample of 500 households selected by a two-stage random sampling method. First, a questionnaire was prepared and provided to interviewers for recording demographic information and birthrate. To examine the relationship between these variables and fertility, the most important variables influencing fertility were selected based on the theoretical model, using socio-economic and demographic variables affecting fertility. The data were then analyzed by path analysis using LISREL software. Results: The mean±SD parity of the 500 married women in Hamadan city was 2.18±0.904. Among the variables, couple education (total effect -0.421) and the number of unwanted pregnancies (total effect 0.27) had the highest effects on fertility, while husband's marriage age (total effect -0.00365) had the lowest effect on parity. Conclusion: This study shows that higher education is a deterrent factor to live births and that the rise in live births is unwanted among families. It can also be concluded from the findings that promoting a culture of earlier marriage and childbearing, together with the promotion of education for women and men, could help raise fertility and prevent the lack of an active population as the population ages. (Sci J Hamadan Univ Med Sci 2015; 22(2): 122-128)

This letter presents an empirical multi-frequency outdoor-to-indoor path loss model. The model is based on measurements performed on the exact same set of scenarios for different frequency bands ranging from traditional cellular allocations below 6 GHz (0.8, 2, 3.5 and 5.2 GHz), up to cm-wave …
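The letter's fitted model and coefficients are not given in this truncated record. As a hedged illustration only, multi-frequency path loss measurements of this kind are commonly summarized with a generic alpha-beta-gamma (ABG) form, which can be sketched as:

```python
import math

def abg_path_loss_db(d_m, f_ghz, alpha=3.0, beta=30.0, gamma=2.0,
                     d0=1.0, f0=1.0):
    """Generic alpha-beta-gamma (ABG) multi-frequency path loss form:
    PL = beta + 10*alpha*log10(d/d0) + 10*gamma*log10(f/f0)  [dB].
    alpha: distance exponent, beta: offset at (d0, f0), gamma: frequency
    exponent. All coefficient values here are placeholders, not the fitted
    values of the letter's outdoor-to-indoor model."""
    return (beta + 10 * alpha * math.log10(d_m / d0)
                 + 10 * gamma * math.log10(f_ghz / f0))
```

Fitting alpha, beta and gamma to measurements taken at identical positions across bands is what makes such a model genuinely multi-frequency, rather than a per-band fit.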

The Regional Seismic Travel Time (RSTT) tomography model has been developed to improve travel time predictions for regional phases (Pn, Sn, Pg, Lg) in order to increase seismic location accuracy, especially for explosion monitoring. The RSTT model is specifically designed to exploit regional phases for location, especially when combined with teleseismic arrivals. The latest RSTT model (version 201404um) has been released (http://www.sandia.gov/rstt). Travel time uncertainty estimates for RSTT are determined using one-dimensional (1D), distance-dependent error models, which have the benefit of being very fast to use in standard location algorithms but do not account for path-dependent variations in error or for structural inadequacy of the RSTT model (i.e., model error). Although global in extent, the RSTT tomography model is only defined in areas where data exist. A simple 1D error model does not accurately describe areas where RSTT has not been calibrated. We are developing and validating a new error model for RSTT phase arrivals by mathematically deriving this multivariate model directly from a unified model of RSTT embedded into a statistical random effects model that captures distance, path and model error effects. An initial method developed is a two-dimensional path-distributed method using residuals. The goals for any RSTT uncertainty method are to be readily useful for the standard RSTT user and to improve travel time uncertainty estimates for location. We have successfully tested the new error model for Pn phases and will demonstrate the method and the validation of the error model for Sn, Pg, and Lg phases.

Traditionally, synchronization of concurrent processes is coded inline by operations on semaphores or similar objects. Path expressions move the ... discussion about a variety of synchronization primitives. An analysis of their relative power is found in [3]. Path expressions do not introduce yet ... another synchronization primitive. A path expression relates to such primitives as a for- or while-statement of an ALGOL-like language relates to a JUMP

Pathway models are valuable resources that help us understand the various mechanisms underpinning complex biological processes. Their curation is typically carried out through manual inspection of published scientific literature to find information relevant to a model, which is a laborious and knowledge-intensive task. Furthermore, models curated manually cannot be easily updated and maintained with new evidence extracted from the literature without automated support. We have developed LitPathExplorer, a visual text analytics tool that integrates advanced text mining, semi-supervised learning and interactive visualization to facilitate the exploration and analysis of pathway models using statements (i.e. events) extracted automatically from the literature and organized according to levels of confidence. LitPathExplorer supports pathway modellers and curators alike by: (i) extracting events from the literature that corroborate existing models with evidence; (ii) discovering new events which can update models; and (iii) providing a confidence value for each event that is automatically computed based on linguistic features and article metadata. Our evaluation of event extraction showed a precision of 89% and a recall of 71%. Evaluation of our confidence measure, when used for ranking sampled events, showed an average precision ranging between 61% and 73%, which can be improved to 95% when the user is involved in the semi-supervised learning process. Qualitative evaluation using pair analytics based on the feedback of three domain experts confirmed the utility of our tool within the context of pathway model exploration. LitPathExplorer is available at http://nactem.ac.uk/LitPathExplorer_BI/. Contact: sophia.ananiadou@manchester.ac.uk. Supplementary data are available at Bioinformatics online.

This article proposes and demonstrates how conjoint methods can be adapted to allow the modeling of managerial reactions to various changes in economic and competitive environments and their effects on observed sales levels. Because in general micro-level data on strategic decision making over time

GPS signals traveling through the earth's ionosphere are affected by charged particles that often disrupt the signal and the information it carries due to "scintillation", which resembles an extra noise source on the signal. These signals are also affected by weather changes, tropospheric scattering, and absorption from objects due to multi-path propagation of the signal. These obstacles cause distortion within the information and fading of the signal, which ultimately results in phase-locking errors and noise in messages. In this work, we attempted to replicate the distortion that occurs in GPS signals using a signal processing simulation model. We wanted to be able to create and identify scintillated signals so we could better understand the environment that caused the scintillation. Then, under controlled conditions, we simulated the receiver's ability to suppress scintillation in a signal. We developed a code in MATLAB that was programmed to: 1. create a carrier wave and then plant noise (four different frequencies) on the carrier wave; 2. compute a Fourier transform on the four different frequencies to find the frequency content of the signal; 3. apply a filter to the Fourier transform of the four frequencies and then compute a signal-to-noise ratio to evaluate the power (in decibels) of the filtered signal; and 4. plot each of these components. To test the code's validity, we used user input and data from an AM transmitter. We determined that an amplitude-modulated (AM) signal would be the best type of signal to test the accuracy of the MATLAB code due to its simplicity. This code is basic, giving students the ability to change and use it to determine the environment and the effects of noise on different AM signals and their carrier waves. Overall, we were able to manipulate a scenario of a noisy signal and interpret its behavior and change due to its noisy components: amplitude, frequency, and phase shift.
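The four MATLAB steps described above (carrier plus noise tones, Fourier transform, filter, SNR in decibels) can be mirrored in a short Python sketch. The frequencies, amplitudes and cutoff below are illustrative assumptions, and a naive DFT stands in for MATLAB's fft:

```python
import math, cmath

def dft(x):
    """Naive O(N^2) discrete Fourier transform; adequate for this short
    illustrative signal."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def simulate(fs=500, n=500, f_carrier=50, f_noise=(150, 180, 210, 240)):
    """Steps 1-3 of the abstract's MATLAB code: carrier plus four noise
    tones, DFT, brute-force low-pass filter, and an SNR estimate in dB.
    With fs/n = 1 Hz per bin, every tone falls on an exact DFT bin."""
    sig = [math.sin(2 * math.pi * f_carrier * t / fs)
           + 0.2 * sum(math.sin(2 * math.pi * f * t / fs) for f in f_noise)
           for t in range(n)]
    spec = dft(sig)
    cutoff = 100  # zero every bin at or above 100 Hz (keep the mirror bins)
    filtered = [spec[k] if k < cutoff or k > n - cutoff else 0.0
                for k in range(n)]
    p_sig = abs(spec[f_carrier]) ** 2
    p_noise = sum(abs(spec[f]) ** 2 for f in f_noise)
    snr_db = 10 * math.log10(p_sig / p_noise)
    return snr_db, filtered
```

With carrier amplitude 1 and noise amplitude 0.2 per tone, the pre-filter SNR works out to 10·log10(0.5 / (4·0.02)) ≈ 8 dB, and the low-pass leaves only the carrier's bins nonzero.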

A stochastic optimization methodology is formulated for computing energy-optimal paths from among time-optimal paths of autonomous vehicles navigating in a dynamic flow field. To set up the energy optimization, the relative vehicle speed and headings are considered to be stochastic, and new stochastic Dynamically Orthogonal (DO) level-set equations that govern their stochastic time-optimal reachability fronts are derived. Their solution provides the distribution of time-optimal reachability fronts and the corresponding distribution of time-optimal paths. An optimization is then performed on the vehicle's energy-time joint distribution to select the energy-optimal paths for each arrival time, among all stochastic time-optimal paths for that arrival time. The accuracy and efficiency of the DO level-set equations for solving the governing stochastic level-set reachability fronts are quantitatively assessed, including comparisons with independent semi-analytical solutions. Energy-optimal missions are studied in wind-driven barotropic quasi-geostrophic double-gyre circulations, and in realistic data-assimilative re-analyses of multiscale coastal ocean flows. The latter re-analyses are obtained from multi-resolution 2-way nested primitive-equation simulations of tidal-to-mesoscale dynamics in the Middle Atlantic Bight and Shelfbreak Front region. The effects of tidal currents, strong wind events, coastal jets, and shelfbreak fronts on the energy-optimal paths are illustrated and quantified. Results showcase the opportunities for longer-duration missions that intelligently utilize the ocean environment to save energy, rigorously integrating ocean forecasting with optimal control of autonomous vehicles.

Turbulence-chemistry interactions are analysed using algebraic moment closure for the chemical reaction term. The coupling between turbulence and chemical length and time scales generates a complex interaction process, called structural effects in this work. The structural effects are shown to take place on all scales between the largest scale of turbulence and the scales of molecular motion. The set of equations describing the turbulent correlations involved in turbulent reacting flows is derived. Interactions are shown schematically using interaction charts. Algebraic equations for the turbulent correlations in the reaction rate are given using the interaction charts to include the most significant couplings. Within the frame of fundamental combustion physics, the structural effects appearing on the small scales of turbulence are modelled using a discrete spectrum of turbulent scales. The well-known problem of averaging the Arrhenius law, the specific reaction rate, is solved using a presumed single-variable probability density function and a subscale model for the reaction volume. Although some uncertainties are expected, the principles are addressed. Fast chemistry modelling is shown to be consistent within the frame of algebraic moment closure when the turbulence-chemistry interaction is accounted for in the turbulent diffusion. The modelling proposed in this thesis is compared with experimental data for a laboratory methane flame and with advanced probability density function modelling. The results show promising features. Finally, a comparison with full-scale measurements for an industrial burner is shown. All features of the burner are captured with the model. 41 refs., 33 figs.

On-machine measurement (OMM), which measures a workpiece during or after the machining process in the machining center, has the advantage of measuring the workpiece directly within the work space without moving it. However, the path generation procedure used to determine the measuring sequence and variables for the complex features of a target workpiece requires time-consuming tasks to generate the measuring points and mostly relies on the proficiency of the on-site engineer. In this study, we propose a touch-probe path generation method using similarity analysis between the feature vectors of three-dimensional (3-D) shapes for the OMM. For the similarity analysis between a new 3-D model and existing 3-D models, we extracted feature vectors that can describe the characteristics of a geometric shape model; then, we applied those feature vectors to a geometric histogram that displays a probability distribution obtained by the similarity analysis algorithm. In addition, we developed a computer-aided inspection planning system that corrects non-applied measuring points caused by minute geometry differences between the two models and generates the final touch-probe path.
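The histogram-comparison step of such a similarity analysis can be illustrated compactly. The Python sketch below is a hedged stand-in (the actual extraction of feature vectors from 3-D models is beyond a snippet, and the bin values are invented): it compares normalized feature histograms by histogram intersection, where 1.0 means identical distributions.

```python
def hist_intersection(p, q):
    # Similarity of two feature histograms after normalization: 1.0 = identical.
    sp, sq = sum(p), sum(q)
    return sum(min(a / sp, b / sq) for a, b in zip(p, q))

# Illustrative feature histograms (e.g., binned pairwise-distance distributions).
existing = [4, 9, 15, 9, 4, 1]
new_a    = [5, 9, 14, 9, 4, 1]    # nearly the same shape
new_b    = [1, 2, 4, 10, 15, 10]  # a different shape

print(round(hist_intersection(existing, new_a), 3))  # → 0.976
print(round(hist_intersection(existing, new_b), 3))  # → 0.5
```

A planning system could then reuse the probe path of whichever existing model scores highest, correcting only the residual geometry differences.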

Dihadron azimuthal angle correlations relative to the reaction plane have been investigated in Au+Au collisions at √(s_NN) = 200 GeV using a multiphase transport model (AMPT). Such reaction-plane-dependent azimuthal angle correlations can shed light on the path-length effect of the energy loss of high-transverse-momentum particles propagating through a hot dense medium. The correlations vary with the trigger particle's azimuthal angle with respect to the reaction plane direction, φ_s = φ_T − Ψ_EP, which is consistent with the experimental observation by the STAR Collaboration. The dihadron azimuthal angle correlation functions on the away side of the trigger particle present a distinct evolution from a single-peak to a broad, possibly double-peak structure when the trigger particle direction goes from in-plane to out-of-plane with respect to the reaction plane. The away-side angular correlation functions are asymmetric with respect to the back-to-back direction in some regions of φ_s, which could provide insight into testing the v_1 method for reconstructing the reaction plane. In addition, both the root-mean-square width (W_rms) of the away-side correlation distribution and the splitting parameter (D) between the away-side double peaks increase slightly with φ_s, and the average transverse momentum of away-side-associated hadrons shows a strong φ_s dependence. Our results indicate that a strong parton cascade and the resultant energy loss could play an important role in the appearance of a double-peak structure in the dihadron azimuthal angular correlation function on the away side of the trigger particle.

The Weissler reaction, in which iodide is oxidised to a tri-iodide complex (I₃⁻), has been widely used for measuring the intensity of ultrasonic and hydrodynamic cavitation. It was used in this work to compare ultrasonic cavitation at 24 kHz with hydrodynamic cavitation using two different devices, one a venturi and the other a sudden expansion, operated up to 8.7 bar. Hydrodynamic cavitation had a maximum efficiency of about 5 × 10⁻¹¹ moles of I₃⁻ per joule of energy, compared with a maximum of almost 8 × 10⁻¹¹ mol J⁻¹ for ultrasonic cavitation. Hydrodynamic cavitation was found to be most effective at 10 °C compared with 20 °C and 30 °C, and at higher upstream pressures. However, it was found that in hydrodynamic conditions, even without cavitation, I₃⁻ was consumed at a rapid rate, leading to an equilibrium concentration. It was concluded that the Weissler reaction is not a good model reaction for assessing the effectiveness of hydrodynamic cavitation.

Patients with type 1 and type 2 diabetes often find it difficult to control their blood glucose level on a daily basis because of distance or physical incapacity. With the increase in Internet-enabled smartphone use, this problem can be resolved by adopting a mobile diabetes monitoring system. Most existing studies have focused on patients' usability perceptions, whereas little attention has been paid to physicians' intentions to adopt this technology. The aim of the study was to evaluate the perceptions and user acceptance of mobile diabetes monitoring among Japanese physicians. A questionnaire survey of physicians was conducted in Japan. The structured questionnaire was prepared in the context of a mobile diabetes monitoring system that controls blood glucose, weight, physical activity, diet, insulin and medication, and blood pressure. Following a thorough description of mobile diabetes monitoring with a graphical image, questions were asked relating to system quality, information quality, service quality, health improvement, ubiquitous control, privacy and security concerns, perceived value, subjective norms, and intention to use mobile diabetes monitoring. The data were analyzed by partial least squares (PLS) path modeling. In total, 471 physicians participated from 47 prefectures across Japan, of whom 134 were specialized in internal and gastrointestinal medicine. Nine hypotheses were tested with both the total sample and the specialist subsample; results were similar for both samples in terms of statistical significance and the strength of path coefficients. We found that system quality, information quality, and service quality significantly affect overall quality. Overall quality determines the extent to which physicians perceive the value of mobile health monitoring. However, in contrast to our initial predictions, overall quality does not have a significant direct effect on the intention to use mobile diabetes monitoring. With regard to net benefits, both

In selecting a reasonable DBL for the steam generator (SG), it is necessary to improve the analytical method for estimating the sodium temperature during failure propagation due to overheating. The sodium-water reaction (SWR) jet code (LEAP-JET ver. 1.30) was improved, and application analyses of water injection tests were performed to confirm the code's validity. In the improvement of the code, a gas-liquid interface area density model was introduced to develop a chemical reaction model with little dependence on the calculation mesh size. Test calculations using the improved code (LEAP-JET ver. 1.40) were carried out under the conditions of the SWAT-3·Run-19 test and an actual-scale SG. It was confirmed that the computed SWR jet behavior and the model's influence on the analysis results are reasonable. For the application analyses of the water injection tests, water injection behavior and SWR jet behavior on the new SWAT-1 (SWAT-1R) and SWAT-3 (SWAT-3R) tests were analyzed using the LEAP-BLOW code and the LEAP-JET code. In the application analysis of the LEAP-BLOW code, a parameter survey study was performed. As a result, the injection nozzle diameter required to simulate the water leak rate was confirmed. In the application analysis of the LEAP-JET code, the temperature behavior of the SWR jet was investigated. (author)

The Croton water supply system, responsible for supplying approximately 10% of New York City's water, provides an opportunity to explore the impacts of significant terrestrial flow path alteration on receiving water quality. Natural flow paths are altered during residential development to allow for construction at a given location, to reduce water table elevations in low-lying areas, and to drain increased overland flow volumes. Runoff conducted through an artificial drainage system is prevented from being attenuated by the natural environment; thus the pollutant removal capacity inherent in most natural catchments is often limited to areas where flow paths are not altered by development. By contrasting the impacts of flow path alterations in two small catchments in the Croton system with different densities of residential development, we can begin to identify appropriate limits to the re-routing of runoff in catchments draining into surface water supplies. The Stormwater and Wastewater Management Model (SWMM) will be used as a tool to predict the runoff quantity and quality generated from two small residential catchments and to simulate the potential benefits of changes to the existing drainage system design, which may improve water quality through longer residence times.

EMPIRE II is a nuclear reaction code, comprising various nuclear models, designed for calculations over a broad range of energies and incident particles. A projectile can be any nucleon or Heavy Ion. The energy range starts just above the resonance region in the case of a neutron projectile, and extends up to a few hundred MeV for Heavy Ion induced reactions. The code accounts for the major nuclear reaction mechanisms, such as the optical model (SCATB), Multistep Direct (ORION + TRISTAN), NVWY Multistep Compound, and the full-featured Hauser-Feshbach model. Heavy Ion fusion cross sections can be calculated within the simplified coupled channels approach (CCFUS). A comprehensive library of input parameters covers nuclear masses, optical model parameters, ground state deformations, discrete levels and decay schemes, level densities, fission barriers (BARFIT), moments of inertia (MOMFIT), and γ-ray strength functions. Effects of the dynamic deformation of a fast rotating nucleus can be taken into account in the calculations. The results can be converted into the ENDF-VI format using the accompanying code EMPEND. The package contains the full EXFOR library of experimental data. Relevant EXFOR entries are automatically retrieved during the calculations. Plots comparing experimental results with the calculated ones can be produced using the X4TOC4 and PLOTC4 codes, linked to the rest of the system through bash-shell (UNIX) scripts. A graphic user interface written in Tcl/Tk is provided. (author)

Microscopic factors are the basis of macroscopic phenomena. We proposed a network analysis paradigm to study the macroscopic financial system from a microstructure perspective. We built the cointegration network model and the Granger causality network model based on econometrics and complex network theory and chose stock price time series of the real estate industry and its upstream and downstream industries as empirical sample data. Then, we analysed the cointegration network for understanding the steady long-term equilibrium relationships and analysed the Granger causality network for identifying the diffusion paths of the potential risks in the system. The results showed that the influence from a few key stocks can spread conveniently in the system. The cointegration network and Granger causality network are helpful to detect the diffusion path between the industries. We can also identify and intervene in the transmission medium to curb risk diffusion.

The numerical calculation of the electron inelastic mean free path and stopping power from an optical-data model recently proposed by Fernandez-Varea et al. is described in detail. Explicit expressions for the one-electron total cross sections of the two-modes model of the free-electron gas and the δ-oscillator are derived. The inelastic mean free path and the stopping power are obtained as integrals of these one-electron total cross sections weighted by the optical oscillator strength. The integrals can be easily evaluated, with a selected accuracy, using the FORTRAN 77 subroutine GABQ described here, which implements a 20-point Gauss adaptive bipartition quadrature method. Source listings of FORTRAN 77 subroutines to compute the one-electron total cross sections are also given.
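The bipartition idea behind a routine like GABQ can be sketched in a few lines. The Python version below is a simplified stand-in for the 20-point FORTRAN 77 subroutine, using a 5-point Gauss-Legendre rule: an interval's estimate is accepted when splitting the interval in two barely changes it, and otherwise the halves are refined recursively.

```python
import math

# 5-point Gauss-Legendre nodes and weights on [-1, 1].
NODES = [0.0, 0.5384693101056831, -0.5384693101056831,
         0.9061798459386640, -0.9061798459386640]
WEIGHTS = [0.5688888888888889, 0.47862867049936647, 0.47862867049936647,
           0.23692688505618908, 0.23692688505618908]

def gauss5(f, a, b):
    # Fixed-order Gauss-Legendre estimate of the integral of f on [a, b].
    mid, half = 0.5 * (a + b), 0.5 * (b - a)
    return half * sum(w * f(mid + half * x) for x, w in zip(NODES, WEIGHTS))

def adaptive(f, a, b, tol=1e-10):
    # Bipartition: accept the estimate if halving the interval barely changes it.
    whole = gauss5(f, a, b)
    mid = 0.5 * (a + b)
    halves = gauss5(f, a, mid) + gauss5(f, mid, b)
    if abs(halves - whole) < tol:
        return halves
    return adaptive(f, a, mid, tol / 2) + adaptive(f, mid, b, tol / 2)

print(adaptive(math.sin, 0.0, math.pi))  # ≈ 2.0
```

Weighting the one-electron cross sections by a tabulated optical oscillator strength would simply change the integrand `f` passed in; the quadrature itself is unchanged.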

Due to the complexity of modeling the combustion process in nuclear power plants, global mechanisms are preferred for numerical simulation. To quickly perform highly resolved simulations of large-scale hydrogen combustion with limited processing resources, a method based on thermal theory was developed to obtain the kinetic parameters of a global reaction mechanism of hydrogen-air combustion over a wide range. The calculated kinetic parameters at lower hydrogen concentrations (C_hydrogen < 20%) were validated against results obtained from experimental measurements in a container and combustion test facility. In addition, the numerical data from the global mechanism (C_hydrogen > 20%) were compared with the results of the detailed mechanism. Good agreement between the model prediction and the experimental data was achieved, and the comparison between simulation results by the detailed mechanism and the global reaction mechanism shows that the present calculated global mechanism has excellent predictive capabilities for a wide range of hydrogen-air mixtures.
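A one-step global mechanism ultimately reduces the chemistry to a single Arrhenius rate expression. The sketch below shows the form such a rate constant takes; the pre-exponential factor and activation energy are illustrative placeholders, not the fitted values from this study.

```python
import math

R = 8.314  # universal gas constant, J/(mol·K)

def arrhenius(T, A, Ea):
    # One-step global rate constant: k = A * exp(-Ea / (R*T)).
    return A * math.exp(-Ea / (R * T))

# Illustrative (not fitted) parameters for a hydrogen-air global step.
A, Ea = 1.0e9, 1.2e5  # pre-exponential factor; activation energy in J/mol
for T in (800.0, 1200.0, 1600.0):
    print(f"T = {T:6.0f} K   k = {arrhenius(T, A, Ea):.3e}")
```

Fitting (A, Ea) pairs separately for different hydrogen concentration ranges is what lets a single-step expression track the detailed mechanism over a wide mixture range.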

Processing satellite altimetry data requires the computation of the path delay in the neutral atmosphere that is used for correcting ranges. The path delay is computed using numerical weather models, and the accuracy of its computation depends on the accuracy of those models. The accuracy of numerical weather models over Antarctica and Greenland, where there is a very sparse network of ground stations, is not well known. I used a dataset of GPS RO L1 data, computed the predicted path delay for RO observations using the numerical weather model GEOS-FPIT, formed the differences with the observed path delay, and used these differences to compute corrections to the a priori refractivity profile. These profiles were used for computing corrections to the a priori zenith path delay. The systematic pattern of these corrections is used for de-biasing the satellite altimetry results and for characterizing the systematic errors caused by mismodeling the atmosphere.

Diabatic models applied to adiabatic electron-transfer theory yield many equations involving just a few parameters that connect ground-state geometries and vibration frequencies to excited-state transition energies and vibration frequencies to the rate constants for electron-transfer reactions, utilizing properties of the conical-intersection seam linking the ground and excited states through the Pseudo Jahn-Teller effect. We review how such simplicity in basic understanding can also be obtained for general chemical reactions. The key feature that must be recognized is that electron-transfer (or hole transfer) processes typically involve one electron (hole) moving between two orbitals, whereas general reactions typically involve two electrons or even four electrons for processes in aromatic molecules. Each additional moving electron leads to new high-energy but interrelated conical-intersection seams that distort the shape of the critical lowest-energy seam. Recognizing this feature shows how conical-intersection descriptors can be transferred between systems, and how general chemical reactions can be compared using the same set of simple parameters. Mathematical relationships are presented depicting how different conical-intersection seams relate to each other, showing that complex problems can be reduced into an effective interaction between the ground-state and a critical excited state to provide the first semi-quantitative implementation of Shaik’s “twin state” concept. Applications are made (i) demonstrating why the chemistry of the first-row elements is qualitatively so different to that of the second and later rows, (ii) deducing the bond-length alternation in hypothetical cyclohexatriene from the observed UV spectroscopy of benzene, (iii) demonstrating that commonly used procedures for modelling surface hopping based on inclusion of only the first-derivative correction to the Born-Oppenheimer approximation are valid in no region of the chemical

Context: Under irradiation, a biological system undergoes a cascade of chemical reactions that can lead to an alteration of its normal operation. There are different types of radiation and many competing reactions. As a result, the kinetics of chemical species is extremely complex. Simulation then becomes a powerful tool which, by describing the basic principles of chemical reactions, can reveal the dynamics of the macroscopic system. To understand the dynamics of biological systems under radiation, since the 80s there have been ongoing efforts by several research groups to establish a mechanistic model that consists of describing all the physical, chemical and biological phenomena following the irradiation of single cells. This approach is generally divided into a succession of stages that follow each other in time: (1) the physical stage, where the ionizing particles interact directly with the biological material; (2) the physico-chemical stage, where the targeted molecules release their energy by dissociating, creating new chemical species; (3) the chemical stage, where the new chemical species interact with each other or with the biomolecules; (4) the biological stage, where the repairing mechanisms of the cell come into play. This article focuses on the modeling of the chemical stage. Method: This article presents a general method of speeding up chemical reaction simulations in fluids based on the Smoluchowski equation and Monte-Carlo methods, where all molecules are explicitly simulated and the solvent is treated as a continuum. The model describes diffusion-controlled reactions. This method has been implemented in Geant4-DNA. The keys to the new algorithm include: (1) the combination of a method to compute time steps dynamically with a Brownian bridge process to account for chemical reactions, which avoids costly fixed-time-step simulations; (2) a k-d tree data structure for quickly locating, for a given molecule, its closest reactants. The
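The second key, the k-d tree lookup, can be illustrated compactly. The minimal Python k-d tree below is only a sketch of the data structure (Geant4-DNA's actual implementation is in C++, and the molecule positions here are invented): it builds a tree over 3-D positions and answers nearest-neighbour queries with subtree pruning.

```python
import math

def dist(a, b):
    # Euclidean distance between two points of equal dimension.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def build(points, depth=0):
    # Recursively split on cycling axes; each node is (point, left, right).
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid], build(points[:mid], depth + 1),
            build(points[mid + 1:], depth + 1))

def nearest(node, target, depth=0, best=None):
    # Depth-first search that prunes subtrees farther than the current best.
    if node is None:
        return best
    point, left, right = node
    if best is None or dist(point, target) < dist(best, target):
        best = point
    axis = depth % len(target)
    near, far = (left, right) if target[axis] < point[axis] else (right, left)
    best = nearest(near, target, depth + 1, best)
    if abs(target[axis] - point[axis]) < dist(best, target):
        best = nearest(far, target, depth + 1, best)
    return best

# Hypothetical molecule positions; find the reactant closest to a given species.
molecules = [(0.0, 0.0, 0.0), (1.0, 2.0, 0.5), (0.2, 0.1, 0.0), (3.0, 3.0, 3.0)]
tree = build(molecules)
print(nearest(tree, (0.15, 0.05, 0.0)))  # → (0.2, 0.1, 0.0)
```

For N molecules the pruning makes a query cost roughly O(log N) on average instead of the O(N) scan, which is what makes per-step reactant searches affordable.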

The de-excitation of compound nuclei has been successfully described for several decades by means of statistical models. However, such models involve a large number of free parameters and ingredients that are often underconstrained by experimental data. We show how the degeneracy of the model ingredients can be partially lifted by studying different entrance channels for de-excitation, which populate different regions of the parameter space of the compound nucleus. Fusion reactions, in particular, play an important role in this strategy because they fix three out of four of the compound-nucleus parameters (mass, charge and total excitation energy). The present work focuses on fission and intermediate-mass-fragment (IMF) emission cross sections. We prove how equivalent parameter sets for fusion-fission reactions can be resolved using another entrance channel, namely spallation reactions. Intermediate-mass-fragment emission can be constrained in a similar way. An interpretation of the best-fit IMF barriers in terms of the Wigner energies of the nascent fragments is discussed.

The article considers a catalyst granule with a porous ceramic passive substrate and point active centers on which an exothermic synthesis reaction occurs. The rate of the chemical reaction depends on temperature according to the Arrhenius law. Heat is removed from the pellet surface into the products of synthesis by heat transfer. In this work we first propose a model for calculating the steady-state temperature of a catalyst pellet with local reaction centers. The calculation of the active centers' temperature is based on the idea of the self-consistent field (mean-field) theory. At first, the powers of reaction heat release at the centers are considered known. On the basis of the analytical solution found, which describes the temperature distribution inside the granule, the average temperature of the reaction centers is calculated and then inserted into the formula for heat release. The resulting system of transcendental algebraic equations is transformed into a system of ordinary differential equations of relaxation type and solved numerically to reach a steady-state value. As a practical application, the article considers a Fischer-Tropsch synthesis catalyst granule with active metallic cobalt micro-particles. The cobalt micro-particles are the centers of the exothermic synthesis of hydrocarbon macromolecules. Synthesis occurs as a result of absorption of the synthesis gas components on metallic cobalt. The temperature distribution inside the granule is found for a single local center and for reaction centers located on the same granule diameter. It was found that there is a critical reactor temperature, exceeding which leads to significant local overheating of the centers: thermal explosion. The temperature distribution with local reaction centers is qualitatively different from the granule temperature calculated in the homogeneous approximation. It is shown that, in contrast to the homogeneous approximation, the

In a reaction, determining an optimal path with a high reaction rate (or a low free energy barrier) is important for studying the reaction mechanism. This is a complicated problem that involves many degrees of freedom. For simple models, one can first build an initial path in the collective variable space by interpolation and then update the whole path during the optimization. However, such an interpolation method can be risky in high-dimensional spaces for large molecules: steric clashes between neighboring atoms on the path can cause extremely high energy barriers and thus fail the optimization. Moreover, performing simulations for all the snapshots on the path is time-consuming. In this paper, we build and optimize the path by a growing method on the free energy surface. The method grows a path from the reactant and extends its length in the collective variable space step by step. The growing direction is determined by both the free energy gradient at the end of the path and the direction vector pointing at the product. With fewer snapshots on the path, this strategy lets the path avoid high-energy states during growth and saves precious simulation time at each iteration step. Applications show that the presented method is efficient enough to produce optimal paths on either the two-dimensional or the twelve-dimensional free energy surfaces of different small molecules.
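The growing strategy can be sketched on a toy two-dimensional double-well surface. In the minimal Python sketch below, the blend weight `alpha`, the step size, and the surface itself are illustrative choices, not the paper's: each new snapshot follows a mix of the direction toward the product and the local downhill direction.

```python
import math

def grad_F(x, y):
    # Gradient of a toy double-well free-energy surface F = (x^2 - 1)^2 + y^2,
    # with reactant and product minima at (-1, 0) and (1, 0).
    return 4.0 * x * (x * x - 1.0), 2.0 * y

def grow_path(start, product, step=0.05, alpha=0.7, max_iter=1000):
    # Grow from the reactant: each extension blends the direction toward the
    # product (weight alpha) with the downhill (negative-gradient) direction.
    path = [start]
    x, y = start
    for _ in range(max_iter):
        px, py = product[0] - x, product[1] - y
        dp = math.hypot(px, py)
        if dp < step:                 # close enough: snap to the product
            path.append(product)
            break
        gx, gy = grad_F(x, y)
        dg = math.hypot(gx, gy) or 1.0
        dx = alpha * px / dp - (1.0 - alpha) * gx / dg
        dy = alpha * py / dp - (1.0 - alpha) * gy / dg
        norm = math.hypot(dx, dy) or 1.0
        x, y = x + step * dx / norm, y + step * dy / norm
        path.append((x, y))
    return path

path = grow_path((-1.0, 0.3), (1.0, 0.0))
print(len(path), path[-1])  # the path ends at the product minimum
```

The downhill term keeps the growing end away from high-energy regions while the product term guarantees forward progress; only the current endpoint needs a new simulation at each step, which is the source of the claimed savings.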

A statistical model combined with the CFD (computational fluid dynamics) method was used to explain the detailed phenomena of the process parameters, and a series of experiments were carried out for propylene polymerisation by varying the feed gas composition, reaction initiation temperature, and system pressure in a fluidised bed catalytic reactor. The propylene polymerisation rate per pass was considered the response of the analysis. Response surface methodology (RSM), with a full factorial central composite experimental design, was applied to develop the model. In this study, analysis of variance (ANOVA) indicated an acceptable value for the coefficient of determination and a suitable estimation of a second-order regression model. For better justification, results were also described through a three-dimensional (3D) response surface and a related two-dimensional (2D) contour plot. These 3D and 2D response analyses provided significant and easy-to-understand findings on the effect of all the considered process variables on the expected findings. To diagnose the model adequacy, the mathematical relationship between the process variables and the extent of polymer conversion was established by combining CFD with statistical tools. All the tests showed that the model is an excellent fit with the experimental validation. The maximum extent of polymer conversion per pass was 5.98% at the set time period and with consistent catalyst and co-catalyst feed rates. The optimum conditions for maximum polymerisation were found to be a reaction temperature (RT) of 75 °C, a system pressure (SP) of 25 bar, and 75% monomer concentration (MC). The hydrogen percentage was kept fixed at all times. The coefficient of correlation for reaction temperature, system pressure, and monomer concentration ratio was found to be 0.932. Thus, the experimental results and model-predicted values were a reliable fit at optimum process conditions. Detailed and adaptable CFD results were capable

Recent studies of catalytic reactions subjected to fast forced temperature oscillations have revealed a rate enhancement increasing with temperature oscillation frequency. We present detailed studies of the rate enhancement up to frequencies of 2.5 Hz. A maximum in the rate enhancement is observed at about 1 Hz. A model for the rate enhancement that includes the surface kinetics and the dynamic partial pressure variations in the reactor is introduced. The model predicts a levelling off of the rate enhancement with frequency at about 1 Hz. The experimentally observed decrease above 1 Hz is explained…

Large scale computer simulations of model chemical systems play the role of idealized experiments in which theories may be tested. In this paper we present two applications of microscopic simulations based on the reactive hard sphere model. We investigate the influence of internal fluctuations on an oscillating chemical system and observe how they modify its phase portrait. The second application concerns the propagation of a chemical wave front associated with a thermally activated reaction. It is shown that nonequilibrium effects increase the front velocity compared with the velocity of a front generated by a nonactivated process characterized by the same rate constant. (author)

This article introduces a discrete reaction-diffusion-mechanics (dRDM) model to study the effects of deformation on reaction-diffusion (RD) processes. The dRDM framework employs a FitzHugh-Nagumo type RD model coupled to a mass-lattice model, that undergoes finite deformations. The dRDM model describes a material whose elastic properties are described by a generalized Hooke's law for finite deformations (Seth material). Numerically, the dRDM approach combines a finite difference approach for the RD equations with a Verlet integration scheme for the equations of the mass-lattice system. Using this framework results were reproduced on self-organized pacemaking activity that have been previously found with a continuous RD mechanics model. Mechanisms that determine the period of pacemakers and its dependency on the medium size are identified. Finally it is shown how the drift direction of pacemakers in RDM systems is related to the spatial distribution of deformation and curvature effects.
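The reaction-diffusion half of the framework (leaving out the mass-lattice mechanics and the Verlet step) can be sketched with an explicit finite-difference update of a FitzHugh-Nagumo medium in one dimension. All parameter values below are illustrative, not the paper's; the point is only the structure of the RD update.

```python
def fhn_step(u, v, D=1.0, dx=1.0, dt=0.1, a=0.1, eps=0.005, beta=0.5):
    # One explicit finite-difference step of the FitzHugh-Nagumo system
    #   du/dt = D * u_xx + u*(u - a)*(1 - u) - v,   dv/dt = eps*(beta*u - v),
    # with zero-flux (Neumann) boundaries.
    n = len(u)
    lap = [(u[max(i - 1, 0)] - 2 * u[i] + u[min(i + 1, n - 1)]) / dx ** 2
           for i in range(n)]
    un = [u[i] + dt * (D * lap[i] + u[i] * (u[i] - a) * (1 - u[i]) - v[i])
          for i in range(n)]
    vn = [v[i] + dt * eps * (beta * u[i] - v[i]) for i in range(n)]
    return un, vn

n = 50
u = [1.0] * 5 + [0.0] * (n - 5)   # stimulate the left edge of the medium
v = [0.0] * n
peak_mid = 0.0
for _ in range(4000):
    u, v = fhn_step(u, v)
    peak_mid = max(peak_mid, u[25])

print(round(peak_mid, 2))  # the excitation pulse reaches mid-domain
```

In the dRDM framework this RD update would additionally be evaluated on a deforming mass lattice, with a Verlet integrator advancing the mechanical degrees of freedom between RD steps.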

Drug safety is of great importance to public health. The detrimental effects of drugs not only limit their application but also cause suffering in individual patients and evoke distrust of pharmacotherapy. For the purpose of identifying drugs that could be suspected of causing adverse reactions, we present a structure-activity relationship analysis of adverse drug reactions (ADRs) in the central nervous system (CNS), liver, and kidney, and also of allergic reactions, for a broad variety of drugs (n = 507) from the Swiss drug registry. Using decision tree induction, a machine learning method, we determined the chemical, physical, and structural properties of compounds that predispose them to causing ADRs. The models had high predictive accuracies (78.9-90.2%) for allergic, renal, CNS, and hepatic ADRs. We show the feasibility of predicting complex end-organ effects using simple models that involve no expensive computations and that can be used (i) in the selection of the compound during the drug discovery stage, (ii) to understand how drugs interact with the target organ systems, and (iii) for generating alerts in postmarketing drug surveillance and pharmacovigilance.
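Decision tree induction of the kind used in such studies can be illustrated with a from-scratch sketch: Gini-impurity splits over simple physicochemical descriptors. The descriptors (logP, molecular weight) and the tiny training set below are fabricated for illustration; the actual study used the Swiss drug registry and a richer property set.

```python
import numpy as np

def gini(y):
    p = np.mean(y) if len(y) else 0.0
    return 2*p*(1 - p)

def best_split(X, y):
    """Return (feature, threshold, impurity) of the best axis-aligned split."""
    best = (None, None, gini(y))
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left = X[:, j] <= t
            if left.all() or not left.any():
                continue
            g = left.mean()*gini(y[left]) + (~left).mean()*gini(y[~left])
            if g < best[2]:
                best = (j, t, g)
    return best

def build(X, y, depth=2):
    j, t, _ = best_split(X, y)
    if depth == 0 or j is None:
        return round(np.mean(y))          # leaf: majority class
    left = X[:, j] <= t
    return (j, t, build(X[left], y[left], depth-1),
                  build(X[~left], y[~left], depth-1))

def predict(node, x):
    while isinstance(node, tuple):
        j, t, lo, hi = node
        node = lo if x[j] <= t else hi
    return node

# fabricated training set: [logP, molecular weight], label 1 = ADR observed
X = np.array([[3.1, 420.], [0.5, 180.], [4.2, 510.], [1.0, 250.],
              [3.8, 470.], [0.2, 150.], [2.9, 390.], [0.8, 210.]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])
tree = build(X, y)
```

The appeal noted in the abstract is visible here: the fitted tree is a nested set of threshold tests, cheap to evaluate and easy to inspect.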

Peripheral two-body reactions of the type K⁻p → M⁰ + Λ, Σ⁰ or Σ⁰(1385) are considered. Predictions based on the additive quark model and SU(6) baryon wave functions are tested against data on cross sections and polarisations for given momentum transfer. Data obtained in a high-statistics experiment at 4.2 GeV/c K⁻ momentum, as well as data from a large variety of other experiments, are used. Highly significant violations of these predictions are observed in the data. These violations occur in a systematic fashion, according to which SU(6) must be relaxed while the amplitude structure implied by additivity remains valid. As an application, an amplitude analysis for natural-parity-exchange reactions with M⁰ = π, φ and ρ respectively is performed, determining a relative phase that cannot be obtained in a model-independent analysis. Reactions with M⁰ = Δ or B are also considered, and some implications for coupling constants are discussed. (Auth.)

The oxidation of gaseous propane under gamma radiolysis was studied at 100 torr pressure and 25 °C, at oxygen pressures from 1 to 15 torr. Major oxygen-containing products and their G-values with 10% added oxygen are as follows: acetone, 0.98; i-propyl alcohol, 0.86; propionaldehyde, 0.43; n-propyl alcohol, 0.11; acrolein, 0.14; and allyl alcohol, 0.038. The formation of major oxygen-containing products was explained on the basis that the alkyl radicals combine with molecular oxygen to give peroxyl radicals; the peroxyl radicals react with one another to give alkoxyl radicals, which in turn react with one another to form carbonyl compounds and alcohols. The reaction scheme for the formation of major products was examined using computer modeling based on a mechanism involving 28 reactions. Yields could be brought into agreement with the data within experimental error in nearly all cases. (author)

Beryllium will be used as the first-wall material for the future fusion reactor ITER as well as in the breeding blanket of DEMO. In both cases it is important to understand the mechanisms of hydrogen retention in beryllium. In earlier experiments with beryllium, low-energy binding states of hydrogen were observed by thermal desorption spectroscopy (TDS) which are not yet well understood. Two candidates for these states are considered: beryllium-hydride phases within the bulk and surface effects. The retention of deuterium in beryllium is studied by a reaction rate approach using a coupled reaction-diffusion system (CRDS) model relying on ab initio data from density functional theory (DFT) calculations. In this contribution we try to assess the influence of surface recombination.

Transportation is the major contributor to the ever-increasing CO2 and greenhouse gas emissions in cities. The hazardous emissions and energy consumption of transportation have persuaded transportation and urban planners to motivate people toward non-motorized modes of travel, especially walking. Currently, there are several urban walkability assessment models; however, their limited range of walkability assessment variables leaves them unable to fully promote inclusive, walkable urban neighborhoods. In this regard, this study develops the path walkability assessment (PWA) index model, which evaluates and analyzes path walkability in association with the pedestrian's decision-tree-making (DTM). The model converts the pedestrian's DTM qualitative data to quantifiable values. It involves ninety-two (92) physical and environmental walkability assessment variables clustered into three layers of DTM (Layer 1: Features; Layer 2: Criteria; Layer 3: Sub-Criteria), scoped to the shopping and retail type of walking. The PWA model, as a global decision support tool, can be applied in any neighborhood in the world, and this study implements it in the Taman Universiti neighborhood in Skudai, Malaysia. The PWA model establishes a walkability score index which determines the grading rate of walkability accomplishment for each walkability variable of the surveyed neighborhood. Using the PWA grading index enables urban designers to properly manage financial resource allocation for inspiring walkability in the targeted neighborhood.

We present a systematic study of the decarboxylation step of the enzyme aspartate decarboxylase with the purpose of assessing the quantum chemical cluster approach for modeling this important class of decarboxylase enzymes. Active site models ranging in size from 27 to 220 atoms are designed, and the barrier and reaction energy of this step are evaluated. To model the enzyme surrounding, homogeneous polarizable medium techniques are used with several dielectric constants. The main conclusion is that when the active site model reaches a certain size, the solvation effects from the surroundings saturate. Similar results have previously been obtained from systematic studies of other classes of enzymes, suggesting that they are of a quite general nature.

Lignin, a readily available form of biomass, awaits novel chemistry for converting it to valuable aromatic chemicals. Recent work has demonstrated that ionic liquids are excellent solvents for processing woody biomass and lignin. Seeking to exploit ionic liquids as media for depolymerization of lignin, we investigated reactions of lignin model compounds in these solvents. Using Brønsted acid catalysts in 1-ethyl-3-methylimidazolium triflate at moderate temperatures, we obtained up to 11.6% yield of the dealkylation product guaiacol from the model compound eugenol and cleaved phenethyl phenyl ether, a model for lignin ethers. Despite these successes, acid catalysis failed in dealkylation of the unsaturated model compound 4-ethylguaiacol and did not produce monomeric products from organosolv lignin, demonstrating that further work is required to understand the complex chemistry of lignin depolymerization.

This paper reports on an interview study of 18 Grade 10-12 students' model-based reasoning of a chemical reaction: the reaction of magnesium and oxygen at the submicro level. It has been proposed that chemical reactions can be conceptualised using two models: (i) the "particle model," in which a reaction is regarded as the simple…

This paper presents a reaction-based water quality transport model in subsurface flow systems. Transport of chemical species with a variety of chemical and physical processes is mathematically described by M partial differential equations (PDEs). Decomposition via Gauss-Jordan column reduction of the reaction network transforms the M species reactive transport equations into two sets of equations: a set of thermodynamic equilibrium equations representing N(E) equilibrium reactions, and a set of reactive transport equations of M-N(E) kinetic-variables involving no equilibrium reactions (a kinetic-variable is a linear combination of species). The elimination of equilibrium reactions from the reactive transport equations allows robust and efficient numerical integration. The model solves the PDEs of kinetic-variables rather than individual chemical species, which reduces the number of reactive transport equations and simplifies the reaction terms in the equations. A variety of numerical methods are investigated for solving the coupled transport and reaction equations. Simulation comparisons with exact solutions were performed to verify numerical accuracy and assess the effectiveness of various numerical strategies in dealing with different application circumstances. Two validation examples involving simulations of uranium transport in soil columns are presented to evaluate the ability of the model to simulate reactive transport with complex reaction networks involving both kinetic and equilibrium reactions.
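The core of the decomposition can be illustrated numerically: kinetic-variables are linear combinations of species concentrations that the equilibrium reactions cannot change, i.e. the left null space of the equilibrium stoichiometric matrix. The sketch below uses an SVD rather than the paper's Gauss-Jordan column reduction (both span the same subspace), and the toy network A + B ⇌ C is an assumption for illustration.

```python
import numpy as np

def left_null_space(S, tol=1e-10):
    """Rows spanning {w : w @ S = 0}, i.e. the left null space of S, via SVD."""
    u, s, vt = np.linalg.svd(S.T)
    rank = int(np.sum(s > tol))
    return vt[rank:]

# species order [A, B, C]; one equilibrium reaction A + B -> C
S_eq = np.array([[-1.0],
                 [-1.0],
                 [ 1.0]])
W = left_null_space(S_eq)   # each row defines one kinetic-variable

# M - N(E) variables survive: 3 species - 1 equilibrium reaction = 2, and the
# equilibrium reaction cannot change any of them (W @ S_eq = 0), so their
# transport equations carry no equilibrium-reaction source terms.
```

Solving transport for the two rows of W instead of the three species is exactly the reduction in equation count the abstract describes.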

To test whether paired-play induces a longer path length and larger ranges of movement of the center of pressure (COP), which reflect balance performance and stability, compared to solo-play, and to test the difference in the path length and ranges of movement of the COP while playing a virtual reality (VR) game with the dominant hand compared to the nondominant hand. In this cross-sectional study, 20 children (age 6.1 ± 0.7 years) played an arm-movement-controlled VR game alone and with a peer while each of them stood on a pressure-measuring pad to track the path length and ranges of movement of the COP. The total COP path was significantly longer during paired-play (median 295.8 cm) than during solo-play (median 189.2 cm). No significant differences were found in the reaction time or the mediolateral and anterior-posterior COP ranges between solo-play and paired-play. No significant differences were found between the parameters extracted during paired-play with the dominant or nondominant hand. Our findings imply that paired-play is advantageous compared to solo-play since it induces greater movement for the child, during which higher COP velocities are reached, which may contribute to improving the child's balance control. Apart from the positive social benefits of paired-play, this positive effect on the COP path length is a noteworthy added value in the clinical setting when treating children with balance disorders.
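For readers unfamiliar with COP metrics, the path length and axis ranges reported above are typically computed from the time series of COP positions as below; the circular sway trajectory is synthetic.

```python
import numpy as np

def cop_metrics(xy):
    """xy: (n, 2) array of COP positions in cm, sampled over time.
    Returns total path length and mediolateral / anterior-posterior ranges."""
    steps = np.diff(xy, axis=0)
    path_length = np.sum(np.sqrt((steps**2).sum(axis=1)))
    ml_range, ap_range = xy.max(axis=0) - xy.min(axis=0)
    return path_length, ml_range, ap_range

# synthetic sway: a circle of radius 2 cm traced once
t = np.linspace(0, 2*np.pi, 1000)
xy = np.column_stack([2*np.cos(t), 2*np.sin(t)])
L, ml, ap = cop_metrics(xy)   # L approximates the circumference 2*pi*2
```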

Statistical analysis of rain fade duration is crucial information for system engineers to design and plan a fade mitigation technique (FMT) for a satellite communication system. An investigation was carried out based on data measured over a one-year period in Kuala Lumpur, Malaysia, on the satellite path of MEASAT3. This paper presents a statistical analysis of measured fade duration at a high elevation angle (77.4°) in Ku-band, compared to three prediction models of fade duration. It is found that none of the models could predict the measured fade duration distribution accurately.

catalysis depends on a delicate energy balance. Radical-based enzyme reactions are often difficult to probe experimentally, so theoretical investigations have a particularly valuable role to play in their study. Our research demonstrates that a small-model approach can provide important and revealing insights into the mechanism of action of AdoCbl-dependent enzymes.

Monte Carlo computer simulations are in use at a number of laboratories for calculating time-dependent yields, which can be compared with experiments in the radiolysis of water. We report here on calculations to investigate the validity and consistency of the procedures used for simulating chemical reactions in our code, RADLYS. Model calculations of the rate constants themselves were performed. The rates thus determined showed an expected rapid decline over the first few hundred ps and a very gradual decline thereafter, out to the termination of the calculations at 4.5 ns. Results are reported for different initial concentrations and numbers of reactive species. Generally, the calculated rate constants are smallest when the initial concentrations of the reactants are largest. It is found that inhomogeneities that quickly develop in the initial random spatial distribution of reactants persist in time as a result of subsequent chemical reactions, and thus conditions may poorly approximate those assumed in diffusion theory. We also investigated the reaction of a single species of one type placed among a large number of randomly distributed species of another type with which it could react. The distribution of survival times of the single species was calculated using three different combinations of the diffusion constants for the two species, as is sometimes discussed in diffusion theory. The three methods gave virtually identical results. (orig.)
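The single-species survival calculation described in the last part can be sketched as a small Monte Carlo: one walker moving with the relative diffusion constant D_A + D_B among static, randomly placed reaction partners, absorbed on first approach within the reaction radius. Box size, particle number, reaction radius, and step size are illustrative, not the RADLYS values.

```python
import numpy as np

rng = np.random.default_rng(0)
L, R, D, dt = 10.0, 0.5, 1.0 + 1.0, 0.001   # box, reaction radius, D_A + D_B, step
sigma = np.sqrt(2*D*dt)                      # per-axis Brownian step size

def survival_steps(n_b=50, max_steps=20000):
    """Steps survived by one A walker among n_b static B particles."""
    b = rng.uniform(0, L, size=(n_b, 3))     # static reaction partners
    a = rng.uniform(0, L, size=3)
    for step in range(max_steps):
        d = (a - b + L/2) % L - L/2          # periodic minimum-image displacement
        if np.min(np.sum(d*d, axis=1)) < R*R:
            return step                      # reacted
        a = (a + rng.normal(0, sigma, 3)) % L
    return max_steps                         # survived the whole run

times = np.array([survival_steps() for _ in range(20)]) * dt
mean_survival = times.mean()
```

Repeating this with the diffusion split differently between the two species (but the same sum) is the comparison the abstract reports as giving virtually identical results.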

To study the possible reactions of high-density deuterons against the background of a degenerate electron gas, a summary of experimental observations pointed to the possibility of reactions at picometre distances and over durations of more than kiloseconds, similar to K-shell electron capture [1]. The essential reason was the screening of the deuterons by a factor of 14, based on the observations. Using the bosonic properties for cluster formation of the deuterons and a model of compound nuclear reactions [2], the measured distribution of the resulting nuclei may be explained as known from the Maruhn-Greiner theory for fission. The local maximum of the distribution at the main minimum indicates the excited states of the compound nuclei during their intermediary state. This measured local maximum may be an independent proof of deuteron clusters at LENR. [1] H. Hora, G.H. Miley et al., Physics Letters A175, 138 (1993) [2] H. Hora and G.H. Miley, APS March Meeting 2007, Program p. 116

Maloney and Wandell (1984) describe a model of the response of a single visual channel to weak test lights. The initial channel response is a linearly filtered version of the stimulus. The filter output is randomly sampled over time. Each time a sample occurs there is some probability (increasing with the magnitude of the sampled response) that a discrete detection event is generated. Maloney and Wandell derive the statistics of the detection events. This paper tests the hypothesis that reaction time responses to the presence of a weak test light are initiated at the first detection event. This makes it possible to extend the application of the model to lights that are slightly above threshold, but still within the linear operating range of the visual system. A parameter-free prediction of the Maloney and Wandell model for lights detected by this statistic is tested. The data are in agreement with the prediction.
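A simulation of the first-detection-event idea might look like the sketch below: Poisson-timed samples of a filtered response, each converted to a detection event with a probability that grows with the sampled magnitude, with reaction time taken as the first event time plus a fixed motor delay. The filter shape, the probability law, and all constants are assumptions for illustration, not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(1)
rate, motor_delay = 200.0, 0.2              # samples/s and motor delay (assumed)

def response(t, onset=0.1, tau=0.05, gain=1.0):
    """Toy linear-filter output: exponential rise after stimulus onset."""
    if t <= onset:
        return 0.0
    return gain * (1 - np.exp(-(t - onset) / tau))

def reaction_time(gain, t_max=2.0):
    t = 0.0
    while t < t_max:
        t += rng.exponential(1.0 / rate)     # next Poisson-distributed sample
        p = 1 - np.exp(-response(t, gain=gain))  # detection prob. grows with r(t)
        if rng.random() < p:
            return t + motor_delay           # first detection event triggers RT
    return np.inf                            # no detection: a miss

rts = np.array([reaction_time(gain=2.0) for _ in range(200)])
rts = rts[np.isfinite(rts)]
```

Because detection can only follow the stimulus onset, every simulated reaction time exceeds onset plus motor delay, mirroring the model's prediction structure.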

Molecular interactions are wired in a fascinating way, resulting in complex behavior of biological systems. Theoretical modeling provides a useful framework for understanding the dynamics and the function of such networks. The complexity of biological networks calls for conceptual tools that manage the combinatorial explosion of the set of possible interactions. A suitable conceptual tool for attacking complexity is compositionality, already successfully used in the process algebra field to model computer systems. We rely on the BlenX programming language, which originated from the beta-binders process calculus, to specify and simulate high-level descriptions of biological circuits. The Gillespie stochastic framework of BlenX requires the decomposition of phenomenological functions into basic elementary reactions. Systematic unpacking of complex reaction mechanisms into BlenX templates is shown in this study. The estimation/derivation of missing parameters and the challenges emerging from compositional model building in stochastic process algebras are discussed. A biological example on the circadian clock is presented as a case study of BlenX compositionality.
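The Gillespie stochastic framework that BlenX builds on is, in its generic "direct method" form, straightforward to sketch. Here it is applied to an assumed toy birth-death process (∅ → X at rate k1, X → ∅ at rate k2·X), whose stationary mean is k1/k2; the rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def gillespie(x0=0, k1=50.0, k2=1.0, t_end=50.0):
    """Direct-method SSA for a birth-death process on copy number x."""
    t, x = 0.0, x0
    ts, xs = [t], [x]
    while t < t_end:
        a = np.array([k1, k2 * x])      # propensities of the elementary reactions
        a0 = a.sum()
        t += rng.exponential(1.0 / a0)  # exponential waiting time to next firing
        x += 1 if rng.random() < a[0] / a0 else -1   # pick which reaction fires
        ts.append(t); xs.append(x)
    return np.array(ts), np.array(xs)

ts, xs = gillespie()
tail = xs[len(xs)//2:]   # after burn-in, fluctuates around k1/k2 = 50
```

Decomposing a phenomenological rate law into such elementary, constant-propensity reactions is exactly the "unpacking" step the abstract refers to.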

We investigated the dependence of the spreading critical exponents and the ultimate survival probability exponent on the initial configuration of a nonequilibrium catalytic reaction model. The model considers the competitive reactions between two different monomers, A and B, where we take into account the energy couplings between nearest-neighbor monomers and the adsorption energies, as well as the temperature T of the catalyst. For each value of T the model shows distinct absorbing states, with different concentrations of the two monomers. Employing an epidemic analysis, we established the behavior of the spreading exponents as we started the Monte Carlo simulations with different concentrations of the monomers. The exponents were determined as a function of the initial concentration ρ_A,ini of A monomers. We have also considered initial configurations with correlations for a fixed concentration of A monomers. From the determination of three spreading exponents and the ultimate survival probability exponent, we checked the validity of the generalized hyperscaling relation for a continuous set of initial states, random and correlated, which are dependent on the temperature of the catalyst.

The design of fusion reactors will require information about a large number of neutron cross sections in the MeV region. Because of the obvious experimental difficulties, it is probable that not all of the cross sections of interest will be measured. Current direct and pre-equilibrium models can be used to calculate non-statistical contributions to neutron cross sections from information available from charged particle reaction studies; these are added to the calculated statistical contribution. Estimates of the reliability of such calculations can be derived from comparisons with the available data. (3 tables, 12 figures) (U.S.)

The pairing-vibration model with isospin is extended to include α-transfer reactions. Selection rules and expressions for transition strengths are derived and compared with experimental results for A = 40-66 nuclei. The selection rules are found to be followed quite well in the examples studied. The systematics of ground-state transition strengths are qualitatively quite well reproduced, although the quantitative agreement is poor. When the changing nature of the pairing quanta is incorporated using two-particle transfer data the agreement becomes quantitatively good. Evidence is presented for clustering other than that due to pairing in ⁴⁰Ca and ⁴⁴Ti.

With the increasing use of improvised explosive devices (IEDs), the need for better mitigation, either for building integrity or for personal security, increases in importance. Before focusing on the interaction of the shock wave with a target and the potential associated damage, knowledge must be acquired regarding the nature of the blast threat, i.e., the pressure-time history. This requirement motivates gaining further insight into the triple point (TP) path, in order to know precisely which regime the target will encounter (simple reflection or Mach reflection). Within this context, the purpose of this study is to evaluate three existing TP path empirical models, which in turn are used in other empirical models for the determination of the pressure profile. These three TP models are the empirical function of Kinney, the Unified Facilities Criteria (UFC) curves, and the model of the Natural Resources Defense Council (NRDC). As discrepancies are observed between these models, new experimental data were obtained to test their reliability and a new promising formulation is proposed for scaled heights of burst ranging from 24.6-172.9 cm/kg^{1/3}.

An exact solution of the transient dynamics of an associative memory model storing an infinite number of limit cycles with l finite steps is shown by means of the path-integral analysis. Assuming the Maxwell construction ansatz, we have succeeded in deriving the stationary state equations of the order parameters from the macroscopic recursive equations with respect to the finite-step sequence processing model which has retarded self-interactions. We have also derived the stationary state equations by means of the signal-to-noise analysis (SCSNA). The signal-to-noise analysis must assume that crosstalk noise of an input to spins obeys a Gaussian distribution. On the other hand, the path-integral method does not require such a Gaussian approximation of crosstalk noise. We have found that both the signal-to-noise analysis and the path-integral analysis give completely the same result with respect to the stationary state in the case where the dynamics is deterministic, when we assume the Maxwell construction ansatz. We have shown the dependence of the storage capacity (α_c) on the number of patterns per limit cycle (l). At l = 1, the storage capacity is α_c = 0.138 as in the Hopfield model. The storage capacity monotonically increases with the number of steps, and converges to α_c = 0.269 at l ≅ 10. The original properties of the finite-step sequence processing model appear as long as the number of steps of the limit cycle has order l = O(1).
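The l = 1 limit quoted above is the standard Hopfield model, whose capacity α_c ≈ 0.138 refers to Hebbian storage of random patterns. A minimal recall experiment at a load safely below capacity can be sketched as follows; the network size and pattern count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
N, P = 200, 10                          # load alpha = P/N = 0.05 < 0.138
xi = rng.choice([-1, 1], size=(P, N))   # random stored patterns
J = (xi.T @ xi) / N                     # Hebbian couplings
np.fill_diagonal(J, 0.0)                # no self-interaction

def recall(s, sweeps=5):
    """Asynchronous zero-temperature dynamics."""
    s = s.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            s[i] = 1 if J[i] @ s >= 0 else -1
    return s

# corrupt pattern 0 by flipping 10% of spins, then let the dynamics clean it up
probe = xi[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
probe[flip] *= -1
overlap = (recall(probe) @ xi[0]) / N   # overlap near 1 means successful recall
```

Above α_c the same experiment fails: retrieval overlaps collapse, which is what the capacity figures in the abstract quantify.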

Remediation of contaminated soils to remove heavy metals can be accomplished by subjecting the soil to a DC electric field. In an electric field, dissolved metals move to either the cathode or the anode, depending on their charges. During the course of remediation, precipitated and sorbed species dissolve as the solute is depleted. Our previous remediation experiments on kaolinite soil and sandy loam show high remediation efficiency. In new experiments we studied the reaction and transport of copper in sand and sand/bentonite mixtures with a constant applied potential. For clays with high pH buffer capacity and cation exchange capacity the results were not satisfactory, because of insufficient desorption of the metals from the clay. The parameters measured at different time intervals were potential gradient, current density, pH and metal concentration. We present a mathematical and numerical model that is used for interpretation of the results from the remediation experiments. The model uses electromigration and diffusion to describe the transport of heavy metals and other ions. The remediation experiments are supplemented by batch experiments used to assess the acid neutralisation capacity and sorption distribution coefficients at different pH values for the heavy metal ions. These are essential data needed for the modelling and can be used to assess whether a remediation could be accomplished within reasonable time. The results show that the reaction data for acid neutralisation capacity estimated in batch experiments can be used to model the main trends in the development of the current density and the potential profile. However, the pH profile and the free copper concentration cannot be modelled with this equilibrium description. (orig.)
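The electromigration-plus-diffusion transport used in such a model can be sketched in one dimension with a flux-conservative explicit scheme. The mobility, field, and geometry values below are rough illustrative numbers, not the fitted experimental parameters.

```python
import numpy as np

N, dx, dt = 100, 1e-3, 0.5              # 100 cells of 1 mm; 0.5 s time step
D = 7e-10                               # Cu2+ diffusivity, m^2/s (approximate)
u_ion, z, E = 5.6e-8, 2, 100.0          # mobility (m^2/Vs), charge, field (V/m)
v = z * u_ion * E                       # electromigration velocity toward cathode

c = np.zeros(N)
c[40:60] = 1.0                          # initial contaminated band, mol/m^3

def step(c):
    # fluxes at the N+1 cell interfaces; closed (zero-flux) cell ends
    J = np.zeros(N + 1)
    J[1:-1] = v * c[:-1] - D * (c[1:] - c[:-1]) / dx   # upwind advection + Fick
    return c - dt * (J[1:] - J[:-1]) / dx

for _ in range(2000):                   # 1000 s of applied field
    c = step(c)
```

Because updates are written as interface-flux differences, mass is conserved exactly, and the contaminant band drifts toward the cathode end of the cell.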

Numerous models for use in interpreting quantitative PCR (qPCR) data are present in recent literature. The most commonly used models assume the amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a select part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most potential for accurate interpretation of qPCR data. Even so, they have not been thoroughly investigated and are rarely used for interpretation of qPCR data. New results for kinetic modeling of qPCR are presented. Two models are presented in which the efficiency of amplification is based on equilibrium solutions for the annealing phase of the qPCR process. Model 1 assumes annealing of complementary target strands and annealing of target and primers are both reversible reactions and reach a dynamic equilibrium. Model 2 assumes all annealing reactions are nonreversible and equilibrium is static. Both models include the effect of primer concentration during the annealing phase. Analytic formulae are given for the equilibrium values of all single and double stranded molecules at the end of the annealing step. The equilibrium values are then used in a stepwise method to describe the whole qPCR process. Rate constants of kinetic models are the same for solutions that are identical except for possibly having different initial target concentrations. qPCR curves from such solutions are thus analyzed by simultaneous non-linear curve fitting with the same rate constant values applying to all curves and each curve having a unique value for initial target concentration. The models were fit to two data sets for which the true initial target concentrations are known. Both models give better fit to observed qPCR data than other kinetic models present in the literature. They also give better estimates of
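The stepwise use of per-cycle efficiencies can be illustrated with a toy model in which the efficiency falls as primers are consumed. The saturation law below is an illustrative stand-in for the papers' annealing-equilibrium solutions, and all constants are arbitrary.

```python
def qpcr_curve(T0, P0=1e13, K=1e12, cycles=40):
    """Stepwise qPCR sketch: per-cycle efficiency limited by remaining primers."""
    T, P = float(T0), float(P0)
    curve = []
    for _ in range(cycles):
        eff = P / (P + K)       # annealing-limited efficiency in [0, 1)
        new = min(eff * T, P)   # new strands cannot exceed remaining primers
        T += new
        P -= new
        curve.append(T)
    return curve

low = qpcr_curve(1e3)           # two samples differing only in starting target
high = qpcr_curve(1e6)
```

Both curves plateau at the same primer-limited level, but cross a detection threshold about log2(1e6/1e3) ≈ 10 cycles apart, which is the basis of quantification from such fits.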

Nowadays, concerns about the environment and the need to increase productivity at low cost demand the search for new ways to produce compounds of industrial interest. Building on the increasing knowledge of biological processes from genome sequencing projects and high-throughput experimental techniques, as well as the available computational tools, the use of microorganisms has been considered as an approach to produce desirable compounds. However, this usually requires manipulating these organisms by genetic engineering and/or changing the environmental conditions to make the production of these compounds possible. In many cases, it is necessary to enrich the genetic material of those microbes with heterologous pathways from other species, consequently adding the potential to produce novel compounds. This paper introduces a new plug-in for the OptFlux Metabolic Engineering platform, aimed at finding suitable sets of reactions to add to the genomes of selected microbes (wild type strain), as well as finding complementary sets of deletions, so that the mutant becomes able to overproduce compounds of industrial interest while preserving its viability. The necessity of adding reactions to the metabolic model arises from gaps in the original model, or is motivated by the production of new compounds by the organism. The optimization methods used are metaheuristics such as Evolutionary Algorithms and Simulated Annealing. The usefulness of this plug-in is demonstrated by a case study regarding the production of vanillin by the bacterium E. coli.

The objective of the research is to analyze pathways of reactions of hydrogen with oxides of carbon over sulfides, and to predict which characteristics of the sulfide catalyst (nature of metal, defect structure) give rise to the lowest barriers toward oxygenated hydrocarbon products. Reversal of these pathways entails the generation of hydrogen, which is also proposed for study. In this first year of study, adsorption reactions of H atoms and H₂ molecules with MoS₂, both in molecular and solid form, have been modeled using high-level density functional theory. The geometries and strengths of the adsorption sites are described, as are the methods used in the study. An exposed Mo(IV) species, modeled as a bent MoS₂ molecule, is capable of homopolar dissociative chemisorption of H₂ into a dihydride S₂MoH₂. Among the periodic edge structures of hexagonal MoS₂, the (12̄11) edge is most stable but still capable of dissociating H₂, while the basal plane (0001) is not. The challenging task of theoretically accounting for weak bonding of MoS₂ sheets across the van der Waals gap has been addressed, resulting in a weak attraction of 0.028 eV per MoS₂ unit, compared to the experimental value of 0.013 eV per MoS₂ unit.

To prove that two-layer, TBP-nitric acid mixtures can be safely stored in the canyon evaporators, it must be demonstrated that a runaway reaction between TBP and nitric acid will not occur. Previous bench-scale experiments showed that, at typical evaporator temperatures, this reaction is endothermic and therefore cannot run away, due to the loss of heat from evaporation of water in the organic layer. However, the reaction would be exothermic and could run away if the small amount of water in the organic layer evaporates before the nitric acid in this layer is consumed by the reaction. Provided that there is enough water in the aqueous layer, this would occur if the organic layer is sufficiently thick that the rate of loss of water by evaporation exceeds the rate of replenishment due to mixing with the aqueous layer. This report presents measurements of mass transfer rates for the mixing of water and butanol in two-layer, TBP-aqueous mixtures, where the top layer is primarily TBP and the bottom layer is comprised of water or an aqueous salt solution. Mass transfer coefficients are derived for use in the modeling of two-layer TBP-nitric acid oxidation experiments. Three cases were investigated: (1) transfer of water into the TBP layer with sparging of both the aqueous and TBP layers, (2) transfer of water into the TBP layer with sparging of just the TBP layer, and (3) transfer of butanol into the aqueous layer with sparging of both layers. The TBP layer was comprised of 99% pure TBP (spiked with butanol for the butanol transfer experiments), and the aqueous layer was comprised of either water or an aluminum nitrate solution. The liquid layers were air-sparged to simulate the mixing due to the evolution of gases generated by oxidation reactions. A plastic tube and a glass frit sparger were used to provide different bubble sizes. Rates of mass transfer were measured using infrared spectrophotometers provided by SRTC/Analytical Development.

Since treatments of wall boundaries and of flows along complex paths are open issues in LES modeling, a literature survey on LES methods for wall boundaries and applications to flows in complex paths was conducted to investigate the latest trends. Publications of domestic and international societies, workshops, symposiums, and journals from the past three years (2001-2004) were searched and collected, from which 23 research papers were selected and investigated. For the investigation, the treatments of wall boundaries used in the literature were classified roughly into five methods: (1) no-slip condition, (2) algebraic wall model (wall function), (3) wall model based on boundary-layer approximations (differential equation wall model), (4) hybrid method, and (5) immersed boundary method. No-slip conditions were widely applied in recent works. For algebraic wall models, new wall functions that considered the effect of the velocity component normal to a wall or of recirculation regions were examined. There were also some studies that devised the process of calculating the wall-shear stress with a conventional wall function. The studies using differential equation wall models presented the dynamic modification of model coefficients, or the application of a higher-order turbulence model such as the k-ε model to the solution of the Navier-Stokes equations in the boundary layer. The studies of hybrid methods focused on the discontinuity of velocity and eddy viscosity at the LES/RANS interface. Several studies that adopted immersed boundary methods for Cartesian grids with curved wall boundaries introduced investigations of Poisson solvers and numerical modifications of pressure boundary conditions. Many of the investigated studies used hybrid methods. Thus, it is expected that they will be mainly applied to large-scale and complex simulations once a standard treatment for the discontinuity at the interface is developed. (author)

The relativistic microscopic optical potential, the Schroedinger equivalent potential, and the mean free paths of a nucleon at finite temperature in nuclear matter and finite nuclei are studied based on Walecka's model and thermo-field dynamics. We let the Hartree-Fock self-energy of a nucleon represent the real part of the microscopic optical potential, and the fourth-order meson exchange diagrams, i.e. the polarization diagrams, represent the imaginary part of the microscopic optical potential in nuclear matter. The microscopic optical potential of finite nuclei is obtained by means of the local density approximation. (orig.)

During subsurface transport, reactive solutes are subject to a variety of hydrodynamic and chemical processes. The major hydrodynamic processes include advection and convection, dispersion and diffusion. The key chemical processes are complexation, including hydrolysis and acid-base reactions, dissolution-precipitation, reduction-oxidation, adsorption and ion exchange. The combined effects of all these processes on solute transport must satisfy the principle of conservation of mass. The statement of conservation of mass for N mobile species leads to N partial differential equations. Traditional solute transport models often incorporate the effects of hydrodynamic processes rigorously but oversimplify chemical interactions among aqueous species. Sophisticated chemical equilibrium models, on the other hand, incorporate a variety of chemical processes but generally assume no-flow systems. In the past decade, coupled models accounting for complex hydrological and chemical processes, with varying degrees of sophistication, have been developed. The existing models of reactive transport employ two basic sets of equations. The transport of solutes is described by a set of partial differential equations, and the chemical processes, under the assumption of equilibrium, are described by a set of nonlinear algebraic equations. An important consideration in any approach is the choice of primary dependent variables. Most existing models cannot account for the complete set of chemical processes, cannot be easily extended to include mixed chemical equilibria and kinetics, and cannot handle practical two- and three-dimensional problems. The difficulties arise mainly from improper selection of the primary variables in the transport equations. (Author) 38 refs

We explore possible pathways for the creation of ultracold polar NaK molecules in their absolute electronic and rovibrational ground state starting from ultracold Feshbach molecules. In particular, we present a multichannel analysis of the electronic ground and K(4p)+Na(3s) excited-state manifold of NaK, analyze the spin character of both the Feshbach molecular state and the electronically excited intermediate states and discuss possible coherent two-photon transfer paths from Feshbach molecules to rovibronic ground-state molecules. The theoretical study is complemented by the demonstration of stimulated Raman adiabatic passage from the X1Σ+(v=0) state to the a3Σ+ manifold on a molecular beam experiment.

This paper introduces an approach that includes non-quantitative factors for the selection and assessment of complex multivariate models in health. A goodness-of-fit based methodology combined with a fuzzy multi-criteria decision-making approach is proposed for model selection. Models were obtained using the Path Analysis (PA) methodology in order to explain the interrelationship between health determinants and the post-neonatal component of infant mortality in 59 municipalities of Brazil in the year 1991. Socioeconomic and demographic factors were used as exogenous variables, and environmental, health service and agglomeration factors as endogenous variables. Five PA models were developed and accepted by statistical criteria of goodness-of-fit. These models were then submitted to a group of experts, seeking to characterize their preferences according to predefined criteria that tried to evaluate model relevance and plausibility. Fuzzy set techniques were used to rank the alternative models according to the number of times a model dominated the others. The best-ranked model explained over 90% of the variation in the endogenous variables, and showed the favorable influences of income and education levels on post-neonatal mortality. It also showed the unfavorable effect on mortality of fast population growth, through precarious dwelling conditions and decreased access to sanitation. It was possible to aggregate expert opinions in model evaluation. The proposed procedure for model selection allowed the inclusion of subjective information in a clear and systematic manner.
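
The ranking step can be illustrated with a crude sketch: count, for each candidate model, how many alternatives it dominates across a set of criterion scores. The model names and scores below are invented placeholders, and the plain Pareto-dominance count stands in for the paper's fuzzy-set ranking.

```python
# Crude dominance-count ranking standing in for the paper's fuzzy
# multi-criteria method; model names and criterion scores are invented.
scores = {
    "M1": [0.8, 0.7, 0.9],   # membership grades on three criteria
    "M2": [0.6, 0.7, 0.5],
    "M3": [0.9, 0.4, 0.6],
}

def dominates(a, b):
    # a dominates b if it is at least as good everywhere, better somewhere
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

dominance = {
    m: sum(dominates(scores[m], scores[o]) for o in scores if o != m)
    for m in scores
}
best = max(dominance, key=dominance.get)   # best-ranked model
```

With these invented scores, M1 dominates M2 outright, while M1 and M3 trade wins across criteria, so M1 is ranked first.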

Electron Transfer Reactions deals with the mechanisms of electron transfer reactions between metal ions in solution, as well as the electron exchange between atoms or molecules in either the gaseous or solid state. The book is divided into three parts. Part 1 covers the electron transfer between atoms and molecules in the gas state. Part 2 tackles the reaction paths of oxidation states and binuclear intermediates, as well as the mechanisms of electron transfer. Part 3 discusses the theories and models of the electron transfer process, and theories and experiments involving bridged electron transfer.

The oxidation of gaseous propane under gamma radiolysis was studied at 100 torr pressure and 25°C, at oxygen pressures from 1 to 15 torr. Major oxygen-containing products and their G-values with 10% added oxygen are as follows: acetone, 0.98; i-propyl alcohol, 0.86; propionaldehyde, 0.43; n-propyl alcohol, 0.11; acrolein, 0.14; and allyl alcohol, 0.038. Minor products include i-butyl alcohol, t-amyl alcohol, n-butyl alcohol, n-amyl alcohol, and i-amyl alcohol. Small yields of i-hexyl alcohol and n-hexyl alcohol were also observed. There was no apparent difference in the G-values at pressures of 50, 100 and 150 torr. When the oxygen concentration was decreased below 5%, the yields of acetone, i-propyl alcohol, and n-propyl alcohol increased, the propionaldehyde yield decreased, and the yields of other products remained constant. The formation of major oxygen-containing products was explained on the basis that the alkyl radicals combine with molecular oxygen to give peroxyl radicals; the peroxyl radicals react with one another to give alkoxyl radicals, which in turn react with one another to form carbonyl compounds and alcohols. The reaction scheme for the formation of major products was examined using computer modeling based on a mechanism involving 28 reactions. Yields could be brought into agreement with the data within experimental error in nearly all cases.
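
The proposed peroxyl/alkoxyl scheme can be sketched as a minimal kinetic integration. The rate constants, concentrations, and time step below are invented (the study's actual mechanism comprised 28 reactions); the sketch only shows the structure of such a model: alkyl radicals combine with O2 to give peroxyl radicals, peroxyl radicals react to give alkoxyl radicals, and alkoxyl radicals disproportionate to carbonyl plus alcohol.

```python
# Forward-Euler integration of a toy 3-reaction oxidation scheme
# (illustrative rate constants and arbitrary units, not the paper's values):
#   R + O2    -> RO2                   (k1, peroxyl formation)
#   RO2 + RO2 -> RO + RO               (k2, alkoxyl formation)
#   RO + RO   -> carbonyl + alcohol    (k3, disproportionation)
k1, k2, k3 = 5.0, 1.0, 2.0
R, O2, RO2, RO, prod = 1.0, 10.0, 0.0, 0.0, 0.0
dt = 1e-4
for _ in range(100000):                 # integrate to t = 10
    r1 = k1 * R * O2
    r2 = k2 * RO2 * RO2
    r3 = k3 * RO * RO
    R    += dt * (-r1)
    O2   += dt * (-r1)
    RO2  += dt * (r1 - 2 * r2)
    RO   += dt * (2 * r2 - 2 * r3)
    prod += dt * 2 * r3                 # carbonyl + alcohol molecules
# carbon is conserved: R + RO2 + RO + prod stays at the initial 1.0
```

By the end of the run nearly all of the alkyl radical pool has passed through the peroxyl and alkoxyl stages into stable products, which is the qualitative behavior the 28-reaction model reproduces quantitatively.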

Charge separation occurs in a pair of tightly coupled chlorophylls at the heart of the photosynthetic reaction centers of both plants and bacteria. Recently it has been shown that quantum coherence can, in principle, enhance the efficiency of a solar cell working like a quantum heat engine. Here, we propose a biological quantum heat engine (BQHE) motivated by the Photosystem II reaction center (PSII RC) to describe the charge separation. Our model considers two charge-separation pathways, more than are typically considered in the published literature. We explore how these cross-couplings increase the current and power of the charge separation and discuss the effects of multiple pathways in terms of current and power. The robustness of the BQHE against charge recombination in the natural PSII RC and against environment-induced dephasing is also explored, and an extension from two pathways to multiple pathways is made. These results suggest that noise-induced quantum coherence helps to suppress the influence of acceptor-to-donor charge recombination, and that nature-mimicking architectures with engineered multiple pathways for charge separation might be better for artificial solar energy devices, considering the influence of environments.

During gaseous diffusion plant operations, conditions leading to the formation of flammable gas mixtures may occasionally arise. Currently, these could consist of the evaporative coolant CFC-114 and fluorinating agents such as F2 and ClF3. Replacement of CFC-114 with a non-ozone-depleting substitute is planned. Consequently, in the future, the substitute coolant must also be considered as a potential fuel in flammable gas mixtures. Two questions of practical interest arise: (1) can a particular mixture sustain and propagate a flame if ignited, and (2) what is the maximum pressure that can be generated by the burning (and possibly exploding) gas mixture, should it ignite? Experimental data on these systems, particularly for the newer coolant candidates, are limited. To assist in answering these questions, a mathematical model was developed to serve as a tool for predicting the potential detonation pressures and for estimating the composition limits of flammability for these systems, based on empirical correlations between gas mixture thermodynamics and flammability for known systems. The present model uses thermodynamic equilibrium to determine the reaction endpoint of a reactive gas mixture and uses detonation theory to estimate an upper bound to the pressure that could be generated upon ignition. The model described and documented in this report is an extended version of related models developed in 1992 and 1999.

Pain is a complex phenomenon not easily discerned from psychological, social, and environmental characteristics and is an oft-cited barrier to return to work for people experiencing low back pain (LBP). The purpose of this study was to evaluate a path-analytic mediation model to examine how motivational enhancement physiotherapy, which incorporates tenets of motivational interviewing, improves the physical functioning of patients with chronic LBP. Seventy-six patients with chronic LBP were recruited from the outpatient physiotherapy department of a government hospital in Hong Kong. The re-specified path-analytic model fit the data very well, χ²(3, N = 76) = 3.86, p = .57; comparative fit index = 1.00; root mean square error of approximation = 0.00. Specifically, results indicated that (a) using motivational interviewing techniques in physiotherapy was associated with increased working alliance with patients, (b) working alliance increased patients' outcome expectancy, and (c) greater outcome expectancy resulted in a reduction of subjective pain intensity and improvement in physical functioning. Change in pain intensity also directly influenced improvement in physical functioning. The effect of motivational enhancement therapy on physical functioning can be explained by social-cognitive factors such as motivation, outcome expectancy, and working alliance. The use of motivational interviewing techniques to increase outcome expectancy of patients and improve working alliance could further strengthen the impact of physiotherapy on rehabilitation outcomes of patients with chronic LBP.

A vertically pointing 3.2-cm radar is used to observe altostratus and cirrus clouds as they pass overhead. Radar reflectivities are used in combination with an empirical Z{sub i}-IWC (ice water content) relationship developed by Sassen (1987) to parameterize IWC, which is then integrated to obtain estimates of ice water path (IWP). The observed dataset is segregated into all-ice and mixed-phase periods using measurements of integrated liquid water path (LWP) detected by a collocated, dual-channel microwave radiometer. The IWP values for the all-ice periods are compared to measurements of infrared (IR) downward fluxes measured by a collocated narrowband (9.95-11.43 microns) IR radiometer, which results in scattergrams representing the observed dependence of IR fluxes on IWP. A two-stream model is used to calculate the infrared fluxes expected from ice clouds with boundary conditions specified by the actual clouds, and similar curves relating IWP and infrared fluxes are obtained. The model and observational results suggest that IWP is one of the primary controls on infrared thermal fluxes for ice clouds.
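
The retrieval chain described above (reflectivity to IWC via an empirical power law, then vertical integration to IWP) can be sketched as follows. The power-law coefficients and the reflectivity profile are illustrative placeholders of the usual Z-IWC form, not Sassen's (1987) fitted values.

```python
# Sketch of the retrieval chain: radar reflectivity -> IWC via a power
# law, then vertical integration -> IWP. Coefficients a, b and the
# profile are illustrative, not the fitted values used in the study.
def iwc_from_z(z_dbz, a=0.038, b=0.7):
    z_linear = 10.0 ** (z_dbz / 10.0)    # dBZ -> Z in mm^6 m^-3
    return a * z_linear ** b             # ice water content, g m^-3

def iwp(profile_dbz, dz_m):
    # rectangle-rule integration over equally spaced range gates
    return sum(iwc_from_z(z) for z in profile_dbz) * dz_m

profile = [-5.0, 0.0, 5.0, 3.0, -2.0]    # dBZ at successive range gates
total_iwp = iwp(profile, dz_m=75.0)      # g m^-2
```

The same two functions, with the study's actual coefficients and gate spacing, would reproduce the IWP time series that is compared against the IR radiometer fluxes.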

This paper employs path creation as a lens to follow the emergence of the Danish wind turbine cluster. Supplier competencies, regulations, user preferences and a market for wind power did not pre-exist; all had to emerge in a transformative manner involving multiple actors and artefacts. Competencies emerged through processes and mechanisms such as co-creation that implicated multiple learning processes. The process was not an orderly linear one, as emergent contingencies influenced the learning processes. An implication is that public policy to catalyse clusters cannot be based...

This paper presents the design of a model-based controller for the diesel engine air-path system. The controller is implemented based on a reduced-order model consisting of only pressure and power dynamics, chosen with practical concerns in mind. To deal with model uncertainties effectively, a sliding mode controller, which is robust to model uncertainties, is proposed for the air-path system. The control performance of the proposed control scheme is verified through simulation with a validated plant model of a 6,000 cc heavy-duty diesel engine.
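
A minimal sliding-mode sketch conveys the idea: drive a sliding variable (here, a pressure tracking error) to zero with a switching term large enough to overpower bounded model uncertainty. The first-order plant, gains, and disturbance below are invented and far simpler than the paper's reduced-order air-path model.

```python
import math

# Sliding-mode sketch for a first-order pressure dynamic
#   dx/dt = -a*x + b*u + d(t)
# a, b, the gain K, and the disturbance d(t) are invented; the paper's
# reduced-order air-path model is more elaborate.
a, b, K = 2.0, 1.0, 5.0
x, x_ref, dt = 0.0, 1.0, 0.001

def sat(s, eps=0.05):
    # boundary-layer saturation in place of sign(), to limit chattering
    return max(-1.0, min(1.0, s / eps))

for k in range(5000):                      # simulate 5 s
    s = x - x_ref                          # sliding surface
    u = (a * x) / b - (K / b) * sat(s)     # equivalent + switching control
    d = 0.3 * math.sin(0.01 * k)           # bounded disturbance, |d| < K
    x += dt * (-a * x + b * u + d)         # explicit Euler step
```

Because the switching gain K exceeds the disturbance bound, the error is confined to a thin boundary layer around the sliding surface regardless of the exact disturbance, which is the robustness property the paper relies on.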

This paper compares three candidate large-scale propagation path loss models for use over the entire microwave and millimeter-wave (mmWave) radio spectrum: the alpha–beta–gamma (ABG) model, the close-in (CI) free-space reference distance model, and the CI model with a frequency-weighted path loss exponent (CIF). Each of these models has been recently studied for use in standards bodies such as the 3rd Generation Partnership Project (3GPP) and for use in the design of fifth-generation wireless systems in urban macrocell, urban microcell, and indoor office and shopping mall scenarios. Here, we compare...
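
For reference, the CI and ABG models as usually published take the forms PL_CI = FSPL(f, 1 m) + 10n·log10(d) and PL_ABG = 10α·log10(d) + β + 10γ·log10(f/1 GHz). A sketch under those commonly used forms follows; the numeric parameters are illustrative, not fitted values from the paper.

```python
import math

def fspl_db(f_hz, d_m=1.0):
    # Friis free-space path loss in dB at distance d_m
    c = 299_792_458.0
    return 20.0 * math.log10(4.0 * math.pi * d_m * f_hz / c)

def ci_path_loss(f_hz, d_m, n):
    # close-in (CI) model: physically anchored 1 m free-space reference
    return fspl_db(f_hz, 1.0) + 10.0 * n * math.log10(d_m)

def abg_path_loss(f_hz, d_m, alpha, beta, gamma):
    # alpha-beta-gamma (ABG) model: floating intercept beta, distance
    # exponent alpha, frequency exponent gamma (frequency in GHz)
    return (10.0 * alpha * math.log10(d_m) + beta
            + 10.0 * gamma * math.log10(f_hz / 1e9))

pl_ci = ci_path_loss(28e9, 100.0, n=3.0)                        # ~121 dB
pl_abg = abg_path_loss(28e9, 100.0, alpha=3.0, beta=20.0, gamma=2.0)
```

The structural difference the paper examines is visible here: CI has one fitted parameter (n) tied to a physical 1 m anchor, while ABG has three free parameters with no physical anchor.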

In recent years several statistical theories have been developed concerning multistep direct (MSD) nuclear reactions. In addition, a whole class of semiclassical models that may be subsumed under the heading of 'generalized exciton models' is dominant in applications. These are basically MSD-type extensions on top of compound-like concepts. In this report the relationship between their underlying statistical MSD postulates is highlighted. A common framework is outlined that makes it possible to generate the various MSD theories by assigning statistical properties to different parts of the nuclear Hamiltonian. It is then shown that distinct forms of nuclear randomness are embodied in the mentioned theories. All these theories appear to be very similar at a qualitative level. In order to explain the high-energy tails and forward-peaked angular distributions typical of particles emitted in MSD reactions, it is imagined that the incident continuum particle stepwise loses its energy and direction in a sequence of collisions, thereby creating new particle-hole pairs in the target system. At each step emission may take place. The statistical aspect comes in because many continuum states are involved in the process. These are supposed to display chaotic behavior, the associated randomness assumption giving rise to important simplifications in the expression for MSD emission cross sections. This picture suggests that the mentioned MSD models can be interpreted as variants of essentially one and the same theory. However, this appears not to be the case. To show this, the usual MSD distinction within the composite reacting nucleus between the fast continuum particle and the residual core must be examined, together with the residual interactions of the leading particle with the residual system. This distinction will turn out to be crucial to the present analysis. 27 refs.; 5 figs.; 1 tab

HIV stigma is rooted in culture and, therefore, it is essential to investigate it within the context of culture. The objective of this study was to examine the interrelationships among individualism-collectivism, HIV stigma, and social network support. A social network study was conducted among 118 people living with HIV/AIDS in China who were infected through commercial plasma donation, a nonstigmatized behavior. The Individualism-Collectivism Interpersonal Assessment Inventory (ICIAI) was used to measure cultural norms and values in the context of three social groups: family members, friends, and neighbors. Path analyses revealed that (1) a higher level of family ICIAI was significantly associated with a higher level of HIV self-stigma (β = 0.32); (2) a higher level of friend ICIAI was associated with a lower level of self-stigma (β = -0.35); (3) neighbor ICIAI was associated with public stigma (β = -0.61); (4) self-stigma was associated with social support from neighbors (β = -0.27); and (5) public stigma was associated with social support from neighbors (β = -0.24). This study documents that HIV stigma may mediate the relationship between collectivist culture and social network support, providing an empirical basis for interventions to incorporate aspects of culture into HIV intervention strategies.

The storage and delivery system (SDS) stores the hydrogen isotopes and delivers them to the fuel injection system. Depleted uranium (DU) was chosen as the hydrogen isotope storage material. The hydrogen isotopes stored in the SDS are in the form of DU hydride, confined in primary and secondary containment within a glove box with an argon atmosphere. In this study, we performed a modeling study of the SDS; modeling is practically important because an experimental study requires comparatively more money and time. The hydrogen atomic ratio in DU hydride was estimated using two empirical equations that we formulated and reformulated to determine Pressure-Composition-Temperature (PCT) curves and the hydrogen atomic ratio in DU hydride. All parameters required to solve the two empirical equations were obtained from the experimental data, and the derived parameters were utilized in the numerical simulations. In the numerical simulations, the effects of pressure and temperature on both the hydriding and dehydriding reaction rates were confirmed.
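
PCT descriptions of metal hydrides typically rest on a van't Hoff plateau-pressure law, log10(P_eq) = A - B/T. The sketch below uses that standard form with illustrative coefficients and a deliberately crude isotherm shape; the study's own empirical equations and fitted parameters are not reproduced here.

```python
# van't Hoff sketch of hydride equilibrium: log10(P_eq/torr) = A - B/T.
# A, B, and the isotherm shape are illustrative assumptions, not the
# study's fitted empirical equations.
def plateau_pressure(T_K, A=9.28, B=4500.0):
    return 10.0 ** (A - B / T_K)      # plateau pressure, torr

def h_to_u_ratio(P_torr, T_K, x_max=3.0):
    # crude saturating PCT isotherm: the H/U atomic ratio approaches
    # x_max (UH3 stoichiometry) as pressure exceeds the plateau pressure
    P_eq = plateau_pressure(T_K)
    return x_max * P_torr / (P_torr + P_eq)
```

The temperature dependence captures the operating principle described above: heating raises the plateau pressure and favors dehydriding (delivery), while cooling lowers it and favors hydriding (storage).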

Decision making and game play in multiagent settings must often contend with behavioral models of other agents in order to predict their actions. One approach that reduces the complexity of the unconstrained model space is to group models that tend to be behaviorally equivalent. In this paper, we seek to further compress the model space by introducing an approximate measure of behavioral equivalence and using it to group models.

In this paper, we propose an optimization-based sparse learning approach to identify the set of most influential reactions in a chemical reaction network. This reduced set of reactions is then employed to construct a reduced chemical reaction mechanism, which is relevant to chemical interaction network modeling. The problem of identifying influential reactions is first formulated as a mixed-integer quadratic program, and then a relaxation method is leveraged to reduce the computational complexity.
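
One common relaxation of such a mixed-integer sparse-selection problem is an l1-penalized least-squares fit, solvable by proximal gradient descent (ISTA): the soft-threshold step drives the weights of non-influential reactions exactly to zero. The sensitivity-style matrix and data below are invented for illustration and do not come from the paper.

```python
# l1-relaxation sketch: select influential reactions by sparse regression,
# minimizing 0.5*||A w - y||^2 + lam*||w||_1 via ISTA (proximal gradient).
# A (a species-by-reaction sensitivity matrix) and y are invented.
A = [[1.0, 0.2, 0.1],
     [0.1, 1.0, 0.2],
     [0.2, 0.1, 1.0]]
y = [2.0, 0.3, 0.5]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def ista(A, y, lam=0.3, step=0.2, iters=1000):
    At = list(zip(*A))                                  # transpose of A
    w = [0.0] * len(A[0])
    for _ in range(iters):
        r = [p - t for p, t in zip(matvec(A, w), y)]    # residual A w - y
        g = matvec(At, r)                               # gradient A^T r
        w = [wi - step * gi for wi, gi in zip(w, g)]
        # soft-threshold: weak reactions are driven exactly to zero
        w = [max(abs(wi) - step * lam, 0.0) * (1.0 if wi >= 0 else -1.0)
             for wi in w]
    return w

w = ista(A, y)    # only the first reaction survives the penalty
```

On this toy data only the first weight remains nonzero, so the "reduced mechanism" would keep reaction 1 and drop the other two; the paper's mixed-integer formulation makes the same kind of selection with an explicit cardinality constraint.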

Both the Jones and Mueller matrices encounter difficulties when physically modeling mixed materials or rough surfaces due to the complexity of light-matter interactions. To address these issues, we derived a matrix called the paths correlation matrix (PCM), which is a probabilistic mixture of the Jones matrices of every light propagation path. Because the PCM is related to actual light propagation paths, it is well suited for physical modeling. Experiments were performed, and the reflection PCM of a mixture of polypropylene and graphite was measured. The PCM of the mixed sample was accurately decomposed into pure polypropylene's single reflection, pure graphite's single reflection, and depolarization caused by multiple reflections, which is consistent with the theoretical derivation. Reflection parameters of a rough surface can be calculated from the PCM decomposition, and the results fit well with the theoretical calculations provided by the Fresnel equations. These theoretical and experimental analyses verify that the PCM is an efficient way to physically model light-matter interactions.

Successful evacuations are critical to saving lives from future tsunamis. Pedestrian-evacuation modeling related to tsunami hazards primarily has focused on identifying areas and the number of people in these areas where successful evacuations are unlikely. Less attention has been paid to identifying evacuation pathways and population demand at assembly areas for at-risk individuals that may have sufficient time to evacuate. We use the neighboring coastal communities of Hoquiam, Aberdeen, and Cosmopolis (Washington, USA) and the local tsunami threat posed by Cascadia subduction zone earthquakes as a case study to explore the use of geospatial, least-cost-distance evacuation modeling for supporting evacuation outreach, response, and relief planning. We demonstrate an approach that uses geospatial evacuation modeling to (a) map the minimum pedestrian travel speeds to safety, the most efficient paths, and collective evacuation basins, (b) estimate the total number and demographic description of evacuees at predetermined assembly areas, and (c) determine which paths may be compromised due to earthquake-induced ground failure. Results suggest a wide range in the magnitude and type of evacuees at predetermined assembly areas and highlight parts of the communities with no readily accessible assembly area. Earthquake-induced ground failures could obstruct access to some assembly areas, cause evacuees to reroute to get to other assembly areas, and isolate some evacuees from relief personnel. Evacuation-modeling methods and results discussed here have implications and application to tsunami-evacuation outreach, training, response procedures, mitigation, and long-term land use planning to increase community resilience.
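
The least-cost-distance core of such evacuation modeling can be sketched as a multi-source Dijkstra search over a cost grid: cells at safe assembly areas seed the search, and the resulting distance field gives each cell's minimum travel time to safety. The grid, per-cell traversal costs, and safe-cell location below are invented.

```python
import heapq

# Multi-source Dijkstra sketch of least-cost-distance evacuation modeling.
# Each cell holds a traversal cost (e.g. seconds per cell, standing in for
# slope and land-cover corrections); safe cells seed the search, so
# dist[cell] is the minimum travel cost from that cell to any safe zone.
cost = [
    [1, 1, 4, 4],
    [1, 2, 4, 9],
    [1, 1, 1, 1],
]
safe = [(0, 0)]                          # assembly area(s)
rows, cols = len(cost), len(cost[0])

dist = {cell: 0.0 for cell in safe}
pq = [(0.0, cell) for cell in safe]
while pq:
    d, (r, c) = heapq.heappop(pq)
    if d > dist.get((r, c), float("inf")):
        continue                         # stale queue entry
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < rows and 0 <= nc < cols:
            nd = d + cost[nr][nc]        # cost to step into the neighbor
            if nd < dist.get((nr, nc), float("inf")):
                dist[(nr, nc)] = nd
                heapq.heappush(pq, (nd, (nr, nc)))
```

Dividing the distance field by an assumed walking speed yields the "minimum pedestrian travel speed to safety" maps described above, and recording each cell's best predecessor during the search recovers the most efficient paths and evacuation basins.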

The development of processes to produce fullerenes and carbon nanotubes has largely been empirical. Fullerenes were first discovered in the soot produced by laser ablation of graphite [1] and then in the soot of electric-arc evaporated carbon. Techniques and conditions for producing larger and larger quantities of fullerenes depended mainly on trial-and-error empirical variations of these processes, with attempts to scale them up by using larger electrodes and targets and higher power. Various concepts of how fullerenes and carbon nanotubes are formed were put forth, but very little was done based on the chemical kinetics of the reactions. This was mainly due to the complex mixture of species and the complex nature of conditions in the reactors. Temperatures in the reactors varied from several thousand degrees Kelvin down to near room temperature. There are hundreds of possible species, ranging from atomic carbon to large clusters of carbonaceous soot, and from metallic catalyst atoms to metal clusters and complexes of metals and carbon. Most of the chemical kinetics of the reactions and the thermodynamic properties of clusters and complexes have only been approximated. In addition, flow conditions in the reactors are transient or unsteady, and three dimensional, with steep spatial gradients of temperature and species concentrations. All these factors make computational simulations of reactors very complex and challenging. This article addresses the development of the chemical reactions involved in fullerene production and extends this to the production of carbon nanotubes by the laser ablation/oven process and by the electric arc evaporation process. In addition, the high-pressure carbon monoxide (HiPco) process is discussed. The article is in several parts. The first one addresses the thermochemical aspects of modeling and considers the development of chemical rate equations, estimates of reaction rates, and thermodynamic properties where they are available. The second part

This paper presents a path-analytic model showing the cause-and-effect relationships among various Information Systems (IS) planning variables for the banking sector in India. In recent years, there has been an increased awareness among banks of the potential of Information Technology (IT) and the use of information systems. Strategic information systems planning (SISP) becomes an important issue in the strategic use of IS. In India, banks have now started realizing the importance of SISP. In this study, 11 IS planning variables for the banking sector in India are examined and the influence of one over the other is investigated using path analysis. Data for the study are collected from 52 banks operating in India. The results of the study indicate that top management involvement in IS planning greatly influences the whole planning exercise. Moreover, top management involvement is higher when they foresee a greater future impact of IS. The study also highlights the need for and importance of user training in the banking sector. A change in the focus and orientation of user training will make users competent to conceive innovative IS applications.

Property changes occur in materials subjected to irradiation. The bulk of experimental data and associated empirical models are for isothermal irradiation. The form that these isothermal models take is usually closed-form expressions in terms...

The magnetic diagnostics subsystem of the LISA Technology Package (LTP) on board the LISA PathFinder (LPF) spacecraft includes a set of four tri-axial fluxgate magnetometers, intended to measure with high precision the magnetic field at their respective positions. However, their readouts do not provide a direct measurement of the magnetic field at the positions of the test masses, and hence an interpolation method must be designed and implemented to obtain the values of the magnetic field at these positions. However, such an interpolation process faces serious difficulties. Indeed, the size of the interpolation region is excessive for a linear interpolation to be reliable while, on the other hand, the number of magnetometer channels do not provide sufficient data to go beyond the linear approximation. We describe an alternative method to address this issue, by means of neural network algorithms. The key point in this approach is the ability of neural networks to learn from suitable training data representing the behaviour of the magnetic field. Despite the relatively large distance between the test masses and the magnetometers, and the insufficient number of data channels, we find that our artificial neural network algorithm is able to reduce the estimation errors of the field and gradient down to levels below 10%, a quite satisfactory result. Learning efficiency can be best improved by making use of data obtained in on-ground measurements prior to mission launch in all relevant satellite locations and in real operation conditions. Reliable information on that appears to be essential for a meaningful assessment of magnetic noise in the LTP.

Implementation of projects of new-generation nuclear power plants requires solving materials science and technological issues in the development of reactor materials. Melts of heavy metals (Pb, Bi and Pb-Bi), due to their nuclear and thermophysical properties, are the candidate coolants for fast reactors and accelerator-driven systems (ADS). In this study, α-, γ-, p-, n- and 3He-induced fission cross section calculations for the 209Bi target nucleus in high-energy regions for the (α,f), (γ,f), (p,f), (n,f) and (3He,f) reactions have been investigated using different fission reaction models. The Mamdouh Table, Sierk, Rotating Liquid Drop and Fission Path models of theoretical fission barriers in the TALYS 1.6 code have been used for the fission cross section calculations. The calculated results have been compared with the experimental data taken from the EXFOR database. TALYS 1.6 Sierk model calculations exhibit generally good agreement with the experimental measurements for all reactions used in this study.

The paper summarizes the results of the WASP study conducted for Slovenia. A thorough analysis shows that the model is applicable to the Slovenian power system. Parallel operation with the domestic ELBIVIM model is nevertheless recommended in order to extract the maximum benefits from both models. (author). 4 refs, 5 figs, 4 tabs.

The homogeneous charge compression ignition (HCCI) is one of the most promising engine processes to simultaneously reduce nitrogen oxide and soot emissions. However, its applicability is hindered by its relatively limited operating range. Designer fuels offer unique possibilities for tailoring evaporation and auto-ignition properties, offering a means to control and expand the HCCI operation range. The identification of HCCI-relevant fuel properties, as well as the definition of a new fuel index able to describe a fuel's suitability for HCCI, was required in order to develop such designer fuels. This paper discussed a numerical and experimental investigation of a large set of technical fuels covering a wide range of properties. The paper discussed mechanism development approaches, optimization of the lumped mechanism, and results. Zheng's 7-step reaction mechanism was successfully coupled with a genetic optimization algorithm and fitted to n-heptane ignition delay data. It was concluded that the presented coupled approach could improve the predictive quality of the model and demonstrate that the Zheng model was sufficiently elaborate to emulate the influence of temperature, pressure, exhaust gas recirculation and lambda on ignition. 8 refs., 1 tab., 3 figs.
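The coupling of a reduced mechanism to a genetic optimization algorithm can be sketched in miniature: a toy GA fits the two parameters of a single Arrhenius-type ignition-delay expression to synthetic "measurements". The rate law, parameter values, and GA settings are assumptions for illustration only; this is not Zheng's 7-step mechanism nor the actual n-heptane data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "measured" ignition delays from a one-step Arrhenius-type law,
# tau = A * exp(Ta / T).  A_true and Ta_true are invented for the sketch.
T = np.linspace(700.0, 1100.0, 20)       # temperature [K]
A_true, Ta_true = 1e-9, 15000.0
tau_obs = A_true * np.exp(Ta_true / T)

def fitness(p):
    A, Ta = p
    tau = A * np.exp(Ta / T)
    return -np.mean((np.log(tau) - np.log(tau_obs)) ** 2)   # -MSE in log space

# Minimal genetic algorithm: keep the best third, refill with mutated copies.
pop = np.column_stack([10.0 ** rng.uniform(-12, -6, 60),    # pre-exponential A
                       rng.uniform(5e3, 3e4, 60)])          # activation temp Ta
f_init = max(fitness(p) for p in pop)
for _ in range(200):
    order = np.argsort([fitness(p) for p in pop])[::-1]
    elite = pop[order[:20]]                                 # elitist selection
    children = elite[rng.integers(0, 20, 40)].copy()
    children[:, 0] *= 10.0 ** rng.normal(0.0, 0.1, 40)      # mutate A (in log10)
    children[:, 1] += rng.normal(0.0, 300.0, 40)            # mutate Ta [K]
    pop = np.vstack([elite, children])

best = max(pop, key=fitness)
```

Because the elite survive each generation unchanged, the best fitness is non-decreasing; in a real application the fitness would compare simulated ignition delays from the full lumped mechanism against experimental data.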

The author reviews literature on children's reactions to perceived failure and offers "learned helplessness" as a model to explain why a child who makes a mistake gives up. Suggestions for preventing these reactions are given. (Author/JMK)

Stochastic Reaction Networks (SRNs) are intended to describe the time evolution of interacting particle systems where one particle interacts with the others through a finite set of reaction channels. SRNs have been mainly developed to model biochemical reactions.

In order to model the liquid water transport in the porous materials used in polymer electrolyte membrane (PEM) fuel cells, pore network models are often applied. The presented model is a novel approach to further develop these models towards a percolation model based on the fiber structure rather than the pore structure. The developed algorithm determines the stable liquid water paths in the gas diffusion layer (GDL) structure and the transitions from these paths to the subsequent paths. The obtained water path network represents the basis for the calculation of the percolation process with low computational effort. Good agreement is found with experimental capillary pressure-saturation curves and with synchrotron liquid water visualization data from other literature sources. The oxygen diffusivity for the GDL with liquid water saturation at breakthrough reveals that the porosity is not a crucial factor for the limiting current density. An algorithm for condensation is included in the model, which shows that condensing water redirects the water path in the GDL, leading to improved oxygen diffusion through a decreased breakthrough pressure and a changed saturation distribution at breakthrough.

This presentation highlights the following: an overview of LD2-water reactions and their connections to research reactors with cold sources; some key features and ingredients of vapor explosions in general; an examination of the results of a 1970 experiment at the Grenoble Nuclear Research Center; and thermodynamic evaluations of the energetics of explosive LD2-D2O reactions. The presentation concentrates only on the technical aspects of LD2/LH2-water reactions; it is not intended to draw or imply safety-related conclusions for research reactors.

The use of silane (SiH4) as an effective ignitor and flame-stabilizing pilot fuel is well documented. A reliable chemical kinetic mechanism for predicting its behavior at the conditions encountered in the combustor of a SCRAMJET engine was developed. The effects of hydrogen addition on hydrocarbon ignition and flame stabilization, as a means of reducing lengthy ignition delays and reaction times, were studied. The ranges of applicability of chemical kinetic models of hydrogen-air combustors were also investigated. The CHARNAL computer code was applied to the turbulent reaction rate modeling.

The development of the climate and Earth system models has had a long history, starting with the building of individual atmospheric, ocean, sea ice, land vegetation, biogeochemical, glacial and ecological model components. The early researchers were much aware of the long-term goal of building the Earth system models that would go beyond what is usually included in the climate models by adding interactive biogeochemical interactions. In the early days, the progress was limited by computer capability, as well as by our knowledge of the physical and chemical processes. Over the last few decades, there has been much improved knowledge, better observations for validation and more powerful supercomputer systems that are increasingly meeting the new challenges of comprehensive models. Some of the climate model history will be presented, along with some of the successes and difficulties encountered with present-day supercomputer systems.

Reactions of photoexcited molecules, ions, and radicals in condensed-phase environments involve non-adiabatic dynamics over coupled electronic surfaces. We focus on how local environmental symmetries can affect non-adiabatic coupling between excited electronic states and thus influence, in a possibly controllable way, the outcome of photoexcited reactions. Semi-classical and mixed quantum-classical non-adiabatic molecular dynamics methods, together with semi-empirical excited-state potentials, are used to probe the dynamical mixing of electronic states in different environments, from molecular clusters to simple liquids and solids, and photoexcited reactions in complex reaction environments such as zeolites.

A review is presented of models for the depolarization, caused by scattering from raindrops and ice crystals, that limits the performance of dual-polarized satellite communication systems at frequencies above 10 GHz. The physical mechanisms of depolarization as well as theoretical formulations and empirical data are examined. Three theoretical models, the transmission, attenuation-derived, and scaling models, are described and their relative merits are considered.

One of the major challenges in developing predictive models of surface-mediated pollutant formation and fuel combustion is the construction of reliable reaction kinetic mechanisms and models. While the homogeneous, gas-phase chemistry of various light fuels such as hydrogen and methane is relatively well known, large uncertainties exist in the reaction paths of surface-mediated reaction mechanisms for even these very simple species. To date, no detailed kinetic consideration of the surface mechanisms of formation of complex organics such as PCDD/F has been developed. In addition to the complexity of the mechanism, a major difficulty is the lack of reaction kinetic parameters (pre-exponential factor and activation energy) for surface reactions. Consequently, numerical studies of the surface-mediated formation of PCDD/F have often incorporated only a few reactions. We report the development of a numerical multiple-step surface model based on experimental data for the surface-mediated (5% CuO/SiO2) conversion of 2-monochlorophenol (2-MCP) to PCDD/F under pyrolytic or oxidative conditions. A reaction kinetic model of the catalytic conversion of 2-MCP on the copper oxide catalyst under pyrolytic conditions was developed based on a detailed multistep surface reaction mechanism developed in our laboratory. The performance of the chemical model is assessed by comparing the numerical predictions with experimental measurements. SURFACE CHEMKIN (version 3.7.1) software was used for modeling. Our results confirm the validity of the previously published mechanism of the reaction and provide new insight into PCDD/F formation in combustion processes. This model successfully explains the high yields of PCDD/F at low temperatures that cannot be explained using a purely gas-phase model.

Reaction systems have been introduced in the 1970s to model biochemical systems. Nowadays their range of applications has increased and they are fruitfully used in different fields. The concept is simple: some chemical species react, the set of chemical reactions forms a graph and a rate function...... is associated with each reaction. Such functions describe the speed of the different reactions, or their propensities. Two modelling regimes are then available: the evolution of the different species concentrations can be deterministically modelled through a system of ODEs, while the counts of the different species...... at a certain time are stochastically modelled by means of a continuous-time Markov chain. Our work concerns primarily stochastic reaction systems, and their asymptotic properties. In Paper I, we consider a reaction system with intermediate species, i.e. species that are produced and fast degraded along a path...
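Both modelling regimes can be made concrete on a toy birth-death network: the deterministic ODE dx/dt = k1 − k2·x predicts an equilibrium of k1/k2, while Gillespie's stochastic simulation algorithm realizes the continuous-time Markov chain whose counts fluctuate around that value. The network and rate constants below are illustrative, not taken from the cited work.

```python
import numpy as np

rng = np.random.default_rng(2)

# Gillespie SSA for a toy birth-death network:
#   0 -> X at rate k1,   X -> 0 at rate k2*x.
# The deterministic ODE dx/dt = k1 - k2*x equilibrates at k1/k2 = 10; the CTMC
# fluctuates around it (the stationary distribution is Poisson with mean k1/k2).
k1, k2 = 10.0, 1.0

def ssa(t_end, x0=0):
    t, x = 0.0, x0
    while t < t_end:
        a0 = k1 + k2 * x                  # total propensity (always > 0 here)
        t += rng.exponential(1.0 / a0)    # waiting time to the next reaction
        if rng.random() < k1 / a0:        # pick which reaction fires
            x += 1
        else:
            x -= 1
    return x                              # state just after passing t_end

samples = [ssa(50.0) for _ in range(200)]
mean_x = float(np.mean(samples))
```

Averaged over many realizations, the stochastic counts recover the deterministic equilibrium of 10 molecules.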

Interactive dynamic influence diagrams (I-DIDs) are graphical models for sequential decision making in uncertain settings shared by other agents. Algorithms for solving I-DIDs face the challenge of an exponentially growing space of behavioral models ascribed to other agents over time. Previous ap...

Given the shrinking proportion of agricultural output and the growing mobility of the labor force in China, how agricultural labor productivity develops has become an increasingly attractive topic for researchers and policy makers. This study aims to depict the development trajectory of agricultural labor productivity in China after its WTO entry. Based on balanced panel data covering 287 Chinese prefectures from 2000 to 2013, this study applies the Latent Growth Curve Model (LGCM) and finds that agricultural labor productivity follows a piecewise growth path with two breaking points, in the years 2004 and 2009. This may stem from exogenous stimulus, such as supporting policies launched in the breaking years. Further statistical analysis shows an expanding gap in agricultural labor productivity among different Chinese prefectures.
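A piecewise growth path with known breaking points can be estimated with a linear-spline regression in which "extra slope" terms switch on after each break, a simplified stand-in for the latent growth curve machinery. The series below is synthetic with invented slopes, not the study's prefecture panel.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic productivity series (NOT the study's data) with known breaking
# points in 2004 and 2009 and segment slopes 0.2, 0.5 and 0.3.
years = np.arange(2000, 2014)
true = np.piecewise(years.astype(float),
                    [years < 2004, (years >= 2004) & (years < 2009), years >= 2009],
                    [lambda y: 1.0 + 0.2 * (y - 2000),
                     lambda y: 1.8 + 0.5 * (y - 2004),
                     lambda y: 4.3 + 0.3 * (y - 2009)])
y = true + rng.normal(0.0, 0.05, len(years))

# Design matrix: intercept, base slope, and extra-slope regressors that switch
# on at each breaking point (a linear-spline parameterization of the growth path).
X = np.column_stack([np.ones(len(years)),
                     years - 2000,
                     np.clip(years - 2004, 0, None),
                     np.clip(years - 2009, 0, None)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
slopes = (beta[1], beta[1] + beta[2], beta[1] + beta[2] + beta[3])
```

The fitted segment slopes recover the generating values, illustrating how the breakpoint terms capture the change in growth rate at 2004 and 2009.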

Experiments using gas mixtures of O2, C2H6 or C2H4 and CH4 or He have been carried out with a Li/MgO catalyst using a well-mixed reaction system which show that the total oxidation products, CO and CO2, are formed predominantly from ethylene, formed in the oxidative coupling of methane. It is

A computational model for radio wave propagation through tree orchards is presented. Trees are modeled as collections of branches, geometrically approximated by cylinders, whose dimensions are determined on the basis of measurements in a cherry orchard. Tree canopies are modeled as dielectric spheres of appropriate size. A single row of trees was modeled by creating copies of a representative tree model positioned on top of a rectangular, lossy dielectric slab that simulated the ground. The complete scattering model, including soil and trees, enhanced by periodicity conditions corresponding to the array, was characterized via a commercial computational software tool for simulating the wave propagation by means of the Finite Element Method. The attenuation of the simulated signal was compared to measurements taken in the cherry orchard, using two ZigBee receiver-transmitter modules. Near the top of the tree canopies (at 3 m), the predicted attenuation was close to the measured one, just slightly underestimated. However, at 1.5 m the solver underestimated the measured attenuation significantly, especially when leaves were present and as distances grew longer. This suggests that the effects of scattering from neighboring tree rows need to be incorporated into the model. However, complex geometries result in ill-conditioned linear systems that affect the solver's convergence.

The reactions occurring in disaccharide-casein reaction mixtures during heating at 120 degrees C and pH 6.8 were studied. The existence of two main degradation routes was established: (1) isomerisation of the aldose sugars lactose and maltose into their ketose isomers lactulose and maltulose,

The Maillard reaction is an important reaction in food industry. It is responsible for the formation of colour and aroma, as well as toxic compounds as the recent discovered acrylamide. The knowledge of kinetic parameters, such as rate constants and activation energy, is necessary to predict its

For mild steel, after significant plastic deformation in one direction, a subsequent deformation in an orthogonal direction shows a typical stress overshoot compared to monotonic deformation. This phenomenon is investigated experimentally and numerically on a DC06 material. Two models that incorporate the observed overshoot are compared. In the Teodosiu-Hu model, pre-strain influences the rate of kinematic hardening by a rather complex set of evolution equations. The shape of the elastic doma...

We study the Thirring and chiral-invariant Gross-Neveu (CGN) models using the functional integral method. By introducing an auxiliary vector field we disclose a relation with two-dimensional gauge theories coupled to fermions and then extend a technique based on a chiral change in the functional variables to study purely fermionic models. We obtain the exact Klaiber solution for the massless Thirring model (for spin 1/2) in a very simple way and we then extend our technique to investigate the CGN model. We show the factorization of a free fermionic part at the level of Green functions on very general grounds. We then impose certain restrictions on the behavior of the fields - which render our treatment exact only in the zero winding number sector, but allow the computation of the U(1) part of the CGN Green functions exactly, showing, in particular, its complete decoupling from the color part and the almost long-range order behavior in the infrared region. In our approach, the non-triviality of the jacobian arising from the chiral transformation - directly related to the topological density and the axial anomaly - appears to be crucial for the functional integral treatment of these models. (orig.)

A kinetic model of the Boltzmann equation for chemical reactions without energy barrier is considered here with the aim of evaluating the reaction rate and characterizing the transport coefficient of shear viscosity for the reactive system. The Chapman-Enskog solution of the Boltzmann equation is used to compute the chemical reaction effects, in a flow regime for which the reaction process is close to the final equilibrium state. Some numerical results are provided illustrating that the considered chemical reaction without energy barrier can induce an appreciable influence on the reaction rate and on the transport coefficient of shear viscosity.

Technological literacy defines a competitive vision for technology education. Working together with competitive supremacy, technological literacy shapes the actions of technology educators. Rationalised by the dictates of industry, technological literacy was constructed as a product of the marketplace. There are many models that visualise…

In an underwater acoustic channel, the propagation conditions are known to vary in time, causing the received signal strength to deviate from the nominal value predicted by a deterministic propagation model. To facilitate large-scale system design in such conditions (e.g., power allocation), we have developed a statistical propagation model in which the transmission loss is treated as a random variable. By applying repetitive computation of the acoustic field, using ray tracing for a set of varying environmental conditions (surface height, wave activity, small node displacements around nominal locations, etc.), an ensemble of transmission losses is compiled and later used to infer the statistical model parameters. A reasonable agreement is found with a log-normal distribution, whose mean obeys a log-distance law, and whose variance appears to be constant for a certain range of inter-node distances in a given deployment location. The statistical model is deemed useful for higher-level system planning, where simulation is needed to assess the performance of candidate network protocols under various resource allocation policies, i.e., to determine the transmit power and bandwidth allocation necessary to achieve a desired level of performance (connectivity, throughput, reliability, etc.).
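The inference step — fitting a log-distance mean law and a constant variance to an ensemble of transmission losses — can be sketched as follows. The reference distance, spreading factor, and standard deviation are made-up illustration values, and the ensemble is drawn directly from the assumed model rather than from repeated ray tracing.

```python
import numpy as np

rng = np.random.default_rng(4)

# Statistical model in dB: TL = TL0 + 10*k0*log10(d/d0) + Gaussian noise,
# i.e. log-normal in linear units.  d0, TL0, k0 and sigma are assumed values.
d0, TL0, k0, sigma = 100.0, 60.0, 1.5, 4.0
d = rng.uniform(200.0, 2000.0, 5000)               # inter-node distances [m]
TL = TL0 + 10.0 * k0 * np.log10(d / d0) + rng.normal(0.0, sigma, d.size)

# Infer the model parameters back from the ensemble: linear regression of TL
# on 10*log10(d/d0) recovers the mean law; the residual spread estimates sigma.
X = np.column_stack([np.ones_like(d), 10.0 * np.log10(d / d0)])
beta, *_ = np.linalg.lstsq(X, TL, rcond=None)
resid_std = float(np.std(TL - X @ beta))
```

With a ray-tracing ensemble in place of the synthetic draw, the same regression yields the spreading factor, reference loss, and dB-domain standard deviation used for system-level planning.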

Today, students are expected to access, analyse and synthesise information, and work cooperatively. Their learning environment, therefore, should be equipped with appropriate tools and materials, and teachers should have instructional abilities to use them effectively. This study aims to propose a model to improve teachers' instructional abilities…

Non-thermal ions, especially the suprathermal ones, are known to make a dominant contribution to a number of important physical processes such as the fusion reactivity in controlled fusion, the ion heat flux, and, in the case of a tokamak, the ion bootstrap current. Evaluating the deviation from a local Maxwellian distribution of these non-thermal ions can be a challenging task in the context of a global plasma fluid model that evolves the plasma density, flow, and temperature. Here we describe a hybrid model for coupling such constrained kinetic calculation to global plasma fluid models. The key ingredient is a non-perturbative treatment of the tail ions where the ion Knudsen number approaches or surpasses order unity. This can be sharply contrasted with the standard Chapman-Enskog approach, which relies on a perturbative treatment that is frequently invalidated. The accuracy of our coupling scheme is controlled by the precise criteria for matching the non-perturbative kinetic model to perturbative solutions in both configuration space and velocity space. Although our specific application examples will be drawn from laboratory controlled fusion experiments, the general approach is applicable to space and astrophysical plasmas as well. Work supported by DOE.

In the current context of intensified moves towards educational deregulation, the configuration of the Italian middle school and its relationship to education governance is an interesting case. Historically, it represents a unique example of the successful "decision-making" model of the welfarist era. Despite some internal constraints,…

We propose a model reduction method that involves sequential application of clustering of linkage classes and Kron reduction. This approach is specifically useful for chemical reaction networks in which each linkage class has a small number of reactions. In case of detailed balanced chemical reaction

The availability of both global and regional elevation datasets acquired by modern remote sensing technologies provides an opportunity to significantly improve the accuracy of stream mapping, especially in remote, hard-to-reach regions. Stream extraction from digital elevation models (DEMs) is based on computation of flow accumulation, a summary parameter that poses performance and accuracy challenges when applied to large, noisy DEMs generated by remote sensing technologies. Robust handling of DEM depressions is essential for reliable extraction of connected drainage networks from this type of data. The least-cost flow routing method implemented in GRASS GIS as the module r.watershed was redesigned to significantly improve its speed, functionality, and memory requirements and make it an efficient tool for stream mapping and watershed analysis from large DEMs. To evaluate its handling of large depressions, typical for remote sensing derived DEMs, three different methods were compared: traditional sink filling, the impact reduction approach, and least-cost path search. The comparison was performed using the Shuttle Radar Topographic Mission (SRTM) and Interferometric Synthetic Aperture Radar for Elevation (IFSARE) datasets covering central Panama at 90 m and 10 m resolutions, respectively. The accuracy assessment was based on ground control points acquired by GPS and reference points digitized from Landsat imagery along segments of selected Panamanian rivers. The results demonstrate that the new implementation of the least-cost path method is significantly faster than the original version, can cope with massive datasets, and provides the most accurate results in terms of stream locations validated against reference points.
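One common priority-queue strategy in this family is "priority-flood" depression filling: cells are visited from the DEM edge in order of increasing elevation, and every inner cell is lifted to the lowest spill level on its way out. The sketch below illustrates that idea on a toy grid; it is not the actual r.watershed implementation, and the DEM values are invented.

```python
import heapq

def priority_flood(dem):
    """Fill DEM depressions by visiting cells edge-inward in elevation order."""
    rows, cols = len(dem), len(dem[0])
    filled = [row[:] for row in dem]
    seen = [[False] * cols for _ in range(rows)]
    pq = []
    for r in range(rows):
        for c in range(cols):
            if r in (0, rows - 1) or c in (0, cols - 1):   # seed with the edge
                seen[r][c] = True
                heapq.heappush(pq, (filled[r][c], r, c))
    while pq:
        z, r, c = heapq.heappop(pq)                        # lowest frontier cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not seen[nr][nc]:
                seen[nr][nc] = True
                filled[nr][nc] = max(filled[nr][nc], z)    # lift to spill level
                heapq.heappush(pq, (filled[nr][nc], nr, nc))
    return filled

# A 5x5 toy DEM: a pit (elevation 1) whose lowest rim cell (4) sets the fill level.
dem = [[2, 2, 2, 2, 2],
       [2, 9, 9, 9, 2],
       [2, 9, 1, 4, 2],
       [2, 9, 9, 9, 2],
       [2, 2, 2, 2, 2]]
out = priority_flood(dem)
```

The pit is raised exactly to the elevation of its lowest spill point (4), so a connected drainage path to the edge exists without over-filling the surrounding terrain.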

We present a method to map the full equilibrium distribution of the primitive-path (PP) length, obtained from multi-chain simulations of polymer melts, onto a single-chain mean-field ‘target’ model. Most previous works used the Doi–Edwards tube model as a target. However, the average number of monomers per PP segment, obtained from multi-chain PP networks, has consistently shown a discrepancy of a factor of two with respect to tube-model estimates. Part of the problem is that the tube model neglects fluctuations in the lengths of PP segments, the number of entanglements per chain and the distribution of monomers among PP segments, while all these fluctuations are observed in multi-chain simulations. Here we use a recently proposed slip-link model, which includes fluctuations in all these variables as well as in the spatial positions of the entanglements. This turns out to be essential to obtain qualitative and quantitative agreement with the equilibrium PP-length distribution obtained from multi-chain simulations. By fitting this distribution, we are able to determine two of the three parameters of the model, which govern its equilibrium properties. This mapping is executed for four different linear polymers and for different molecular weights. The two parameters are found to depend on chemistry, but not on molecular weight. The model predicts a constant plateau modulus minus a correction inversely proportional to molecular weight. The value for well-entangled chains, with the parameters determined ab initio, lies in the range of experimental data for the materials investigated. (paper)

National Aeronautics and Space Administration — Reaction wheel disturbances are some of the largest sources of noise on sensitive telescopes. Such wheel-induced mechanical noises are not well characterized....

National Aeronautics and Space Administration — Reaction wheel mechanical noise is one of the largest sources of disturbance forcing on space-based observatories. Such noise arises from mass imbalance, bearing...

Alzheimer disease (AD) is a medically and financially overwhelming condition, and incidence rates are expected to triple by 2050. Despite decades of research in animal models of AD, the disease remains incompletely understood, with few treatment options. This review summarizes historical and current AD research efforts, with emphasis on the disparity between preclinical animal studies and the reality of human disease and how this has impacted clinical trials. Ultimately, we provide a mechanism for shifting the focus of AD research away from animal models to focus primarily on human biology as a means to improve the applicability of research findings to human disease. Implementation of these alternatives may hasten development of improved strategies to prevent, detect, ameliorate, and possibly cure this devastating disease.

that with an oil price at 100 $/barrel, a CO2 price at 40 €/ton and the assumed penetration of hydrogen in the transport sector, it is economically optimal to cover more than 95% of the primary energy consumption for electricity and district heat by renewables in 2050. When the transport sector is converted......: A model for analyses of the electricity and CHP markets in the Baltic Sea Region. 〈www.Balmorel.com〉; 2001. [1

The purpose of this study was to explore how and under what conditions two different leadership roles are able to facilitate an organizational climate that supports creativity. The study was conducted in a leading professional service firm. The introduced hypotheses were tested by means of a structural equation model. Findings indicate that the leadership roles are conceptually different and that organizational structure is important for leaders’ ability to create a climate ...

, this paper evaluates plant-wide modelling of precipitation reactions using a generic approach integrated within activated sludge and anaerobic models. Preliminary results of anaerobic digester sludge in batch system suggest that the model is able to simulate the dynamics of precipitation reactions. Kinetic...

A thermokinetic model coupling finite-element heat transfer with transformation kinetics is developed to determine the effect of deposition patterns on the phase-transformation kinetics of laser powder deposition (LPD) process of a hot-work tool steel. The finite-element model is used to define the temperature history of the process used in an empirical-based kinetic model to analyze the tempering effect of the heating and cooling cycles of the deposition process. An area is defined to be covered by AISI H13 on a substrate of AISI 1018 with three different deposition patterns: one section, two section, and three section. The two-section pattern divides the area of the one-section pattern into two sections, and the three-section pattern divides that area into three sections. The results show that dividing the area under deposition into smaller areas can influence the phase transformation kinetics of the process and, consequently, change the final hardness of the deposited material. The two-section pattern shows a higher average hardness than the one-section pattern, and the three-section pattern shows a fully hardened surface without significant tempered zones of low hardness. To verify the results, a microhardness test and scanning electron microscope were used.

It is often assumed that in the historical transformation to modern industrial society, the integration of women into the economy occurred everywhere as a three-phase process: in pre-modern societies, the extensive integration of women into societal production; then, their wide exclusion with the shift to industrial society; and finally, their re-integration into paid work during the further course of modernization. Results from the author's own international comparative study of the historical development of the family and the economic integration of women have shown that this was decidedly not the case even for western Europe. Hence the question arises: why is there such historical variation in the development and importance of the housewife model of the male breadwinner family? In the article, an explanation is presented. It is argued that the historical development of the urban bourgeoisie was especially significant for the historical destiny of this cultural model: the social and political strength of the urban bourgeoisie had central societal importance in the imposition of the housewife model of the male breadwinner family as the dominant family form in a given society. In this, it is necessary to distinguish between the imposition of the breadwinner marriage at the cultural level on the one hand, and at the level of social practice in the family on the other.

In this work, we develop a polycrystal mean-field constitutive model based on an elastic–plastic self-consistent (EPSC) framework. In this model, we incorporate recently developed subgrain models for dislocation density evolution with thermally activated slip, twin activation via statistical stress fluctuations, reoriented twin domains within the grain and associated stress relaxation, twin boundary hardening, and de-twinning. The model is applied to a systematic set of strain path change tests on pure beryllium (Be). Under the applied deformation conditions, Be deforms by multiple slip modes and deformation twinning and thereby provides a challenging test for model validation. With a single set of material parameters, determined using the flow-stress vs. strain responses during monotonic testing, the model predicts well the evolution of texture, lattice strains, and twinning. With further analysis, we demonstrate the significant influence of internal residual stresses on (1) the flow stress drop when reloading from one path to another, (2) deformation twin activation, (3) de-twinning during a reversal strain path change, and (4) the formation of additional twin variants during a cross-loading sequence. The model presented here can, in principle, be applied to other metals, deforming by multiple slip and twinning modes under a wide range of temperature, strain rate, and strain path conditions

When a chemical system is submitted to high energy sources (UV, ionizing radiation, plasma sparks, etc.), as is expected to be the case of prebiotic chemistry studies, a plethora of reactive intermediates could form. If oxygen is present in excess, carbon dioxide and water are the major products. More interesting is the case of reducing conditions where synthetic pathways are also possible. This article examines the theoretical modeling of such systems with random-generated chemical networks. Four types of random-generated chemical networks were considered that originated from a combination of two connection topologies (viz., Poisson and scale-free) with reversible and irreversible chemical reactions. The results were analyzed taking into account the number of the most abundant products required for reaching 50% of the total number of moles of compounds at equilibrium, as this may be related to an actual problem of complex mixture analysis. The model accounts for multi-component reaction systems with no a priori knowledge of reacting species and the intermediates involved if system components are sufficiently interconnected. The approach taken is relevant to an earlier study on reactions that may have occurred in prebiotic systems where only a few compounds were detected. A validation of the model was attained on the basis of results of UVC and radiolytic reactions of prebiotic mixtures of low molecular weight compounds likely present on the primeval Earth.
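The summary statistic used in this analysis — the number of the most abundant products required to reach 50% of the total number of moles at equilibrium — is straightforward to compute from a composition vector. The mixture below is a hypothetical example, not one of the prebiotic mixtures studied.

```python
import numpy as np

def n_species_for_half(moles):
    """Count how many of the most abundant species reach 50% of total moles."""
    x = np.sort(np.asarray(moles, dtype=float))[::-1]   # descending abundance
    frac = np.cumsum(x) / x.sum()                       # cumulative mole fraction
    return int(np.searchsorted(frac, 0.5) + 1)

# Hypothetical equilibrium composition (moles of 8 products)
mix = [40.0, 25.0, 10.0, 8.0, 7.0, 5.0, 3.0, 2.0]
n = n_species_for_half(mix)
```

For this composition the two most abundant products already account for 65% of the moles, so the statistic equals 2; a highly interconnected network that concentrates mass in few products gives small values of this count.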

The purpose of our study was to determine the prevalence of exercise dependence (EXD) among college students and to investigate the role of EXD and gender in exercise behavior and eating disorders. Excessive exercise can become an addiction known as exercise dependence. In our population of 517 college students, 3.3% were at risk for EXD and 8% were at risk for an eating disorder. We used path analysis, the simplest case of structural equation modeling (SEM), to investigate the role of EXD and exercise behavior in eating disorders. We observed a small direct effect from gender to eating disorders. In females we observed significant direct effects of exercise behavior (r = −0.17, p = 0.009) and EXD (r = 0.34, p < 0.001) on eating pathology. We also observed an indirect effect of exercise behavior on eating pathology (r = 0.16) through EXD (r = 0.48, r2 = 0.23, p < 0.001). In females the total variance of eating pathology explained by the SEM model was 9%. In males we observed a direct effect of EXD (r = 0.23, p < 0.001) on eating pathology. We also observed an indirect effect of exercise behavior on eating pathology (r = 0.11) through EXD (r = 0.49, r2 = 0.24, p < 0.001). In males the total variance of eating pathology explained by the SEM model was 5%.
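The indirect effects quoted above follow the standard product-of-coefficients rule of path analysis: the indirect effect along a mediated route is the product of the path coefficients on each leg. This can be checked directly against the reported figures:

```python
# Indirect effect = (path to mediator) * (path from mediator to outcome).
def indirect_effect(path_to_mediator, path_from_mediator):
    return path_to_mediator * path_from_mediator

# Females: exercise -> EXD (0.48), EXD -> eating pathology (0.34)
females = indirect_effect(0.48, 0.34)
# Males: exercise -> EXD (0.49), EXD -> eating pathology (0.23)
males = indirect_effect(0.49, 0.23)
print(round(females, 2), round(males, 2))   # -> 0.16 0.11, matching the reported values
```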

Chemical models of aqueous geochemical systems are usually built on the concept of thermodynamic equilibrium. Though many elementary reactions in a geochemical system may be close to equilibrium, others may not be. Chemical models of aqueous fluids should take into account that many aqueous redox reactions are among the latter. The behavior of redox reactions may critically affect migration of certain radionuclides, especially the actinides. In addition, the progress of reaction in geochemical systems requires thermodynamic driving forces associated with elementary reactions not at equilibrium, which are termed irreversible reactions. Both static chemical models of fluids and dynamic models of reacting systems have been applied to a wide spectrum of problems in water-rock interactions. Potential applications in nuclear waste disposal range from problems in geochemical aspects of site evaluation to those of waste-water-rock interactions. However, much further work in the laboratory and the field will be required to develop and verify such applications of chemical modeling

This experiment studied the electrochemical reduction of cadmium as influenced by processing time, concentration, current strength, and type of electrode plate. The aim was to determine these influences and to obtain a mathematical model for the rate of cadmium reduction, the reaction rate constant, and the reaction order as functions of processing time, concentration, current strength, and electrode plate type. The results indicate an optimal processing time of 30 minutes with a copper electrode plate and 20 minutes with an aluminium electrode plate. The current strength used in the electrochemical process was only 0.8 ampere, and the effective concentration was 5.23 mg/l. The aluminium electrode plate was the most effective for reduction of cadmium from the waste, with a reduction efficiency of 98%. (author)
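The rate-constant quantity mentioned above can be illustrated with simple kinetics. Assuming, purely for illustration, first-order behavior C(t) = C0·exp(−kt) (the paper estimates the actual order; it is not given here), the 98% removal figure fixes k:

```python
import math

def first_order_k(c0, c, t):
    """Rate constant from C(t) = C0*exp(-k*t), given initial and final concentrations."""
    return math.log(c0 / c) / t

# 98% reduction of a 5.23 mg/l solution over the 20-minute aluminium-electrode run:
k = first_order_k(5.23, 5.23 * 0.02, 20.0)
print(round(k, 3))   # -> 0.196 (per minute)
```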

The technology acceptance model (TAM) has been well-known for decades. However, the global adoption of the Internet creates new interest in utilizing TAM in e-commerce and the post-consumption intention, especially in emerging markets. Data were collected from 758 online customers via a web-based survey in Vietnam. A particular contribution of the results is that perceived usefulness, perceived ease of use, fairness, trust and the quality of the customer interface have direct or indirect impacts on customer satisfaction and customer loyalty. Moreover, in emerging markets, trust was found to be the strongest factor contributing to customer satisfaction and leading to customer loyalty.

The article considers the problem of automating the formation of large complex parts, products and structures, especially for unique or small-batch objects produced by additive technology [1]. The results of research into the optimal design of a robotic complex, its modes of operation, and the structure of its control were used to establish the technical requirements for the manufacturing process and for the design and installation of the robotic complex. Research on virtual models of the robotic complexes allowed the main directions of design improvement to be defined, along with the main goal of testing the manufactured prototype: checking the positioning accuracy of the working part.

The study introduces a model in which attachment patterns serve as predictors, empathy and fear of death as mediators, and ageism as the predicted variable. Data were collected from young adults (N = 440). Anxious attachment was directly and positively associated with ageism, and also indirectly and positively through the mediator "fear of death." Avoidant attachment was indirectly and negatively associated with ageism through the mediator "empathy." It is suggested that interventions for reducing ageist attitudes among younger adults should focus on existential fears, as well as on empathic ability, according to the attachment tendencies of these individuals.

The U.S. Geological Survey and its partners have collaborated to provide an innovative, advanced 3-dimensional hydrogeologic framework which was used in a groundwater model designed to test water management scenarios. Principal aquifers for the area mostly consist of Quaternary alluvium and Tertiary-age fluvial sediments which are heavily used for irrigation, municipal and environmental purposes. This strategy used airborne electromagnetic (AEM) surveys, validated through sensitivity analysis of geophysical and geological ground truth, to provide new geologic interpretation characterizing the hydrogeologic framework in the area. The base of aquifer created through this work leads to new interpretations of saturated thickness and groundwater connectivity to the surface water system. The current version of the groundwater model, which uses the advanced hydrogeologic framework, shows a distinct change in flow path orientation, timing and amount of base flow to the streams of the area. Ongoing development of the hydrogeologic framework includes subdivision of the aquifers into new hydrostratigraphic units based on analysis of geophysical and lithologic characteristics, which will be incorporated into future groundwater models. The hydrostratigraphic units are further enhanced by Nuclear Magnetic Resonance (NMR) measurements to characterize aquifers. NMR measures the free water in the aquifer in situ, allowing for a determination of hydraulic conductivity. NMR hydraulic conductivity values will be mapped to the hydrostratigraphic units, which in turn are incorporated into the latest versions of the groundwater model. The addition of innovative, advanced 3-dimensional hydrogeologic frameworks incorporating AEM and NMR has a definite advantage over traditional frameworks for groundwater modeling. These groundwater models represent the natural system at a level of reality not achievable by other methods, which leads to greater confidence in the

The NUCTRAN model has been applied to the Swedish KBS-3 nuclear waste repository concept, in which radionuclides migrate through various barriers and along various pathways. The escape of the nuclides from the canister occurs through a small hole, which controls the release of nuclides from the repository. NUCTRAN is a useful tool to calculate the nonstationary transport in a repository for high-level nuclear waste. The advantage of this model is its use of a coarse compartmentalization of the repository, which makes it flexible and easy to adapt to different geometries. The several radionuclide release calculations made with NUCTRAN have shown the capability of this model to handle different situations rapidly and easily. Notable features of these calculations are the high accuracy obtained using a coarse compartmentalization of the Swedish KBS-3 repository and the small amount of computing time required. At short times for short-lived nuclides, the calculated releases are exaggerated; this error can be considerably reduced by an additional subdivision of large compartments into a few smaller compartments.

The authors have developed the GNASH code to include photonuclear reactions for incident energies up to 140 MeV. Photoabsorption is modeled through the giant resonance at lower energies and the quasideuteron mechanism at higher energies, and the angular momentum coupling of the incident photon to the target is properly accounted for. After the initial interaction, primary and multiple preequilibrium emission of fast particles can occur before decay of the equilibrated compound nucleus. The angular distributions from compound nucleus decay are taken as isotropic, and those from preequilibrium emission (obtained from a phase-space model which conserves momentum) are forward-peaked. To test the new modeling, they apply the code to calculate photonuclear reactions on 208Pb for incident energies up to 140 MeV.

Since early November 2010 a deadly cholera epidemic has been spreading across the Caribbean nation of Haiti, killing thousands of people and infecting hundreds of thousands. While infection rates are being actively monitored, health organizations have been left without a clear understanding of exactly how the disease has spread across Haiti. Cholera can spread through exposure to contaminated water, and the disease travels over long distances if an infected individual moves around the country. Using representations of these two predominant dispersion mechanisms, along with information on the size of the susceptible population, the number of infected individuals, and the aquatic concentration of the cholera-causing bacteria for more than 500 communities, Bertuzzo et al. designed a model that was able to accurately reproduce the progression of the Haitian cholera epidemic. (Geophysical Research Letters, doi:10.1029/2011GL046823, 2011)
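A single-community caricature of such a model couples susceptibles, infecteds, and the aquatic bacterial concentration. The sketch below uses illustrative parameter values (not those of the Haiti study) and omits the hydrological and human-mobility coupling between communities that drives the long-distance spread in the actual model:

```python
# S: susceptibles, I: infected, B: aquatic bacterial concentration.
def step(S, I, B, dt, beta=1.0, K=1e4, gamma=0.2, p=10.0, mu=0.3):
    force = beta * B / (K + B)        # saturating dose-response force of infection
    dS = -force * S
    dI = force * S - gamma * I        # new cases minus recovery/removal
    dB = p * I - mu * B               # shedding by infecteds minus bacterial decay
    return S + dt * dS, I + dt * dI, B + dt * dB

S, I, B = 10_000.0, 10.0, 0.0
for _ in range(1000):                 # forward-Euler integration: 100 days at dt = 0.1
    S, I, B = step(S, I, B, 0.1)
print(round(S), round(I))             # the epidemic depletes the susceptible pool
```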

This paper introduces a recursive particle filtering algorithm designed to filter high dimensional systems with complicated non-linear and non-Gaussian effects. The method incorporates a parallel marginalization (PMMC) step in conjunction with the hybrid Monte Carlo (HMC) scheme to improve samples generated by standard particle filters. Parallel marginalization is an efficient Markov chain Monte Carlo (MCMC) strategy that uses lower dimensional approximate marginal distributions of the target distribution to accelerate equilibration. As a validation the algorithm is tested on a 2516-dimensional, bimodal, stochastic model motivated by the Kuroshio current that runs along the Japanese coast. The results of this test indicate that the method is an attractive alternative for problems that require the generality of a particle filter but have been inaccessible due to the limitations of standard particle filtering strategies.
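For reference, the baseline that the PMMC/HMC step improves on is the standard bootstrap particle filter: propagate particles through the dynamics, weight them by the observation likelihood, then resample. A minimal one-dimensional sketch on a toy linear-Gaussian model (an assumed example, not the Kuroshio system):

```python
import math, random

rng = random.Random(0)

def simulate(T):
    """Toy state-space model: x_t = 0.9*x_{t-1} + N(0,1), y_t = x_t + N(0,0.5)."""
    x, xs, ys = 0.0, [], []
    for _ in range(T):
        x = 0.9 * x + rng.gauss(0.0, 1.0)
        xs.append(x)
        ys.append(x + rng.gauss(0.0, 0.5))
    return xs, ys

def bootstrap_filter(ys, n=500):
    parts = [rng.gauss(0.0, 1.0) for _ in range(n)]
    means = []
    for y in ys:
        parts = [0.9 * p + rng.gauss(0.0, 1.0) for p in parts]       # propagate
        w = [math.exp(-0.5 * ((y - p) / 0.5) ** 2) for p in parts]   # weight by likelihood
        tot = sum(w)
        w = [wi / tot for wi in w]
        means.append(sum(wi * pi for wi, pi in zip(w, parts)))       # posterior mean estimate
        parts = rng.choices(parts, weights=w, k=n)                   # multinomial resample
    return means

xs, ys = simulate(50)
est = bootstrap_filter(ys)
rmse = math.sqrt(sum((a - b) ** 2 for a, b in zip(est, xs)) / len(xs))
print(round(rmse, 2))   # tracks the true state to roughly observation-noise accuracy
```

In high dimensions the weights of such a filter degenerate (a few particles carry all the mass), which is exactly the failure mode the paper's MCMC-based improvement targets.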

Heat assisted magnetic recording (HAMR) is a promising approach for increasing the storage density of hard disk drives. To increase data density, information must be written in small grains, which requires materials with high anisotropy energy such as L10 FePt. On the other hand, high anisotropy implies high coercivity, making it difficult to write the data with existing recording heads. This issue can be overcome by the technique of HAMR, where a laser is used to heat the recording medium to reduce its coercivity while retaining good thermal stability at room temperature due to the large anisotropy energy. One of the keys to the success of HAMR is precise control of the writing process. In this talk, I will propose a Monte Carlo simulation, based on an atomistic model, that allows us to study the magnetic properties of L10 FePt and the dynamics of spin reversal during the writing process in HAMR.
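The heat-assisted switching idea can be caricatured with a Metropolis-style barrier model: at low temperature the anisotropy barrier blocks reversal against the write field, while heating makes reversal likely. This is a kinetic toy with made-up dimensionless parameters, not the atomistic FePt model of the talk:

```python
import math, random

def fraction_reversed(T, K=10.0, H=2.0, n=1000, attempts=20, seed=1):
    """Fraction of independent grains that reverse under a write field at temperature T."""
    rng = random.Random(seed)
    spins = [1] * n                        # all grains start magnetized "up"
    for _ in range(n * attempts):
        i = rng.randrange(n)
        dE = -2.0 * H * spins[i]           # Zeeman energy change if grain i flips (field favors "down")
        barrier = K + max(0.0, dE)         # anisotropy barrier plus any uphill energy cost
        if rng.random() < math.exp(-barrier / T):
            spins[i] = -spins[i]
    return spins.count(-1) / n

cold = fraction_reversed(T=1.0)            # unheated: K/T = 10, reversal is blocked
hot = fraction_reversed(T=8.0)             # laser-heated: K/T = 1.25, writing succeeds
print(cold, hot)
```

The same barrier that frustrates the write head at room temperature is what guarantees thermal stability of stored bits, which is the trade-off HAMR resolves.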

To acquire a high amount of information of the behaviour of the Homogeneous Charge Compression Ignition (HCCI) auto-ignition process, a reduced surrogate mechanism has been composed out of reduced n-heptane, iso-octane and toluene mechanisms, containing 62 reactions and 49 species. This mechanism has been validated numerically in a 0D HCCI engine code against more detailed mechanisms (inlet temperature varying from 290 to 500 K, the equivalence ratio from 0.2 to 0.7 and the compression ratio from 8 to 18) and experimentally against experimental shock tube and rapid compression machine data from the literature at pressures between 9 and 55 bar and temperatures between 700 and 1400 K for several fuels: the pure compounds n-heptane, iso-octane and toluene as well as binary and ternary mixtures of these compounds. For this validation, stoichiometric mixtures and mixtures with an equivalence ratio of 0.5 are used. The experimental validation is extended by comparing the surrogate mechanism to experimental data from an HCCI engine. A global reaction pathway is proposed for the auto-ignition of a surrogate gasoline, using the surrogate mechanism, in order to show the interactions that the three compounds can have with one another during the auto-ignition of a ternary mixture. (author)

The objective of this study was to evaluate the delivery of inhaled pharmaceutical aerosols using an enhanced condensational growth (ECG) approach in an airway model extending from the oral cavity to the end of the tracheobronchial (TB) region. The geometry consisted of an elliptical mouth-throat (MT) model, the upper TB airways extending to bifurcation B3, and a subsequent individual path model entering the right lower lobe of the lung. Submicrometer monodisperse aerosols with diameters of 560 and 900 nm were delivered to the mouth inlet under control (25 °C with subsaturated air) or ECG (39 or 42 °C with saturated air) conditions. Flow fields and droplet characteristics were simulated using a computational fluid dynamics model that was previously demonstrated to accurately predict aerosol size growth and deposition. Results indicated that both the control and ECG delivery cases produced very little deposition in the MT and upper TB model (approximately 1%). Under ECG delivery conditions, large size increases of the aerosol droplets were observed resulting in mass median aerodynamic diameters of 2.4-3.3 μm exiting B5. This increase in aerosol size produced an order of magnitude increase in aerosol deposition within the TB airways compared with the controls, with TB deposition efficiencies of approximately 32-46% for ECG conditions. Estimates of downstream pulmonary deposition indicated near full lung retention of the aerosol during ECG delivery. Furthermore, targeting the region of TB deposition by controlling the inlet temperature conditions and initial aerosol size also appeared possible.

We describe a coherent strategy and set of tools for reconstructing the fundamental theory of the TeV scale from LHC data. We show that On-Shell Effective Theories (OSETs) effectively characterize hadron collider data in terms of masses, production cross sections, and decay modes of candidate new particles. An OSET description of the data strongly constrains the underlying new physics, and sharply motivates the construction of its Lagrangian. Simulating OSETs allows efficient analysis of new-physics signals, especially when they arise from complicated production and decay topologies. To this end, we present MARMOSET, a Monte Carlo tool for simulating the OSET version of essentially any new-physics model. MARMOSET enables rapid testing of theoretical hypotheses suggested by both data and model-building intuition, which together chart a path to the underlying theory. We illustrate this process by working through a number of data challenges, where the most important features of TeV-scale physics are reconstructed with as little as 5 fb{sup -1} of simulated LHC signals

Aging perception plays a central role in the experience of healthy aging by older people. Research has identified that factors such as hope, life satisfaction, and socioeconomic status influence the perception of aging in older populations. This study sought to test a hypothetical model to quantitatively evaluate the relationship of hope, life satisfaction, and socioeconomic status with aging perception. A cross-sectional design was used with 504 older adult participants living in Qazvin, Iran. Data were collected using Barker's Aging Perception Questionnaire, the Life Satisfaction Index-Z, and the Herth Hope Index. The results of path analysis showed that hope was the most important factor affecting aging perception. Results drawn from correlation analysis indicated a significant positive correlation (r = .383) between hope and aging perception. Further analysis found that hope had the strongest impact on aging perception compared with the other variables analyzed (e.g., life satisfaction and socioeconomic status). A model of aging perception in Iranian elders is presented. The findings suggested that hope had a significant and positive impact on aging perception. Implications for clinical practice and research are discussed.

The aim of the study is to assess a mediational pathway, which includes patients' sex, personality traits, age of onset of gambling disorder (GD) and gambling-related variables. The South Oaks Gambling Screen, the Symptom Checklist (SCL-90-R) and the Temperament and Character Inventory-R were administered to a large sample of 1632 outpatients attending a specialized outpatient GD unit. Sociodemographic variables were also recorded. A Structural Equation Model was adjusted to assess the pathway. Age of onset mediated between personality profile (novelty seeking and self-transcendence) and GD severity and depression symptoms (measured by SCL-90-R). Sex had a direct effect on GD onset and depression symptoms: men initiated the GD earlier and reported fewer depression symptoms. Age of onset is a mediating variable between sex, personality traits, GD severity and depression symptoms. These empirical results provide new evidence about the underlying etiological process of dysfunctional behaviors related to gambling, and may help to guide the development of more effective treatment and prevention programs aimed at high-risk groups such as young men with high levels of novelty seeking and self-transcendence.

Previous efforts to target the mouse genome for the addition, subtraction, or substitution of biologically informative sequences required complex vector design and a series of arduous steps only a handful of labs could master. The facile and inexpensive clustered regularly interspaced short palindromic repeats (CRISPR) method has now superseded traditional means of genome modification such that virtually any lab can quickly assemble reagents for developing new mouse models for cardiovascular research. Here we briefly review the history of CRISPR in prokaryotes, highlighting major discoveries leading to its formulation for genome modification in the animal kingdom. Core components of CRISPR technology are reviewed and updated. Practical pointers for two-component and three-component CRISPR editing are summarized with a number of applications in mice including frameshift mutations, deletion of enhancers and non-coding genes, nucleotide substitution of protein-coding and gene regulatory sequences, incorporation of loxP sites for conditional gene inactivation, and epitope tag integration. Genotyping strategies are presented and topics of genetic mosaicism and inadvertent targeting discussed. Finally, clinical applications and ethical considerations are addressed as the biomedical community eagerly embraces this astonishing innovation in genome editing to tackle previously intractable questions. PMID:27102963

A new automatic flight control system concept suitable for aircraft with highly nonlinear aerodynamic and propulsion characteristics and which must operate over a wide flight envelope was investigated. This exact model follower inverts a complete nonlinear model of the aircraft as part of the feed-forward path. The inversion is accomplished by a Newton-Raphson trim of the model at each digital computer cycle time of 0.05 seconds. The combination of the inverse model and the actual aircraft in the feed-forward path allows the translational and rotational regulators in the feedback path to be easily designed by linear methods. An explanation of the model inversion procedure is presented. An extensive set of simulation data for essentially the full flight envelope for a vertical attitude takeoff and landing aircraft (VATOL) is presented. These data demonstrate the successful, smooth, and precise control that can be achieved with this concept. The trajectory includes conventional flight from 200 to 900 ft/sec with path accelerations and decelerations, altitude changes of over 6000 ft and 2g and 3g turns. Vertical attitude maneuvering as a tail sitter along all axes is demonstrated. A transition trajectory from 200 ft/sec in conventional flight to stationary hover in the vertical attitude includes satisfactory operation through lift-curve slope reversal as attitude goes from horizontal to vertical at constant altitude. A vertical attitude takeoff from stationary hover to conventional flight is also demonstrated.
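The per-cycle inversion can be sketched as a Newton-Raphson solve of model(controls) = command, repeated every computer cycle. The scalar "model" below is a made-up nonlinearity standing in for the aircraft model, purely to show the mechanics of the trim iteration:

```python
def newton_trim(f, u0, target, tol=1e-9, max_iter=50, h=1e-6):
    """Solve f(u) = target for the control u by Newton-Raphson iteration."""
    u = u0
    for _ in range(max_iter):
        r = f(u) - target                  # residual between model output and command
        if abs(r) < tol:
            break
        J = (f(u + h) - f(u)) / h          # finite-difference "Jacobian" (scalar case)
        u -= r / J
    return u

# Hypothetical nonlinear control-to-response map (not the VATOL aircraft model):
f = lambda u: 4.0 * u + 0.5 * u ** 3
u_cmd = newton_trim(f, u0=0.0, target=2.0)
print(round(f(u_cmd), 6))   # -> 2.0: the inverted model reproduces the command
```

In the actual system this solve runs against the full multivariable aircraft model each 0.05 s cycle, so the feed-forward path presents an effectively linearized plant to the feedback regulators.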

In studies of light-ion induced nuclear reactions one distinguishes three different mechanisms: direct, compound and pre-equilibrium nuclear reactions. These reaction processes can be subdivided according to time scales or, equivalently, the number of intranuclear collisions taking place before emission. Furthermore, each mechanism preferentially excites certain parts of the nuclear level spectrum and is characterized by different types of angular distributions. This presentation includes descriptions of the classical exciton model, semi-classical models with some selected results, and quantum mechanical models. A survey of classical versus quantum-mechanical pre-equilibrium reaction theory is presented, including practical applications.

This paper presents an experimental study of the self-initiation reaction of n-butyl acrylate (n-BA) in free-radical polymerization. For the first time, the frequency factor and activation energy of the monomer self-initiation reaction are estimated from measurements of n-BA conversion in free-radical homo-polymerization initiated only by the monomer. The estimation was carried out using a macroscopic mechanistic mathematical model of the reactor. In addition to already-known reactions that contribute to the polymerization, the model considers an n-BA self-initiation reaction mechanism that is based on our previous electronic-level first-principles theoretical study of the self-initiation reaction. Reaction rate equations are derived using the method of moments. The reaction-rate parameter estimates obtained from conversion measurements agree well with estimates obtained via our purely theoretical quantum chemical calculations.

For a better understanding of the performance of slag in concrete, for evaluating the feasibility of using a certain type of slag, and for possible improvement of its use in practice, fundamental knowledge about its reaction and interaction with other constituents is important. While the researches on

Decision-making in healthcare is complex. Research on coverage decision-making has focused on comparative studies for several countries, statistical analyses for single decision-makers, the decision outcome and appraisal criteria. Accounting for decision processes extends the complexity, as they are multidimensional and process elements need to be regarded as latent constructs (composites) that are not observed directly. The objective of this study was to present a practical application of partial least squares path modelling (PLS-PM) to evaluate how it offers a method for empirical analysis of decision-making in healthcare. Empirical approaches that applied PLS-PM to decision-making in healthcare were identified through a systematic literature search. PLS-PM was used as an estimation technique for a structural equation model that specified hypotheses between the components of decision processes and the reasonableness of decision-making in terms of medical, economic and other ethical criteria. The model was estimated for a sample of 55 coverage decisions on the extension of newborn screening programmes in Europe. Results were evaluated by standard reliability and validity measures for PLS-PM. After modification by dropping two indicators that showed poor measures in the measurement models' quality assessment and were not meaningful for newborn screening, the structural equation model estimation produced plausible results. The presence of three influences was supported: the links between both stakeholder participation or transparency and the reasonableness of decision-making; and the effect of transparency on the degree of scientific rigour of assessment. Reliable and valid measurement models were obtained to describe the composites of 'transparency', 'participation', 'scientific rigour' and 'reasonableness'. The structural equation model was among the first applications of PLS-PM to coverage decision-making. It allowed testing of hypotheses in situations where there

The relation between reading for pleasure, night-sky watching interest, and openness to experience were examined in a sample of 129 college students. Results of a path analysis examining a mediation model indicated that the influence of night-sky interest on reading for pleasure was not mediated by the broad personality domain openness to…

The main objective of this study was to evaluate the health action process approach (HAPA) as a motivational model for dietary self-management for people with multiple sclerosis (MS). Quantitative descriptive research design using path analysis was used. Participants were 209 individuals with MS recruited from the National MS Society and a…

This work presents a very accurate experimental method based on radioactive beams for the study of the spectroscopic properties of unbound states. It makes use of inverse kinematical elastic scattering of the ions of a radioactive beam from a target of stable nuclei. An application of the method to the study of radioactive nuclei of astrophysical interest is given, namely of the {sup 19}Ne and {sup 16}F nuclei. It is shown that on the basis of the properties of the proton-emitting unbound levels of {sup 19}Ne one can develop a method of experimental study of nova explosions, based on observation of gamma emissions following the gamma decays of the radionuclides generated in the explosion. The most interesting radioactive nucleus involved in this process is {sup 18}F, the yield of which depends strongly on the rate of the {sup 18}F(p,{alpha}){sup 15}O reaction. This yield depends in turn on the properties of the states of the ({sup 18}F + p) compound nucleus, i.e. the {sup 19}Ne nucleus. In addition, the unbound {sup 16}F nucleus, also of astrophysical significance in {sup 15}O-rich environments, was studied. Since {sup 16}F is an unbound nucleus, the reaction of {sup 15}O with protons, although both are abundant in most astrophysical media, appears to be negligible. Thus the question posed was whether the exotic {sup 15}O(p,{beta}{sup +}){sup 16}O resonant reaction acquires some importance in various astrophysical media. This work describes a novel approach to studying the reaction mechanisms which could drastically change the role of unbound nuclei in stellar processes, and applies this mechanism to the processes (p,{gamma})({beta}{sup +}) and (p,{gamma})(p,{gamma}) within {sup 15}O-rich media. The experimental studies of {sup 19}Ne and {sup 16}F were carried out with a radioactive beam of {sup 15}O ions of very low energy produced by SPIRAL at GANIL. To improve the energy resolution, thin targets were used with a 0 degree angle of observation relative to the beam.

A remarkably strong nonlinear behavior of the atmospheric circulation response to North Atlantic SST anomalies (SSTA) is revealed from a set of large-ensemble, high-resolution, and hemispheric-scale Weather Research and Forecasting (WRF) model simulations. The model is forced with the SSTA associated with meridional shifts of the Gulf Stream (GS) path, constructed from a lag regression of the winter SST on a GS index from observations. Analysis of the systematic set of experiments, with SSTAs of varied amplitudes and switched signs representing various GS-shift scenarios, provides unique insights into the mechanism for the emergence and evolution of the transient and equilibrium responses of the atmospheric circulation to extratropical SSTA. Results show that, independent of the sign of the SSTA, the equilibrium response is characterized by an anomalous trough over the North Atlantic Ocean and Western Europe concurrent with an enhanced storm track, increased rainfall, and reduced blocking days. To the north of the anomalous low, an anomalous ridge emerges over the Greenland, Iceland, and Norwegian Seas, accompanied by a weakened storm track, reduced rainfall and increased blocking days. This nonlinear component of the total response dominates the weak and oppositely signed linear response that is directly forced by the SSTA, yielding an anomalous ridge (trough) downstream of the warm (cold) SSTA. The amplitude of the linear response is proportional to that of the SSTA, but this is masked by the overwhelmingly strong nonlinear behavior, which shows no clear correspondence to the SSTA amplitude. The nonlinear pattern emerges 3-4 weeks after the model initialization in November and reaches its first peak amplitude in December/January. It appears that altered baroclinic wave activity due to the GS SSTA in November leads to low-frequency height responses in December/January through transient eddy vorticity flux convergence.

This paper presents a forecast and analysis of population, economic development, energy consumption, and CO2 emissions in China in short- and long-term steps to 2020, with 2007 as the base year. The widely applied IPAT model, which is the basis for calculations, projections, and scenarios of greenhouse gases (GHGs) and is reformulated here as the Kaya equation, is extended to analyze and predict the relations between human activities and the environment. Four scenarios of CO2 emissions are considered: business as usual (BAU), an energy efficiency improvement scenario (EEI), a low-carbon scenario (LC), and an enhanced low-carbon scenario (ELC). The results show that under the LC scenario carbon intensity will be reduced by 40-45% as scheduled and the economic growth rate will reach 6% in China by 2020. The LC scenario, as the most appropriate and most feasible scheme for China's low-carbon development, can best harmonize the development of the economic, social, energy, and environmental systems. Assuming China's development follows the LC scenario, the paper further identifies four paths of low-carbon transformation in China: technological innovation, industrial structure optimization, energy structure optimization, and policy guidance.
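The Kaya decomposition underlying these scenarios can be sketched in a few lines: total emissions are the product of population, GDP per capita, energy intensity of GDP, and carbon intensity of energy. This is an illustrative sketch only; the function name and the sample numbers are hypothetical placeholders, not values from the study.

```python
def kaya_emissions(population, gdp_per_capita, energy_intensity, carbon_intensity):
    """Kaya identity: CO2 = P * (GDP/P) * (E/GDP) * (CO2/E).

    population       -- number of people
    gdp_per_capita   -- economic output per person
    energy_intensity -- primary energy per unit of GDP
    carbon_intensity -- CO2 emitted per unit of primary energy
    """
    return population * gdp_per_capita * energy_intensity * carbon_intensity

# Hypothetical round numbers for illustration (not the paper's data):
P = 1.4e9    # persons
g = 8000.0   # USD per person
e = 8e-3     # GJ per USD
c = 0.07     # t CO2 per GJ
total = kaya_emissions(P, g, e, c)  # t CO2 per year
```

Scenario analysis then amounts to varying the trajectories of the four factors (e.g. lowering carbon intensity by 40-45% relative to the base year) and comparing the resulting emission paths.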

Background: Decision-making in healthcare is complex. Research on coverage decision-making has focused on comparative studies for several countries, statistical analyses for single decision-makers, the decision outcome, and appraisal criteria. Accounting for decision processes adds to this complexity, as the processes are multidimensional and process elements need to be regarded as latent constructs (composites) that are not observed directly. The objective of this study was to present a practical application of partial least squares path modelling (PLS-PM) and to evaluate how it offers a method for empirical analysis of decision-making in healthcare. Methods: Empirical approaches that applied PLS-PM to decision-making in healthcare were identified through a systematic literature search. PLS-PM was used as an estimation technique for a structural equation model that specified hypotheses between the components of decision processes and the reasonableness of decision-making in terms of medical, economic, and other ethical criteria. The model was estimated for a sample of 55 coverage decisions on the extension of newborn screening programmes in Europe. Results were evaluated by standard reliability and validity measures for PLS-PM. Results: After modification by dropping two indicators that showed poor measures in the quality assessment of the measurement models and were not meaningful for newborn screening, the structural equation model estimation produced plausible results. The presence of three influences was supported: the links between both stakeholder participation or transparency and the reasonableness of decision-making, and the effect of transparency on the degree of scientific rigour of assessment. Reliable and valid measurement models were obtained to describe the composites of 'transparency', 'participation', 'scientific rigour' and 'reasonableness'. Conclusions: The structural equation model was among the first applications of PLS-PM to
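As a rough illustration of the inner (structural) part of such a model, the sketch below uses unit-weighted composites and an ordinary least squares regression in place of the iteratively estimated PLS-PM weights. This is a deliberate simplification for exposition, and all names are hypothetical; it is not the study's model, constructs, or data.

```python
import numpy as np

def composite_scores(indicator_matrix):
    """Unit-weighted composite: the mean of the z-scored indicator
    columns belonging to one latent construct (a simplification of
    the iteratively estimated PLS outer weights)."""
    X = np.asarray(indicator_matrix, float)
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    return Z.mean(axis=1)

def path_coefficients(exogenous_scores, endogenous_scores):
    """OLS estimate of the inner-model path coefficients of an
    endogenous composite on its predictor composites."""
    y = np.asarray(endogenous_scores, float)
    X = np.column_stack([np.ones(len(y))] +
                        [np.asarray(s, float) for s in exogenous_scores])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[1:]  # drop the intercept
```

In a genuine PLS-PM estimation the outer weights and inner paths are updated alternately until convergence; the unit-weight/OLS shortcut above only conveys the structure of the composite-and-path idea.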

A comparison among different ion fusion models is presented. In particular, the multistep aspects of the recently proposed Dinucleus Doorway Model are made explicit and the model is confronted with other compound nucleus limitation models. It is suggested that the latter models provide effective one-step descriptions of heavy ion fusion.

A mechanism by which competitive binary and ternary ion-molecule reactions can occur is proposed. Calculations are undertaken for the specific system CH3(+) + NH3 + He which has been studied in the laboratory by Smith and Adams (1978), and the potential surface of which has been studied theoretically by Nobes and Radom (1983). It is shown that a potential-energy barrier in the exit channel prevents the rapid dissociation of collision complexes with large amounts of angular momentum and thereby allows collisional stabilization of the complexes. The calculated ternary-reaction rate coefficient is in good agreement with the experimental value, but a plot of the effective two-body rate coefficient of the ternary channel vs helium density does not quite show the observed saturation. 21 references
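The saturation behaviour discussed above can be illustrated with a generic Lindemann-type expression for the effective two-body rate coefficient: the complex forms, either back-dissociates or is collisionally stabilized by He, and the effective coefficient levels off at the capture rate once stabilization dominates. This is a minimal sketch of the standard competition mechanism; the function name and all rate constants are hypothetical, not values from the calculations described in the abstract.

```python
def k_effective(k_capture, k_dissoc, k_stab, n_he):
    """Lindemann-type effective two-body rate coefficient for a
    termolecular association A+ + B + He:

      complex formation      at rate coefficient k_capture,
      back-dissociation      at rate k_dissoc,
      He stabilization       at rate k_stab * n_he.

    At low He density this grows linearly with n_he (ternary regime);
    at high density it saturates at k_capture."""
    s = k_stab * n_he
    return k_capture * s / (k_dissoc + s)
```

Plotting this against n_he reproduces the qualitative behaviour described above: a linear ternary regime followed by saturation, the feature whose onset the calculated coefficient does not quite capture relative to experiment.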

In a previous study by the authors (Rout et al. in Metall Mater Trans B 49:537-557, 2018), a dynamic model for the BOF employing the concept of multizone kinetics was developed. In the current study, the kinetics of the decarburization reaction is investigated. The jet impact and slag-metal emulsion zones were identified as the primary zones for carbon oxidation. The dynamic parameters in the rate equation of decarburization, such as the residence time of metal drops in the emulsion, interfacial area evolution, initial size, and the effects of surface-active oxides, have been included in the kinetic rate equation of the metal droplet. A modified mass-transfer coefficient based on ideal Langmuir adsorption equilibrium has been proposed to take into account the surface blockage effects of SiO2 and P2O5 in slag on the decarburization kinetics of a metal droplet in the emulsion. Further, a size distribution function has been included in the rate equation to evaluate the effect of droplet size on reaction kinetics. The mathematical simulation indicates that decarburization of a droplet in the emulsion is a strong function of its initial size and residence time. A modified droplet generation rate proposed previously by the authors has been used to estimate the total decarburization rate in the slag-metal emulsion. The model predicts that about 76 pct of total carbon is removed by reactions in the emulsion, with the remainder removed by reactions at the jet impact zone. The bath carbon predicted by the model has been found to be in good agreement with industrially measured data.
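The surface-blockage correction of the mass-transfer coefficient can be sketched with the ideal Langmuir form, assuming the coefficient scales with the fraction of interface sites left free by adsorbed SiO2 and P2O5. The function, adsorption constants, and activities below are illustrative placeholders, not the authors' fitted values.

```python
def blocked_mass_transfer(k0, activities, K_ads):
    """Mass-transfer coefficient reduced by ideal Langmuir surface
    blockage.  Coverage of species i is
        theta_i = K_i * a_i / (1 + sum_j K_j * a_j),
    so the free-site fraction is 1 / (1 + sum_j K_j * a_j), and the
    effective coefficient is k0 times that fraction.

    activities, K_ads -- dicts keyed by oxide, e.g. 'SiO2', 'P2O5'."""
    occupancy = sum(K_ads[s] * activities[s] for s in activities)
    return k0 / (1.0 + occupancy)

# Hypothetical illustration: moderate SiO2 and strong P2O5 adsorption
k_eff = blocked_mass_transfer(1.0e-4,
                              {"SiO2": 0.5, "P2O5": 0.1},
                              {"SiO2": 2.0, "P2O5": 10.0})
```

With no surface-active species the expression reduces to the unblocked coefficient k0, which is the expected limiting behaviour.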

To better understand adhesive interactions with wood, reactions between model compounds of wood and a model compound of polymeric methylene diphenyl diisocyanate (pMDI) were characterized by solution-state NMR spectroscopy. For comparison, finely ground loblolly pine sapwood, milled-wood lignin and holocellulose from the same wood were isolated and derivatized with...

The quantum chemical cluster approach is a powerful method for investigating enzymatic reactions. Over the past two decades, a large number of highly diverse systems have been studied and a great wealth of mechanistic insight has been developed using this technique. This Perspective reviews the current status of the methodology. The latest technical developments are highlighted, and challenges are discussed. Some recent applications are presented to illustrate the capabilities and progress of this approach, and likely future directions are outlined.

Full scale tests in a 12 MW fluidized bed combustor on the reduction of N2O by secondary fuel injection are analyzed in terms of a model that involves a detailed reaction mechanism for the gas-phase chemistry as well as a description of gas-solid reactions.

Cold-cap reactions are multiple overlapping reactions that occur in the waste-glass melter during the vitrification process, while the melter feed is being converted to molten glass. In this study, we used differential scanning calorimetry (DSC) to investigate cold-cap reactions in a high-alumina high-level waste melter feed. To separate the reaction heat from both sensible heat and experimental instability, we employed the run/rerun method, which enabled us to define the degree of conversion based on the reaction heat and to estimate the heat capacity of the reacting feed. Assuming that the reactions are nearly independent and can be approximated by nth-order kinetics, we obtained the kinetic parameters using the Kissinger method combined with least-squares analysis. The resulting mathematical simulation of the cold-cap reactions provides a key element for the development of an advanced cold-cap model.
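The Kissinger analysis mentioned above amounts to a linear fit of ln(β/Tp²) against 1/Tp over several heating rates β, whose slope gives −Ea/R. A minimal sketch, assuming one well-separated DSC peak per reaction; the function name is our own, and the synthetic numbers are not the study's data.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def kissinger_activation_energy(heating_rates, peak_temps):
    """Estimate the activation energy from DSC peak temperatures Tp
    measured at several heating rates beta, via the Kissinger plot
        ln(beta / Tp^2) = ln(A*R/Ea) - Ea / (R * Tp),
    i.e. a straight line in 1/Tp with slope -Ea/R.

    heating_rates -- beta values (K/min or K/s, any consistent unit)
    peak_temps    -- corresponding DSC peak temperatures (K)
    Returns Ea in J/mol."""
    beta = np.asarray(heating_rates, float)
    Tp = np.asarray(peak_temps, float)
    y = np.log(beta / Tp**2)
    x = 1.0 / Tp
    slope, _intercept = np.polyfit(x, y, 1)
    return -slope * R
```

The intercept of the same fit yields ln(A·R/Ea) and hence the pre-exponential factor A, completing the nth-order kinetic parameter set used in the simulation.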

We investigated the distinctive shallow subsurface hydrology of the southwest Western Australia (SWWA) dune calcarenite using observed rainfall and rainfall δ18O, soil moisture, and cave drip rate and dripwater δ18O over a six-year period (August 2005-March 2012). A lumped-parameter hydrological model is developed to describe water fluxes and drip δ18O. Comparison of the observed data and model output allows us to assess the critical non-climatic karst hydrological processes that modify the precipitation δ18O signal and to discuss the implications for speleothem paleoclimate records from this cave and those in similar karst settings. Our findings include evidence of multiple reservoirs, characterised by distinct δ18O values and recharge responses ('low'- and 'high'-flow sites). Dripwaters exhibit δ18O variations between wet and dry years at low-flow sites receiving diffuse seepage from the epikarst, with an attenuated isotopic composition that approximates mean rainfall. Recharge from high-magnitude rain events is stored in a secondary reservoir associated with high-flow dripwater whose δ18O is 1‰ lower than that of our monitored low-flow sites. One drip site is characterised by mixed-flow behaviour and exhibits a nonlinear threshold response after the cessation of drainage from a secondary reservoir following a record dry year (2006). Additionally, our results yield a better understanding of the vadose-zone hydrology and dripwater characteristics in Quaternary-age dune limestones. We show that flow to our monitored sites is dominated by diffuse flow with inferred transit times of less than one year. Diffuse flow appears to follow vertical preferential paths through the limestone, reflecting differences in permeability and deep recharge into the host rock.
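A single well-mixed store gives the simplest flavour of the lumped-parameter mass balance behind dripwater δ18O: each recharge pulse mixes into storage, and drip water carries the mixed composition, which attenuates the rainfall signal toward its mean. This one-reservoir sketch is a simplification of the multi-reservoir model in the study; the function and numbers are illustrative only.

```python
def update_reservoir(d18o_store, volume, recharge, d18o_rain):
    """Well-mixed reservoir update for one recharge event.

    d18o_store -- current stored-water delta-18O (per mil)
    volume     -- stored water volume (arbitrary units)
    recharge   -- volume of the recharge pulse (same units)
    d18o_rain  -- delta-18O of the recharging rainfall (per mil)

    Returns the mixed delta-18O, which the drip water then carries."""
    return (volume * d18o_store + recharge * d18o_rain) / (volume + recharge)

# Illustration: a large store damps event-scale rainfall variability
d18o = -4.0
for rain_d18o in [-2.0, -6.0, -3.0, -5.0]:   # hypothetical events
    d18o = update_reservoir(d18o, volume=20.0, recharge=1.0,
                            d18o_rain=rain_d18o)
```

With a large volume-to-recharge ratio the stored composition drifts only slowly, mimicking the attenuated low-flow sites; a small ratio tracks individual events, closer to the threshold-responding high-flow behaviour.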

Family socio-economic factors and parents' health behaviours have been shown to have an impact on the health and well-being of children and adolescents. Family characteristics have also been associated with school nurses' concerns, arising during health examinations, about children's and adolescents' physical health and psychosocial development. Parental smoking has also been associated with smoking in adolescents. The aim of this study was to determine to what extent school nurses' concerns about adolescents' physical health and psychosocial development related to family characteristics are mediated through parents' and adolescents' own health behaviours (smoking). A path-model approach using cross-sectional data was used. In 2008-2009, information about the health and well-being of adolescents was gathered at the health examinations of the Children's Health Monitoring Study. Altogether, 1006 eighth- and ninth-grade pupils in Finland participated in the study. The associations between family characteristics, smoking among parents and adolescents, and school nurses' concerns about adolescents' physical health and psychosocial development were examined using a structural equation model. Paternal education had a direct association, and, through fathers' and boys' smoking, an indirect association, with school nurses' concerns about the physical health of boys. Paternal labour market status and family income were only indirectly associated with concerns about the physical health of boys, affecting boys' smoking through paternal smoking, with a further indirect effect on concerns about boys' health. In girls, only having a single mother was strongly associated with school nurses' concerns about psychosocial development, through maternal and adolescent girl smoking. Socio-economic family characteristics and parental smoking influence adolescent smoking and are associated with school nurses' concerns about adolescents' physical health and psychosocial development. The findings