Sample records for include calculable responsivity

We reformulate multistructural variational transition state theory by removing the approximation of calculating torsional anharmonicity only at stationary points. The multistructural method with torsional anharmonicity is applied to calculate the reaction-path free energy of the hydrogen abstraction from the carbon-1 position in isobutanol by OH radical. The torsional potential anharmonicity along the reaction path is taken into account by a coupled torsional potential. The calculations show that it can be critical to include torsional anharmonicity in searching for canonical and microcanonical variational transition states. The harmonic-oscillator approximation fails to yield reasonable free energy curves along the reaction path.

LTRACK is a first-order beam-transport code that includes wakefield effects up to quadrupole modes. This paper will introduce the reader to this computer code by describing the history and the method of calculation, and by giving a brief summary of the input/output information. Future plans for the code will also be described.

A tool for standardized calculation of solar collector performance has been developed in cooperation between SP Technical Research Institute of Sweden, DTU Denmark and SERC Dalarna University. The tool is designed to calculate the annual performance of solar collectors at representative locations...... can be tested and modeled as a thermal collector, when the PV electric part is active with an MPP tracker in operation. The thermal collector parameters from this operation mode are used for the PVT calculations....

Finite element calculations have been performed to determine the structural response of waste-filled disposal rooms at the WIPP for a period of 10,000 years after emplacement of the waste. The calculations were performed to generate the porosity surface data for the final set of compliance calculations. The most recent reference data for the stratigraphy, waste characterization, gas generation potential, and nonlinear material response have been brought together for this final set of calculations.

As stated in the ISO 18431-4 standard, a shock response spectrum is defined as the response to a given acceleration acting on a set of mass-damper-spring oscillators, which are tuned to different resonance frequencies while their resonance gains (Q-factors) are set to the same value. The maxima of the absolute values of the calculated responses, taken as a function of the resonance frequencies, compose the shock response spectrum (SRS). The paper deals with employing Signal Analyzer, a software package for signal processing, for calculation of the SRS. The theory is illustrated by examples.
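The SRS definition above can be sketched numerically: for each resonance frequency, a base-excited single-degree-of-freedom oscillator with the chosen Q-factor is integrated through the acceleration history, and the peak absolute response is recorded. This is a minimal Python illustration, not the Signal Analyzer implementation; the half-sine pulse, time step and frequency grid are arbitrary illustrative choices.

```python
import math

def srs(accel, dt, freqs, q=10.0):
    """Maximax shock response spectrum: peak absolute acceleration of
    base-excited single-degree-of-freedom oscillators (one per frequency)."""
    zeta = 1.0 / (2.0 * q)           # damping ratio from the common Q-factor
    spectrum = []
    for f in freqs:
        wn = 2.0 * math.pi * f
        z, zdot, peak = 0.0, 0.0, 0.0
        for a in accel:
            # relative-motion equation: z'' = -a - 2*zeta*wn*z' - wn^2*z
            zdd = -a - 2.0 * zeta * wn * zdot - wn * wn * z
            zdot += zdd * dt          # semi-implicit Euler integration
            z += zdot * dt
            peak = max(peak, abs(a + zdd))   # absolute acceleration of the mass
        spectrum.append(peak)
    return spectrum

# illustrative half-sine shock pulse: 11 ms duration, 50 g amplitude,
# followed by a residual (free-decay) phase
dt = 1e-5
pulse = [50.0 * math.sin(math.pi * t * dt / 0.011) for t in range(int(0.011 / dt))]
pulse += [0.0] * int(0.2 / dt)
freqs = [10, 30, 100, 300, 1000]
print(srs(pulse, dt, freqs))
```

At resonance frequencies well above the inverse pulse duration, the SRS approaches the peak of the input pulse, which is a common sanity check for SRS routines.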

Purpose: To present analytical methods for calculating or estimating the integrated biological response in brachytherapy applications, and which allow for the presence of dose gradients. Methods and Materials: The approach uses linear-quadratic (LQ) formulations to identify an equivalent biologically effective dose (BED_eq) which, if applied to a specified tissue volume, would produce the same biological effect as that achieved by a given brachytherapy application. For simple geometrical cases, BED multiplying factors have been derived which allow the equivalent BED for tumors to be estimated from a single BED value calculated at a dose reference point. For more complex brachytherapy applications a voxel-by-voxel determination of the equivalent BED will be more accurate. Equations are derived which when incorporated into brachytherapy software would facilitate such a process. Results: At both high and low dose rates, the BEDs calculated at the dose reference point are shown to be lower than the true values by an amount which depends primarily on the magnitude of the prescribed dose; the BED multiplying factors are higher for smaller prescribed doses. The multiplying factors are less dependent on the assumed radiobiological parameters. In most clinical applications involving multiple sources, particularly those in multiplanar arrays, the multiplying factors are likely to be smaller than those derived here for single sources. The overall suggestion is that the radiobiological consequences of dose gradients in well-designed brachytherapy treatments, although important, may be less significant than is sometimes supposed. The modeling exercise also demonstrates that the integrated biological effect associated with fractionated high-dose-rate (FHDR) brachytherapy will usually be different from that for an 'equivalent' continuous low-dose-rate (CLDR) regime. For practical FHDR regimes involving relatively small numbers of fractions, the integrated biological effect to...
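The LQ quantities used above can be illustrated with a minimal sketch (not the paper's voxel-by-voxel formulation): the standard fractionated BED formula and the equivalent total dose in 2 Gy fractions. The fractionation scheme and alpha/beta ratio in the example are arbitrary illustrative numbers.

```python
def bed(total_dose, dose_per_fraction, alpha_beta):
    """Biologically effective dose from the linear-quadratic model:
    BED = D * (1 + d / (alpha/beta)), doses in Gy."""
    return total_dose * (1.0 + dose_per_fraction / alpha_beta)

def eqd2(bed_value, alpha_beta):
    """Equivalent total dose delivered in 2 Gy fractions for the same BED."""
    return bed_value / (1.0 + 2.0 / alpha_beta)

# example: 10 HDR fractions of 3.4 Gy to a tumour with alpha/beta = 10 Gy
b = bed(34.0, 3.4, 10.0)
print(b, eqd2(b, 10.0))
```

A voxel-by-voxel variant would apply `bed` to each voxel's dose and combine the resulting effects, which is the direction the paper's equations take.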

The Dose-Response Calculator for ArcGIS is a tool that extends the Environmental Systems Research Institute (ESRI) ArcGIS 10 Desktop application to aid with the visualization of relationships between two raster GIS datasets. A dose-response curve is a line graph commonly used in medical research to examine the effects of different dosage rates of a drug or chemical (for example, carcinogen) on an outcome of interest (for example, cell mutations) (Russell and others, 1982). Dose-response curves have recently been used in ecological studies to examine the influence of an explanatory dose variable (for example, percentage of habitat cover, distance to disturbance) on a predicted response (for example, survival, probability of occurrence, abundance) (Aldridge and others, 2008). These dose curves have been created by calculating the predicted response value from a statistical model at different levels of the explanatory dose variable while holding values of other explanatory variables constant. Curves (plots) developed using the Dose-Response Calculator overcome the need to hold variables constant by using values extracted from the predicted response surface of a spatially explicit statistical model fit in a GIS, which include the variation of all explanatory variables, to visualize the univariate response to the dose variable. Application of the Dose-Response Calculator can be extended beyond the assessment of statistical model predictions and may be used to visualize the relationship between any two raster GIS datasets (see example in tool instructions). This tool generates tabular data for use in further exploration of dose-response relationships and a graph of the dose-response curve.

Background: Risk prediction models for prostate cancer (PCa) have become important tools in reducing unnecessary prostate biopsies. The Prostate Health Index (PHI) may increase the predictive accuracy of such models. Objectives: To compare two PCa risk calculators (RCs) that include PHI.

The paper is devoted to three-point bending of an I-beam, including the effect of transverse shear. Numerical calculations were conducted independently using the SolidWorks system and the multi-purpose software package ANSYS. The results of the FEM studies conducted with the two systems were compared and are presented in tables and figures.

A new procedure for the calculation of spatial impulse responses for linear sound fields is introduced. This calculation procedure uses the well known technique of calculating the spatial impulse response from the intersection of a circle emanating from the projected spherical wave...

Hourly demand response tariffs with the intention of reducing or shifting loads during peak demand hours are being intensively discussed among policy-makers, researchers and executives of future electricity systems. Demand response rates still have low customer acceptance, apparently because consumption habits require a stronger incentive to change than any financial incentive proposed so far. An hourly CO2 intensity signal could give customers an extra environmental motivation to shift or reduce loads during peak hours, as it would enable co-optimisation of electricity consumption costs and carbon emissions reductions. In this study, we calculated the hourly dynamic CO2 signal and applied the calculation to hourly electricity market data in Great Britain, Ontario and Sweden. This provided a novel understanding of the relationships between hourly electricity generation mix composition, electricity price and electricity mix CO2 intensity. Load shifts from high-price hours resulted in carbon emission reductions for electricity generation mixes where price and CO2 intensity were positively correlated. The reduction can be further improved if the shift is optimised using both price and CO2 intensity. The analysis also indicated that an hourly CO2 intensity signal can help avoid carbon emissions increases for mixes with a negative correlation between electricity price and CO2 intensity. - Highlights: • We present a formula for calculating the hybrid dynamic CO2 intensity of electricity generation mixes. • We apply the dynamic CO2 intensity to hourly electricity market prices and generation units for Great Britain, Ontario and Sweden. • We calculate the Spearman correlation between hourly electricity market price and dynamic CO2 intensity for Great Britain, Ontario and Sweden. • We calculate the carbon footprint of shifting 1 kWh of load daily from on-peak hours to off-peak hours using the dynamic CO2 intensity. • We conclude that using dynamic CO2 intensity for...
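The hourly mix intensity and the price-intensity correlation described above can be sketched as follows. This is a generic illustration, not the paper's hybrid formula: the generation mix and the per-source emission factors are invented illustrative values, and the rank routine assumes no tied values.

```python
def mix_intensity(generation_mwh, factors_kg_per_mwh):
    """Generation-weighted average CO2 intensity (kg CO2 per MWh) of one
    hour's electricity mix."""
    total = sum(generation_mwh.values())
    return sum(generation_mwh[s] * factors_kg_per_mwh[s]
               for s in generation_mwh) / total

def spearman(x, y):
    """Spearman rank correlation as the Pearson correlation of the ranks
    (no handling of ties, for brevity)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

gen = {"wind": 500.0, "gas": 300.0, "coal": 200.0}   # MWh in one hour (invented)
ef = {"wind": 12.0, "gas": 490.0, "coal": 820.0}     # illustrative kg CO2/MWh
print(mix_intensity(gen, ef))
```

Applying `mix_intensity` hour by hour and feeding the resulting series, together with hourly prices, into `spearman` reproduces the kind of price-intensity correlation analysis the study reports.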

Numerical calculations of fast-wave current drive (FWCD) efficiency have generally been of two types: ray tracing or global wave calculations. Ray tracing shows that the projection of the wave number (k∥) along the magnetic field can vary greatly over a ray trajectory, particularly when the launch point is above or below the equatorial plane. As the wave penetrates toward the center of the plasma, k∥ increases, causing a decrease in the parallel phase speed and a corresponding decrease in the current drive efficiency, γ. But the assumptions of geometrical optics, namely short wavelength and strong single-pass absorption, are generally not applicable in FWCD scenarios. Eigenmode structure, which is ignored in ray tracing, can play an important role in determining electric field strength and Landau damping rates. In such cases, a full-wave or global solution for the wave fields is desirable. In full-wave calculations such as ORION, k∥ appears as a differential operator (B·∇) in the argument of the plasma dispersion function. Since this leads to a differential system of infinite order, such codes of necessity assume k∥ ≈ kφ = const, where kφ is the toroidal wave number. Thus, it is not possible to correctly include effects of the poloidal magnetic field on k∥. The problem can be alleviated by expressing the electric field as a superposition of poloidal modes, in which case k∥ is purely algebraic. This paper describes a new full-wave calculation, the Poloidal Ion Cyclotron Expansion Solution, which uses poloidal and toroidal mode expansions to solve the wave equation in general flux coordinates. The calculation includes a full solution for E∥ and uses a reduced-order form of the plasma conductivity tensor to eliminate numerical problems associated with resolution of the very short wavelength ion Bernstein wave.

This article draws on the literature of responsible innovation to suggest concrete processes for including rights holders in the “smart” agricultural revolution. It first draws upon historical agricultural research in Canada to highlight how productivist values drove seed innovations with particular consequences for the distribution of power in the food system. Next, the article uses document analysis to suggest that a similar value framework is motivating public investment in smart farming innovations. The article is of interest to smart farming’s decision makers (from farmers to governance actors) and a broader audience – anyone interested in engendering equity through innovation-led societal transitions.

Using linear acoustics, the emitted and scattered ultrasound field can be found by using spatial impulse responses as developed by Tupholme (1969) and Stepanishen (1971). The impulse response is calculated from the Rayleigh integral by summing the spherical waves emitted from all of the aperture...... of the emitting aperture. Summing the angles of the arcs within the aperture readily yields the spatial impulse response for a point in space. The approach makes it possible to make very general calculation routines for arbitrary, flat apertures in which the outline of the aperture is either analytically...... be used for finding analytic solutions to the spatial impulse response for new geometries of, for example, ellipsoidal shape. The approach also makes it easy to incorporate any apodization function and the effect of different transducer baffle mountings. Examples of spatial impulse responses...

Distorted-wave Born approximation (DWBA) calculations are reported for coplanar symmetric ionization of helium at energies of 100 and 200 eV. The best possible one-configuration incident distorted wave functions, together with the capture scattering, have been used to produce better agreement with absolute measurements at 100 eV compared with previous DWBA calculations. However, the discrepancy between experiment and theory at 200 eV for large angles has not been resolved by these modifications. Moreover, capture scattering has been found to be negligible from 28.6 to 200 eV. Similar DWBA calculations for hydrogen close to threshold are also reported. Very good agreement with experiment has been found at 17.6 eV. 20 refs., 4 figs.

We calculate the fusion reaction rates in molecules of hydrogen isotopes. The rates are calculated analytically (for the first time) as an asymptotic expansion in the ratio of the electron mass to the reduced mass of the nuclei. The fusion rates of the P-D, D-D, and D-T reactions are given for a variable electron mass by a simple analytic formula. However, we do not know of any mechanism by which a sufficiently localized electron in a solid can have an 'effective mass' large enough to explain the result of Fleischmann and Pons (FP). This calculation indicates that P-D rates should exceed D-D rates for D-D fusion rates less than approximately 10^-23 per molecule per second. The D-D fusion rate is enhanced by a factor of 10^5 at 10,000 K if the excited vibrational states are populated with a Boltzmann distribution and the rotational excitations are suppressed. The suggestion that the experimental results could be explained by bombardment of cold deuterons by kilovolt deuterons is shown to be unlikely from an energetic point of view. 12 refs., 3 figs., 1 tab.

We present a path-integral Monte Carlo procedure for the fully quantum calculation of the second molecular virial coefficient accounting for intramolecular flexibility. This method is applied to molecular hydrogen (H2) and deuterium (D2) in the temperature range 15–2000 K, showing that the effect of molecular flexibility is not negligible. Our results are in good agreement with experimental data, as well as with virials given by recent empirical equations of state, although some discrepancies are observed for H2 between 100 and 200 K.
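As a much simpler classical analogue of the quantum path-integral calculation above (not the authors' method), the second virial coefficient can be estimated from a pair potential by direct numerical integration of the classical formula B2(T) = -2*pi*NA * Int (exp(-u(r)/kT) - 1) r^2 dr. The Lennard-Jones parameters for H2 below are rough illustrative values.

```python
import math

def b2_classical(temperature, epsilon_over_k, sigma, n=2000, rmax=10.0):
    """Classical second virial coefficient for a Lennard-Jones potential,
    in cm^3/mol; temperature and epsilon/k in K, sigma in angstrom."""
    na = 6.02214076e23                  # Avogadro constant, 1/mol
    dr = rmax * sigma / n               # integrate out to rmax*sigma
    total = 0.0
    for i in range(1, n + 1):
        r = i * dr
        u_over_kt = 4.0 * epsilon_over_k / temperature * (
            (sigma / r) ** 12 - (sigma / r) ** 6)
        # cap the huge repulsive exponent near r -> 0: the Mayer
        # function (exp(-u/kT) - 1) tends to -1 there
        f = -1.0 if u_over_kt > 500 else math.exp(-u_over_kt) - 1.0
        total += f * r * r * dr
    return -2.0 * math.pi * na * total * 1e-24   # angstrom^3 -> cm^3

# rough, illustrative LJ parameters for H2: eps/k ~ 34 K, sigma ~ 2.96 A
print(b2_classical(300.0, 34.0, 2.96))
```

The classical estimate reproduces the qualitative behaviour (negative at low temperature, positive above the Boyle temperature); the quantum and flexibility corrections the paper computes are exactly what this sketch leaves out.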

Large systems of interacting particles are often treated by assuming that the effect on any one particle of the remaining N-1 may be approximated by an average potential. This approach reduces the problem to that of finding the bound-state solutions for a particle in a potential; statistical mechanics is then used to obtain the properties of the many-body system. In some physical systems this approach may not be acceptable, because the two-body force component cannot be treated in this one-body limit. A technique for incorporating two-body forces in such calculations in a more realistic fashion is described. 1 figure

A new formulation is presented in this paper to solve the inverse kinetics equation. This method is based on the Laplace transform of the point kinetics equations, resulting in an expression equivalent to the inverse kinetics equation as a function of the power history. Reactivity can be written in terms of a summation of convolutions with impulse responses, characteristic of a linear system. For its digital form the Z-transform is used, which is the discrete version of the Laplace transform. This new method of reactivity calculation has very special features, among which it can be pointed out that the linear part is characterized by a finite impulse response (FIR) filter. The FIR filter is always stable and time-invariant and, in addition, can be implemented in non-recursive form. This type of implementation does not require feedback, allowing the calculation of reactivity in a continuous way.
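The FIR idea above can be sketched for the delayed-neutron term of the inverse point kinetics equation, which becomes a discrete convolution of the power history with truncated exponential kernels. This is a minimal illustration, not the paper's Z-transform derivation; the one-group kinetics parameters are arbitrary illustrative values.

```python
import math

def reactivity(power, dt, betas, lambdas, gen_time):
    """Inverse point kinetics: reactivity (delta-k/k) from a power history.
    The delayed-neutron integral is evaluated as a discrete convolution
    with truncated exponential FIR kernels, one per delayed group."""
    beta = sum(betas)
    n = len(power)
    # FIR kernels: lambda_i * exp(-lambda_i * t), truncated to the history
    kernels = [[lam * math.exp(-lam * k * dt) for k in range(n)]
               for lam in lambdas]
    rho = []
    for j in range(1, n):
        dpdt = (power[j] - power[j - 1]) / dt
        delayed = 0.0
        for b, kern in zip(betas, kernels):
            conv = sum(kern[j - m] * power[m] for m in range(j + 1)) * dt
            delayed += b * conv
        rho.append(beta + gen_time * dpdt / power[j] - delayed / power[j])
    return rho

# steady power for 200 s: the computed reactivity should approach zero
dt, n = 0.1, 2000
power = [1.0] * n
rho = reactivity(power, dt, [0.0065], [0.1], 1e-4)
```

Because the kernels are fixed in advance, the delayed term is exactly a non-recursive (FIR) filter applied to the power history, which is the stability property the abstract emphasizes.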

A method has been outlined for calculating the flux front profile for a superconducting sample in either a uniform or nonuniform applied magnetic field possessing azimuthal symmetry. This technique relies upon finding a surface with zero vector potential. This surface is determined by simple integration of its derivative with respect to the external field, found by solving a linear integral equation of the first kind. Measurement-induced voltages and the entire hysteresis loop response can be found by extension of the ZFC magnetization response with increasing external field. Other experimentally measured quantities relating to the critical state can be calculated directly from the hysteresis loop if the time dependence of the external field is known. The technique shown in this report for solving the critical state model in the Bean approximation can be extended to field-dependent critical currents and other azimuthally symmetric external fields. This work is presently in progress.
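For the simplest geometry, the Bean critical-state magnetization referred to above has a closed form. The sketch below gives the ZFC initial curve and the descending hysteresis branch for an infinite slab in a parallel field; it is a textbook illustration, not the report's azimuthally symmetric vector-potential method.

```python
def bean_zfc_magnetization(h, h_star):
    """Zero-field-cooled initial magnetization of an infinite slab in the
    Bean critical-state model (field parallel to the slab faces).
    h_star = Jc * a is the full-penetration field, a the slab half-width."""
    if h <= h_star:
        return -h + h * h / (2.0 * h_star)
    return -h_star / 2.0

def bean_reverse_branch(h, h_max, h_star):
    """Descending branch of the hysteresis loop after ramping the applied
    field up to h_max >= h_star and then decreasing it to h."""
    dh = h_max - h
    if dh >= 2.0 * h_star:
        return h_star / 2.0
    return -h_star / 2.0 + dh - dh * dh / (4.0 * h_star)
```

Sweeping `h` through a full cycle with these two branches traces the familiar Bean hysteresis loop; field-dependent critical currents, as the report notes, require replacing the constant `h_star` with an integral.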

Histomorphometric semi-automatic image analysis of cross-sections of 101 femoral diaphyseal bone sections was performed to reconsider to what degree osteon remodelling in the outer cortex is affected by age. The data were analysed statistically using the generalized least squares method. The model... estimation, which includes the covariance matrix of four single equation residuals, improves the accuracy of age determination. The standard deviation of age prediction, however, remains 12.58 years. An experimental split of the data was made in order to demonstrate that the use of subgroups gives a false impression of higher precision of age determination. The present study demonstrates that determination of age at death through microscopic bone morphometry is considerably less precise than generally stated in the literature.

Flat 2024-T3 aluminum panels measuring 11 inches by 13 inches were tested in the near noise fields of a 4-inch air jet and a turbojet engine. The stresses developed in the panels are compared with those calculated by generalized harmonic analysis; the calculated and measured stresses were found to be in good agreement. In order to make the stress calculations, supplementary data relating to the transfer characteristics, damping, and static response of flat and curved panels under periodic loading are necessary and were determined experimentally. In addition, an appendix containing detailed data on the near pressure field of the turbojet engine is included.

Improvements to MC dose calculation speed have been made within the European project MAESTRO by the development of the fast MC code PENFAST, and within the TELEDOS project by the parallelization of this code. This PhD work, based on these two projects, focuses on the evaluation of the technical and dosimetric performance of the MC code. These issues are crucial before the code can be used in clinical applications. First, the variance reduction techniques included in the MC code, as well as the parallelization of the calculation, were validated and evaluated in terms of the gain in computing time. The second part of this work presents a new, fast and accurate method to determine the initial energy spectrum of the accelerator. This spectrum is required for the MC dose calculation. Afterwards, dose calculations with the fast MC code PENFAST were evaluated under metrological and clinical conditions. The results showed the ability of the MC code to quickly calculate an accurate dose in both photon and electron modes, even in electronic disequilibrium situations. However, this study revealed an uncertainty, in the TPS-MC, in the conversion of the CT image to the voxelized geometry used for the MC dose calculation. The quality of this voxelization may be improved through artefact-correction software and by including additional materials in the code's database. (author)

The third-order transfer matrices are calculated for an electrostatic toroidal sector condenser using a rigorously conserved matrix method that implies conservation of the beam phase volume at each step of the calculations. The transfer matrices (matrizants) obtained include the fringing-field effect due to stray fields. In the case of a rectangular distribution of the field components along the optical axis, analytical expressions for all aberration coefficients, including the dispersion ones, are derived accurate to third-order terms. In simulations of real fields with a nonzero stray-field width, a smooth distribution of the field components is used, for which similar aberration coefficients were calculated by means of the conserved numerical method. It has been found that for the smooth model, as the stray-field width tends to zero, the aberration coefficients approach the corresponding aberration values of the rectangular model.

The ICECON computer code provides a method for conservatively calculating the long-term back-pressure transient in the containment resulting from a hypothetical Loss-of-Coolant Accident (LOCA) for PWR plants with ice condenser containment systems. The ICECON computer code was developed from the CONTEMPT/LT-022 code. A brief discussion of the salient features of a typical ice condenser containment is presented. Details of the ice condenser models are explained. The corrections and improvements made to CONTEMPT/LT-022 are included. The organization of the code, including the calculational procedure, is outlined. The user's manual (to be used in conjunction with the CONTEMPT/LT-022 user's manual), a sample problem, a time-step study (solution convergence) and a comparison of ICECON results with the results of the NSSS vendor are presented. In general, containment pressures calculated with the ICECON code agree with those calculated by the NSSS vendor using the same mass and energy release rates to the containment.

The aim of this study was to assess the accuracy of a risk calculator that includes renal function as compared with that of the traditional Framingham Risk Score (FRS) in predicting the risk of mortality of hypertensive individuals managed in primary care. From the databases of British and Italian general practitioners, we retrieved demographic and clinical data for 35 101 UK and 27 818 Italian individuals aged 35-74 years with a diagnosis of hypertension. Then, the 5-year incidence of cardiovascular events as well as all-cause and cardiovascular mortality were recorded for both samples. A comparison analysis of the performance of the Individual Data Analysis of Antihypertensive Intervention Trials (INDANA) calculator with that of the FRS in predicting 5-year all-cause and cardiovascular mortality risk was made. The INDANA calculator was more accurate than the FRS in predicting all-cause mortality [Δc 0.038, 95% confidence interval (CI) 0.026-0.051 for the United Kingdom, and 0.018, 95% CI 0.010-0.027 for Italy; both P ...]. Using the calculator, 20% of the UK and 10% of the Italian patients were reclassified to higher risk classes for all-cause mortality, and 25 and 28%, respectively, were reclassified when cardiovascular mortality was assessed (P ...). The INDANA calculator proved to be more accurate than the FRS in predicting the risk of mortality in hypertensive patients and should be considered for systematic adoption for risk stratification of hypertensive individuals managed in primary care.

Due to a number of causes (the finite number of toroidal field coils, or the presence of concentrated blocks of magnetic materials such as the neutral beam shielding), the actual magnetic configuration in a Tokamak differs from the desired one. For example, a ripple is added to the ideal axisymmetric toroidal field, affecting the equilibrium and stability of the plasma column; as a further example, the magnetic field outside the plasma affects the operation of a number of critical components, including the diagnostic system and the neutral beam. Therefore the actual magnetic field has to be suitably calculated and its shape controlled within the required limits. Due to the complexity of its design, the problem is quite critical for the ITER project. In this paper the problem is discussed from both the mathematical and the numerical point of view. In particular, a complete formulation is proposed, taking into account both the presence of nonlinear magnetic materials and the fully 3D geometry. Then the quality-level requirements are discussed, including the accuracy of the calculations and the spatial resolution. As a consequence, numerical tools able to fulfil the quality needs while requiring a reasonable computational burden are considered. In particular, possible tools based on numerical FEM schemes are considered; in addition, in spite of the presence of nonlinear materials, the practical possibility of using Biot-Savart-based approaches as cross-check tools is also discussed. The paper also analyses possible simplifications of the geometry able to make the actual calculation feasible while guaranteeing the required accuracy. Finally, the characteristics required for a correction system able to effectively counteract the magnetic field degradation are presented. A number of examples are also reported and commented on. (author)
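A Biot-Savart cross-check of the kind mentioned above can be sketched by summing the contributions of straight current segments; this ignores magnetic materials entirely and is only meaningful for the source (coil) field. The loop discretization below is an illustrative test case with a known analytic answer at the loop centre, B = mu0*I/(2R).

```python
import math

def biot_savart(segments, current, point):
    """Magnetic field (tesla) at `point` from straight current segments,
    summing mu0*I/(4*pi) * dl x r / |r|^3 evaluated at segment midpoints."""
    mu0_over_4pi = 1e-7
    bx = by = bz = 0.0
    for (x1, y1, z1), (x2, y2, z2) in segments:
        dlx, dly, dlz = x2 - x1, y2 - y1, z2 - z1
        mx, my, mz = (x1 + x2) / 2, (y1 + y2) / 2, (z1 + z2) / 2
        rx, ry, rz = point[0] - mx, point[1] - my, point[2] - mz
        r3 = (rx * rx + ry * ry + rz * rz) ** 1.5
        bx += (dly * rz - dlz * ry) / r3
        by += (dlz * rx - dlx * rz) / r3
        bz += (dlx * ry - dly * rx) / r3
    return (mu0_over_4pi * current * bx,
            mu0_over_4pi * current * by,
            mu0_over_4pi * current * bz)

# circular loop, radius 1 m, in the x-y plane, as 360 straight segments
n, radius, current = 360, 1.0, 1000.0
pts = [(radius * math.cos(2 * math.pi * k / n),
        radius * math.sin(2 * math.pi * k / n), 0.0) for k in range(n)]
segments = [(pts[k], pts[(k + 1) % n]) for k in range(n)]
b = biot_savart(segments, current, (0.0, 0.0, 0.0))
```

Comparing such a direct summation against an FEM solution in air regions is exactly the kind of cross-check the paper discusses; the midpoint approximation converges quickly as the segment count grows.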

Greenhouse gas (GHG) emissions related to feed production are one of the hotspots in livestock production. The aim of this paper was to estimate the carbon footprint of different feedstuffs for dairy cattle using life cycle assessment (LCA). The functional unit was ‘1 kg dry matter (DM) of feed ready to feed’. Included in the study were fodder crops that are grown in Denmark and typically used on Danish cattle farms. The contributions from the growing, processing and transport of feedstuffs were included, as were the changes in soil carbon (soil C) and from land use change (LUC). For each fodder crop, an individual production scheme was set up as the basis for calculating the carbon footprint (CF). In the calculations, all fodder crops were fertilized by artificial fertilizer, based on the assumption that the environmental burden of using manure is related to the livestock production...

A methodology that supports forced transient response dynamic solutions when both static and kinetic friction effects are included in a structural system model is described. Modifications that support this type of nonlinear transient response solution are summarized for the transient response dynamics (TRD) NASTRAN module. An overview of specific modifications to the NASTRAN processing subroutines INITL, TRD1C, and TRD1D is given, with further details regarding inspection of the nonlinear input definitions to determine the type of nonlinear solution required, along with additional initialization requirements and specific calculation subroutines needed to successfully solve the transient response problem. The extension of the basic NASTRAN nonlinear methodology is presented through several stages of development, to the point where constraint equations and residual flexibility effects are introduced into the finite difference Newmark-Beta recursion formulas. Particular emphasis is placed on cost-effective solutions for large finite element models such as the Space Shuttle with friction degrees of freedom between the orbiter and payloads mounted in the cargo bay. An alteration to the dynamic finite difference equations of motion is discussed, which allows one to include friction effects at reasonable cost for large structural systems such as the Space Shuttle. Data are presented to indicate the possible impact of transient friction loads on the payload designer for the Space Shuttle. Transient response solution data are also included, comparing solutions without friction forces and those with friction forces for payloads mounted in the Space Shuttle cargo bay. These data indicate that payload components can be sensitive to friction-induced loads.
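The Newmark-Beta recursion with friction can be illustrated on a single degree of freedom. This is a generic sketch, not the NASTRAN TRD implementation: the Coulomb friction force is treated explicitly from the previous step's velocity sign, and the average-acceleration parameters (beta = 1/4, gamma = 1/2) are assumed.

```python
def newmark_friction(m, k, c, f_slide, force, dt, beta=0.25, gamma=0.5):
    """Newmark-beta transient response of a single-DOF oscillator
    (mass m, stiffness k, damping c) with a Coulomb sliding force
    f_slide opposing the previous step's velocity."""
    x, v = 0.0, 0.0
    a = force[0] / m                       # initial acceleration from rest
    xs = [x]
    for fn in force[1:]:
        fric = -f_slide * (1 if v > 0 else -1 if v < 0 else 0)
        # effective stiffness and load for the Newmark-beta scheme
        keff = k + gamma / (beta * dt) * c + m / (beta * dt * dt)
        feff = (fn + fric
                + m * (x / (beta * dt * dt) + v / (beta * dt)
                       + (0.5 / beta - 1) * a)
                + c * (gamma / (beta * dt) * x + (gamma / beta - 1) * v
                       + dt * (gamma / (2 * beta) - 1) * a))
        xn = feff / keff
        an = (xn - x - dt * v) / (beta * dt * dt) - (0.5 / beta - 1) * a
        vn = v + dt * ((1 - gamma) * a + gamma * an)
        x, v, a = xn, vn, an
        xs.append(x)
    return xs
```

Without friction and damping, a suddenly applied constant load makes the oscillator swing between zero and twice the static deflection; adding the sliding force visibly caps and decays the response, which is the qualitative effect the paper quantifies for cargo-bay payloads.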

Highlights: • The 24 Λ–S states correlated to the dissociation limit of Si(3Pg) + Si+(2Pu) are reported for the first time. • The dissociation energies of the calculated electronic states are predicted in our work. • For the first time, the entire set of 54 Ω states generated from the 24 Λ–S states has been studied. • PECs of the Λ–S and Ω states are depicted with the aid of the avoided-crossing rule between states of the same symmetry. - Abstract: Ab initio all-electron relativistic calculations of the low-lying excited states of Si2+ have been performed at the MRCI+Q/AVQZ level. The calculated electronic states, including 12 doublet and 12 quartet Λ–S states, are correlated to the dissociation limit of Si(3Pg) + Si+(2Pu). Spin–orbit interaction is taken into account via the state-interaction approach with the full Breit–Pauli Hamiltonian, which causes the 24 Λ–S states to split into 54 Ω states. This is the first time that a spin–orbit coupling (SOC) calculation has been performed on Si2+. The obtained potential energy curves (PECs) of the Λ–S and Ω states are depicted with the aid of the avoided-crossing rule between states of the same symmetry. The spectroscopic constants of the bound Λ–S and Ω states are determined, and excellent agreement with the latest theoretical results is achieved.

The primary purpose of the Global Warming Potential (GWP) is to compare the effectiveness of emission strategies for various greenhouse gases to those for CO2; GWPs are thus quite sensitive to the amount of CO2. Unlike all other gases emitted into the atmosphere, CO2 does not have a chemical or photochemical sink within the atmosphere. Removal of CO2 is therefore dependent on exchanges with other carbon reservoirs, namely the ocean and the terrestrial biosphere. Climatically induced changes in ocean circulation or marine biological productivity could significantly alter the atmospheric CO2 lifetime. Moreover, continuing forest destruction, nutrient limitations or temperature-induced increases in respiration could also dramatically change the lifetime of CO2 in the atmosphere. Determination of the current CO2 sinks, and of how these sinks are likely to change with increasing CO2 emissions, is crucial to the calculation of GWPs. It is interesting to note that the impulse response function is sensitive to the initial state of the ocean-atmosphere system into which CO2 is emitted. This is because, in our model, the CO2 flux from the atmosphere to the mixed layer is a nonlinear function of ocean surface total carbon.
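The role of the CO2 impulse response function in GWP calculations can be sketched with a multi-exponential decay fit. The coefficients below are representative Bern-type values used purely for illustration, not this paper's ocean-atmosphere model; the GWP of another gas is then its absolute GWP divided by the CO2 value computed here.

```python
import math

# illustrative Bern-type CO2 impulse response function: fraction of an
# emission pulse remaining airborne after t years (representative
# coefficients, not a specific assessment's fit)
A = [0.217, 0.259, 0.338, 0.186]
TAU = [float('inf'), 172.9, 18.51, 1.186]   # years; first term is permanent

def co2_remaining(t):
    """Airborne fraction of a CO2 pulse after t years."""
    return sum(a * (1.0 if math.isinf(tau) else math.exp(-t / tau))
               for a, tau in zip(A, TAU))

def agwp_co2(horizon, dt=0.1):
    """Absolute GWP of CO2: time-integrated airborne fraction over the
    horizon (years), to be scaled by the radiative efficiency."""
    steps = int(horizon / dt)
    return sum(co2_remaining((i + 0.5) * dt) for i in range(steps)) * dt

print(co2_remaining(100.0), agwp_co2(100.0))
```

With these illustrative coefficients roughly a third of a pulse remains airborne after a century, which is why, as the abstract stresses, changes in the ocean and biosphere sinks feed directly into GWP values.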

developed for calculation and synchronization purposes. The data exchange is realized by means of the Parallel Virtual Machine (PVM) software package. In this contribution, steady-state and transient results for a quarter of a PWR fuel assembly with cold water injection are presented and compared with results obtained from a RELAP5/PARCS v2.7 coupled calculation. A simplified model for the spacers has been included. A methodology has been introduced to take into account the pressure drop and the turbulence enhancement produced by the spacers. (author)

A model has been developed to calculate the ground-state rotational populations of homonuclear diatomic molecules in kinetic gases, including the effects of electron-impact excitation, wall collisions, and gas feed rate. The equations are exact within the accuracy of the cross sections used and of the assumed equilibrating effect of wall collisions. It is found that the inflow of feed gas and equilibrating wall collisions can significantly affect the rotational distribution in competition with non-equilibrating electron-impact effects. The resulting steady-state rotational distributions are generally Boltzmann for N ≥ 3, with a rotational temperature between the wall and feed-gas temperatures. The N = 0, 1, 2 rotational level populations depend sensitively on the relative rates of electron-impact excitation versus wall collision and gas feed rates.
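A minimal sketch of the Boltzmann rotational distribution the steady state tends toward for N ≥ 3 is given below. It assumes a rigid rotor with an illustrative rotational constant, and deliberately ignores nuclear-spin statistical weights, which do matter for real homonuclear molecules; it is not the paper's full kinetic model.

```python
import math

def boltzmann_rotational_populations(T_rot, B_cm, n_max=20):
    """Fractional populations of rotational levels N = 0..n_max at
    rotational temperature T_rot (K), for a rigid rotor with rotational
    constant B_cm (cm^-1). Nuclear-spin weights are ignored here."""
    k_cm = 0.6950356  # Boltzmann constant in cm^-1 per K
    weights = [(2 * N + 1) * math.exp(-B_cm * N * (N + 1) / (k_cm * T_rot))
               for N in range(n_max + 1)]
    Z = sum(weights)  # partition sum over the included levels
    return [w / Z for w in weights]

# Illustrative: a rotational temperature between a 300 K wall and a
# 400 K feed gas, with a hypothetical B of 2 cm^-1
pops = boltzmann_rotational_populations(T_rot=350.0, B_cm=2.0)
```

The low-N populations that the paper finds to be sensitive to electron-impact versus wall-collision rates are exactly the entries `pops[0]`, `pops[1]`, `pops[2]` that this equilibrium sketch cannot capture.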

The EMPLOY project aimed to help achieve the IEA-RETD's objective to 'empower policy makers and energy market actors through the provision of information, tools and resources' by underlining the economic and industrial impacts of renewable energy technology deployment and providing reliable methodological approaches for estimating employment, similar to those available for the incumbent energy technologies. The EMPLOY project resulted in a comprehensive set of methodological guidelines for estimating the employment impacts of renewable energy deployment in a coherent, uniform and systematic way. Guidelines were prepared for four different methodological approaches. In the introduction to the guidelines, policy makers are guided in choosing the most suitable approach, depending on the policy questions to be answered, data availability and budget. The guidelines were tested for the IEA-RETD member countries and Tunisia. The results of these calculations are included in the annex to the guidelines.

This paper describes the methodology and application of the computer model CLEAR (Calculates Logical Evacuation And Response) which estimates the time required for a specific population density and distribution to evacuate an area using a specific transportation network. The CLEAR model simulates vehicle departure and movement on a transportation network according to the conditions and consequences of traffic flow. These include handling vehicles at intersecting road segments, calculating the velocity of travel on a road segment as a function of its vehicle density, and accounting for the delay of vehicles in traffic queues. The program also models the distribution of times required by individuals to prepare for an evacuation. In order to test its accuracy, the CLEAR model was used to estimate evacuation times for the emergency planning zone surrounding the Beaver Valley Nuclear Power Plant. The Beaver Valley site was selected because evacuation time estimates had previously been prepared by the licensee, Duquesne Light, as well as by the Federal Emergency Management Agency and the Pennsylvania Emergency Management Agency. A lack of documentation prevented a detailed comparison of the estimates based on the CLEAR model and those obtained by Duquesne Light. However, the CLEAR model results compared favorably with the estimates prepared by the other two agencies. (author)
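The abstract does not give CLEAR's actual speed-density relation, so as an illustration of how velocity on a road segment can be modeled as a function of its vehicle density, here is a sketch using the classic Greenshields linear model with hypothetical free-flow speed and jam density:

```python
def link_speed(density, v_free=55.0, k_jam=200.0):
    """Speed (mph) on a road segment as a decreasing function of vehicle
    density (veh/mile/lane). Greenshields linear model, used here only
    as a stand-in for whatever relation CLEAR actually implements."""
    k = min(max(density, 0.0), k_jam)  # clamp density to [0, jam]
    return v_free * (1.0 - k / k_jam)

def travel_time(length_miles, density):
    """Traversal time (hours) for a segment; infinite at jam density,
    which is how a fully queued link would stall an evacuation route."""
    v = link_speed(density)
    return float('inf') if v <= 0.0 else length_miles / v
```

In a network simulation of this kind, per-segment travel times like these would be recomputed each time step as vehicles move and queues form.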

In proton therapy, the relative biological effectiveness (RBE) depends on various types of parameters such as linear energy transfer (LET). An analytical model for LET calculation exists (Wilkens' model), but secondary particles are not included in this model. In the present study, we propose a correction factor, Lsec, for Wilkens' model in order to take into account the LET contributions of certain secondary particles. This study includes secondary protons and deuterons, since the effects of these two types of particles can be described by the same RBE-LET relationship. Lsec was evaluated by Monte Carlo (MC) simulations using the GATE/GEANT4 platform and was defined by the ratio of the LETd distributions of all protons and deuterons to that of primary protons only. This method was applied to the innovative Pencil Beam Scanning (PBS) delivery systems, and Lsec was evaluated along the beam axis. This correction factor indicates the high contribution of secondary particles in the entrance region, with Lsec values higher than 1.6 for a 220 MeV clinical pencil beam. MC simulations showed the impact of pencil beam parameters, such as mean initial energy, spot size, and depth in water, on Lsec. The variation of Lsec with these different parameters was integrated into a polynomial function of the Lsec factor in order to obtain a model universally applicable to all PBS delivery systems. The validity of this correction factor applied to Wilkens' model was verified along the beam axis of various pencil beams in comparison with MC simulations. Good agreement was obtained between the corrected analytical model and the MC calculations, with mean-LET deviations along the beam axis of less than 0.05 keV μm^-1. These results demonstrate the efficacy of our new correction of the existing LET model in order to take into account secondary protons and deuterons along the pencil beam axis. (paper)
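As a rough illustration of the ratio that defines the correction factor, the sketch below computes a dose-averaged LET and the Lsec ratio from toy per-voxel dose and LET values; real inputs would come from GATE/GEANT4 scoring, which is not reproduced here.

```python
def dose_averaged_let(doses, lets):
    """Dose-averaged LET: sum(d_i * L_i) / sum(d_i), over scored voxels
    or track segments (toy lists here, not MC scorer output)."""
    return sum(d * L for d, L in zip(doses, lets)) / sum(doses)

def l_sec(dose_all, let_all, dose_primary, let_primary):
    """Correction factor: LET_d of all protons and deuterons divided by
    LET_d of primary protons only, at one position on the beam axis."""
    return (dose_averaged_let(dose_all, let_all)
            / dose_averaged_let(dose_primary, let_primary))
```

Evaluated at successive depths, values of this ratio above 1 in the entrance region would reflect the secondary-particle contribution the paper reports.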

A Fortran 77 routine has been developed to calculate confidence intervals with and without systematic uncertainties, using a frequentist confidence interval construction with a Bayesian treatment of the systematic uncertainties. The routine can account for systematic uncertainties in the background prediction and in the signal/background efficiencies. The uncertainties may be separately parametrized by a Gaussian, log-normal or flat probability density function (PDF), though since a Monte Carlo approach is chosen to perform the necessary integrals, a generalization to other parametrizations is particularly simple. Full correlation between signal and background efficiency is optional. The ordering schemes currently supported for the frequentist construction are likelihood-ratio ordering (also known as Feldman-Cousins) and Neyman ordering. Optionally, both schemes can be used with conditioning, meaning the probability density function is conditioned on the fact that the actual outcome of the background process cannot have been larger than the number of observed events. Program summary: Title of program: POLE version 1.0. Catalogue identifier: ADTA. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADTA. Program available from: CPC Program Library, Queen's University of Belfast, N. Ireland. Licensing provisions: none. Computer for which the program is designed: DELL PC, 1 GB, 2.0 GHz Pentium IV. Operating system under which the program has been tested: RH Linux 7.2, kernel 2.4.7-10. Programming language used: Fortran 77. Memory required to execute with typical data: ~1.6 Mbytes. No. of bytes in distributed program, including test data, etc.: 373745. No. of lines in distributed program, including test data, etc.: 2700. Distribution format: tar gzip file. Keywords: confidence interval calculation, systematic uncertainties. Nature of the physical problem: to calculate a frequentist confidence interval on the parameter of a Poisson process with known background in presence of
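POLE itself is a Fortran 77 program; as a simplified illustration of the likelihood-ratio (Feldman-Cousins) ordering it supports, the sketch below builds the 90% acceptance band in the observed count n for a Poisson signal plus known background, with no systematic uncertainties or conditioning included.

```python
import math

def poisson_pmf(n, mu):
    """P(n | mu) for a Poisson process."""
    return math.exp(-mu) * mu**n / math.factorial(n)

def feldman_cousins_band(mu, b, n_max=50, cl=0.90):
    """Acceptance interval in n for signal strength mu and known
    background b, using likelihood-ratio ordering: rank each n by
    P(n|mu+b) / P(n|mu_best+b), where mu_best is the physical
    (non-negative) best-fit signal for that n."""
    ranks = []
    for n in range(n_max):
        mu_best = max(0.0, n - b)
        r = poisson_pmf(n, mu + b) / poisson_pmf(n, mu_best + b)
        ranks.append((r, n))
    ranks.sort(reverse=True)          # accept highest-ranked n first
    accepted, p = [], 0.0
    for r, n in ranks:
        accepted.append(n)
        p += poisson_pmf(n, mu + b)
        if p >= cl:
            break
    return min(accepted), max(accepted)
```

A confidence interval for an observed n would then be obtained, as in the standard construction, by scanning mu and collecting the values whose band contains n.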

The heats of formation and the normalized clustering energies (NCEs) for the group 4 and group 6 transition metal oxide (TMO) trimers and tetramers have been calculated by the Feller-Peterson-Dixon (FPD) method. The heats of formation predicted by the FPD method do not differ much from those previously derived from the NCEs at the CCSD(T)/aT level, except for the CrO3 nanoclusters. New and improved heats of formation for Cr3O9 and Cr4O12 were obtained using PW91 orbitals instead of Hartree-Fock (HF) orbitals. Diffuse functions are necessary to predict accurate heats of formation. The fluoride affinities (FAs) are calculated with the CCSD(T) method. The relative energies (REs) of different isomers, NCEs, electron affinities (EAs), and FAs of (MO2)n (M = Ti, Zr, Hf; n = 1–4) and (MO3)n (M = Cr, Mo, W; n = 1–3) clusters have been benchmarked with 55 exchange-correlation DFT functionals, including both pure and hybrid types. The absolute errors of the DFT results are mostly less than ±10 kcal/mol for the NCEs and the EAs, and less than ±15 kcal/mol for the FAs. Hybrid functionals usually perform better than pure functionals for the REs and NCEs. The performance of the two types of functionals in predicting EAs and FAs is comparable. The B1B95 and PBE1PBE functionals provide reliable energetic properties for most isomers. Long-range-corrected pure functionals usually give poor FAs. The standard deviation of the absolute error is always close to the mean error, and the probability distributions of the DFT errors are often not Gaussian (normal). The breadth of the distribution of errors and the maximum probability depend on the energy property and the isomer.

The new R-matrix package for comprehensive close-coupling calculations of electron scattering from the first three ions in the boron isoelectronic sequence, the astrophysically significant C(+), N(2+), and O(3+), is presented. The collision strengths are calculated in the LS coupling approximation, as well as in the pair-coupling scheme, for transitions among the fine-structure sublevels. Calculations are carried out at a large number of energies in order to study the detailed effects of autoionizing resonances.

Direct mathematical methods to calculate total and full-energy peak (photopeak) efficiencies, coincidence correction factors, and the source self-absorption of a closed-end coaxial HPGe detector for Marinelli beaker sources have been derived. The source self-absorption is determined by calculating the photon path length in the source volume. The attenuation of photons by the Marinelli beaker and the detector cap materials is also calculated. In the experiments, aqueous gamma sources containing several radionuclides covering the energy range from 60 to 1836 keV were used. By comparison, the theoretical and experimental full-energy peak efficiency values are in good agreement
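The self-absorption and attenuation corrections rest on Beer-Lambert attenuation along the photon path; a minimal sketch is below, with the linear attenuation coefficient and the sampled path lengths as placeholder inputs rather than values from the paper, whose direct (non-sampled) geometric method is not reproduced here.

```python
import math

def transmitted_fraction(mu_linear, path_cm):
    """Beer-Lambert survival probability of a photon traversing a path
    of length path_cm through a medium with linear attenuation
    coefficient mu_linear (cm^-1)."""
    return math.exp(-mu_linear * path_cm)

def self_absorption_factor(mu_source, path_lengths):
    """Average survival probability over photon path lengths within the
    source volume (the paths would come from the geometry of the
    Marinelli beaker relative to the detector)."""
    return sum(math.exp(-mu_source * s) for s in path_lengths) / len(path_lengths)
```

The beaker wall and detector cap corrections mentioned in the abstract would multiply in as further `transmitted_fraction` terms, one per intervening layer.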

An algorithm for the inclusion of both Dirac phenomenological potentials and an exact treatment of finite-range effects within the DWBA is presented. The numerical implementation of this algorithm is used to calculate low-energy deuteron stripping cross sections, analyzing powers, and polarizations. These calculations are compared with experimental data where available. The impact of using several commonly employed nuclear potentials (Reid soft-core, Bonn, Argonne v18) for the internal deuteron wave function is also examined.

We study measures of the amount of time required for transient flow in heterogeneous porous media to effectively reach steady state, also known as the response time. Here, we develop a new approach that extends the concept of mean action time. Previous applications of the theory of mean action time to estimate the response time use the first two central moments of the probability density function associated with the transition from the initial condition, at t = 0, to the steady state condition that arises in the long time limit, as t → ∞. This previous approach leads to a computationally convenient estimation of the response time, but the accuracy can be poor. Here, we outline a powerful extension using the first k raw moments, showing how to produce an extremely accurate estimate by making use of asymptotic properties of the cumulative distribution function. Results are validated using an existing laboratory-scale data set describing flow in a homogeneous porous medium. In addition, we demonstrate how the results also apply to flow in heterogeneous porous media. Overall, the new method is: (i) extremely accurate; and (ii) computationally inexpensive. In fact, the computational cost of the new method is orders of magnitude less than the computational effort required to study the response time by solving the transient flow equation. Furthermore, the approach provides a rigorous mathematical connection with the heuristic argument that the response time for flow in a homogeneous porous medium is proportional to L²/D, where L is a relevant length scale, and D is the aquifer diffusivity. Here, we extend such heuristic arguments by providing a clear mathematical definition of the proportionality constant.
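The L²/D scaling can be checked numerically. The sketch below estimates a response time at the midpoint of a 1D homogeneous domain as T = ∫(1 − g) dt, where g is the normalized progress from the initial condition toward steady state; it is a brute-force transient solve, i.e. the expensive baseline the paper's moment-based method is designed to avoid, not the method itself.

```python
import numpy as np

def response_time(L=1.0, D=1.0, nx=101, dt_factor=0.25, tol=1e-8):
    """Mean-action-time-style response time at the midpoint for
    u_t = D u_xx on (0, L), with u(x, 0) = 0, u(0, t) = 1, u(L, t) = 0,
    estimated as T = sum over steps of (1 - g) * dt."""
    x = np.linspace(0.0, L, nx)
    dx = x[1] - x[0]
    dt = dt_factor * dx * dx / D          # stable explicit step
    u = np.zeros(nx)
    u[0] = 1.0                            # step change at left boundary
    u_ss = 1.0 - x / L                    # linear steady state
    mid = nx // 2
    T, t = 0.0, 0.0
    while abs(u[mid] - u_ss[mid]) > tol * u_ss[mid]:
        g = u[mid] / u_ss[mid]            # normalized progress at midpoint
        T += (1.0 - g) * dt
        u[1:-1] += D * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
        t += dt
        if t > 100.0 * L * L / D:         # safety cutoff
            break
    return T
```

Doubling L quadruples the estimated response time, which is the heuristic T ∝ L²/D that the paper makes rigorous by pinning down the proportionality constant.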

We have introduced a theoretical scheme for the efficient description of the optical response of a cluster based on the time-dependent density functional theory. The practical implementation is done by means of the fully fledged time-dependent local density approximation scheme, which is solved directly in the time domain without any linearization. As an example we consider the simple Na2 cluster and compute its surface plasmon photoabsorption cross section, which is in good agreement with the experiments.

The purpose of this scientific analysis is to define the sampled values of stochastic (random) input parameters for (1) rockfall calculations in the lithophysal and nonlithophysal zones under vibratory ground motions, and (2) structural response calculations for the drip shield and waste package under vibratory ground motions. This analysis supplies: (1) Sampled values of ground motion time history and synthetic fracture pattern for analysis of rockfall in emplacement drifts in nonlithophysal rock (Section 6.3 of ''Drift Degradation Analysis'', BSC 2004 [DIRS 166107]); (2) Sampled values of ground motion time history and rock mechanical properties category for analysis of rockfall in emplacement drifts in lithophysal rock (Section 6.4 of ''Drift Degradation Analysis'', BSC 2004 [DIRS 166107]); (3) Sampled values of ground motion time history and metal-to-metal and metal-to-rock friction coefficient for analysis of waste package and drip shield damage due to vibratory motion in ''Structural Calculations of Waste Package Exposed to Vibratory Ground Motion'' (BSC 2004 [DIRS 167083]) and in ''Structural Calculations of Drip Shield Exposed to Vibratory Ground Motion'' (BSC 2003 [DIRS 163425]). The sampled values are indices representing the number of ground motion time histories, number of fracture patterns and rock mass properties categories. These indices are translated into actual values within the respective analysis and model reports or calculations. This report identifies the uncertain parameters and documents the sampled values for these parameters. The sampled values are determined by GoldSim V6.04.007 [DIRS 151202] calculations using appropriate distribution types and parameter ranges. No software development or model development was required for these calculations. The calculation of the sampled values allows parameter uncertainty to be incorporated into the rockfall and structural response calculations that support development of the seismic scenario for the

When delivered over a specific cortical site, TMS can temporarily disrupt the ongoing process in that area. This allows mapping of speech-related areas for preoperative evaluation purposes. We numerically explore the observed variability of TMS responses during a speech mapping experiment performed with a neuronavigation system. We selected four cases with very small perturbations in coil position and orientation. In one case (E) a naming error occurred, while in the other cases (NEA, NEB, NEC) the subject named the images as smoothly as without TMS. A realistic anisotropic head model was constructed of the subject from T1-weighted and diffusion-weighted MRI. The induced electric field distributions were computed, associated to the coil parameters retrieved from the neuronavigation system. Finally, the membrane potentials along relevant white matter fibre tracts, extracted from DTI-based tractography, were computed using a compartmental cable equation. While only minor differences could be noticed between the induced electric field distributions of the four cases, computing the corresponding membrane potentials revealed that different subsets of tracts were activated. A single tract was activated for all coil positions. Another tract was only triggered for case E. NEA induced action potentials in 13 tracts, while NEB stimulated 11 tracts and NEC one. The calculated results are certainly sensitive to the coil specifications, demonstrating the observed variability in this study. However, even though a tract connecting Broca's with Wernicke's area is only triggered for the error case, further research is needed on other study cases and on refining the neural model with synapses and network connections. Case- and subject-specific modelling that includes both electromagnetic fields and neuronal activity enables demonstration of the variability in TMS experiments and can capture the interaction with complex neural networks.

The recent development of a new computational tool for nuclear reactor calculations, based on the coupling between the PARCS neutron transport code and the commercial computational fluid dynamics (CFD) code ANSYS CFX, opens new possibilities in fuel element design, contributing to a better understanding and a better simulation of heat transfer processes and of specific fluid dynamics phenomena such as crossflow.

We give a method to obtain the quasiparticle band structure and renormalized density of states by diagonalizing the interacting-system Green function. This method operates for any self-energy approximation appropriate to strongly correlated systems. Application to CeSi2 and YBa2Cu3O7 is analyzed as a probe of this band calculation method. © 1996 The American Physical Society.

The paper presents the experimental results obtained at Saclay on an HTGR core model and comparisons with analytical results. Two series of horizontal tests have been performed on the shaking table VESUVE: sinusoidal tests and time-history response. Acceleration of graphite blocks, forces on the boundaries, relative displacement of the core and PCRV model, and impact velocity of the blocks on the boundaries were recorded. These tests have shown the strongly non-linear dynamic behaviour of the core. The resonant frequency of the core depends on the level of the excitation. These phenomena have been explained by a computer code, which is a lumped-mass non-linear model. Good correlation between experimental and analytical results was obtained for impact velocities and forces on the boundaries. This comparison has shown that the damping of the core is a critical parameter for the estimation of forces and velocities. Time-history displacement at the level of the PCRV was reproduced on the shaking table. The analytical model was applied to this excitation and good agreement was obtained for forces and velocities. (orig./HP)

The complex structural dynamic behavior of alternators must be well understood in order to ensure their reliable and safe operation. The numerical model is, however, difficult to construct, mainly due to the presence of a high level of uncertainty. The objective of this work is to provide decision-support tools to assess the vibratory levels in operation before restarting the alternator. Based on info-gap theory, a first decision-support tool is proposed: the objective here is to assess the robustness of the dynamical response to the uncertain modal model. Based on real data, the calibration of an info-gap model of uncertainty is also proposed in order to enhance its fidelity to reality. Then, the extended constitutive relation error is used to expand identified mode shapes, which are used to assess the vibratory levels. A robust expansion process is proposed in order to obtain expanded mode shapes that are robust to parametric uncertainties. In the presence of lack of knowledge, the trade-off between fidelity to data and robustness to uncertainties, which expresses that robustness improves as fidelity deteriorates, is emphasized on an industrial structure by using both reduced-order model and surrogate model techniques. (author)

This work introduces a new approach for calculating sensitivity coefficients for generalized neutronic responses to nuclear data uncertainties using continuous-energy Monte Carlo methods. The approach presented in this paper, known as the GEAR-MC method, allows for the calculation of generalized sensitivity coefficients for multiple responses in a single Monte Carlo calculation with no nuclear data perturbations or knowledge of nuclear covariance data. The theory behind the GEAR-MC method is presented here, and proof of principle is demonstrated by using the GEAR-MC method to calculate sensitivity coefficients for responses in several 3D, continuous-energy Monte Carlo applications.

Purpose: To design a versatile, nonhomogeneous insert for the dose verification phantom ArcCHECK ™ (Sun Nuclear Corp., FL) and to demonstrate its usefulness for the verification of dose distributions in inhomogeneous media. As an example, we demonstrate it can be used clinically for routine quality assurance of two volumetric modulated arc therapy (VMAT) systems for lung stereotactic body radiation therapy (SBRT): SmartArc ® (Pinnacle 3 , Philips Radiation Oncology Systems, Fitchburg, WI) and RapidArc ® (Eclipse ™ , Varian Medical Systems, Palo Alto, CA). Methods: The cylindrical detector array ArcCHECK ™ has a retractable homogeneous acrylic insert. In this work, we designed and manufactured a customized heterogeneous insert with densities that simulate soft tissue, lung, bone, and air. The insert offers several possible heterogeneity configurations and multiple locations for point dose measurements. SmartArc ® and RapidArc ® plans for lung SBRT were generated and copied to ArcCHECK ™ for each inhomogeneity configuration. Dose delivery was done on a Varian 2100 ix linac. The evaluation of dose distributions was based on gamma analysis of the diode measurements and point doses measurements at different positions near the inhomogeneities. Results: The insert was successfully manufactured and tested with different measurements of VMAT plans. Dose distributions measured with the homogeneous insert showed gamma passing rates similar to our clinical results (∼99%) for both treatment-planning systems. Using nonhomogeneous inserts decreased the passing rates by up to 3.6% in the examples studied. Overall, SmartArc ® plans showed better gamma passing rates for nonhomogeneous measurements. The discrepancy between calculated and measured point doses was increased up to 6.5% for the nonhomogeneous insert depending on the inhomogeneity configuration and measurement location. SmartArc ® and RapidArc ® plans had similar plan quality but RapidArc ® plans had
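The gamma analysis used above to score agreement between measured and calculated doses can be sketched in a simplified 1D global form (hypothetical criteria of 3% of the reference maximum and 3 mm distance-to-agreement with x in cm; clinical implementations interpolate finely and work in 3D):

```python
import numpy as np

def gamma_index_1d(x, dose_ref, dose_eval, dd=0.03, dta=0.3):
    """1D global gamma index: for each reference point, the minimum over
    evaluated points of sqrt((Δdose / dose_crit)^2 + (Δx / dta)^2).
    dd is the dose criterion as a fraction of the reference maximum,
    dta the distance criterion in the units of x (here cm). Inputs are
    NumPy arrays on a common grid."""
    d_norm = dd * dose_ref.max()
    gam = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        g2 = ((dose_eval - di) / d_norm) ** 2 + ((x - xi) / dta) ** 2
        gam[i] = np.sqrt(g2.min())
    return gam

def passing_rate(gamma):
    """Percentage of points with gamma <= 1 (the usual pass criterion)."""
    return 100.0 * np.mean(gamma <= 1.0)
```

Passing rates like the ~99% homogeneous-insert results quoted above correspond to nearly all points satisfying gamma ≤ 1 under the chosen criteria.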

The flow field in supersonic mixed compression aircraft inlets at angle of attack is calculated. A zonal modeling technique is employed to obtain the solution which divides the flow field into different computational regions. The computational regions consist of a supersonic core flow, boundary layer flows adjacent to both the forebody/centerbody and cowl contours, and flow in the shock wave boundary layer interaction regions. The zonal modeling analysis is described and some computational results are presented. The governing equations for the supersonic core flow form a hyperbolic system of partial differential equations. The equations for the characteristic surfaces and the compatibility equations applicable along these surfaces are derived. The characteristic surfaces are the stream surfaces, which are surfaces composed of streamlines, and the wave surfaces, which are surfaces tangent to a Mach conoid. The compatibility equations are expressed as directional derivatives along streamlines and bicharacteristics, which are the lines of tangency between a wave surface and a Mach conoid.

Eddy covariance and surface renewal measurements were used to estimate evapotranspiration (ET) over a variety of crop fields in the Sacramento-San Joaquin River Delta during the 2016 growing season. The comparison and evaluation of multiple measurement systems and methods for determining ET, however, focused on a single alfalfa site. The eddy covariance systems included two systems for direct measurement of latent heat flux: one using a separate sonic anemometer and an open-path infrared gas analyzer, and another using a combined system (Campbell Scientific IRGASON). For these methods, eddy covariance was used with measurements from the Campbell Scientific CSAT3, the LI-COR 7500a, the Campbell Scientific IRGASON, and an additional R.M. Young sonic anemometer. In addition to those direct measures, the surface renewal approach included several energy balance residual methods in which net radiation, ground heat flux, and sensible heat flux (H) were measured. H was measured using several systems and different methods, including multiple fast-response thermocouple measurements and the temperatures measured by the sonic anemometers. The energy available for ET was then calculated as the residual of the surface energy balance equation. Differences in ET values were analyzed between the eddy covariance and surface renewal methods, using the IRGASON-derived values of ET as the standard for accuracy.
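The energy balance residual method amounts to LE = Rn − G − H; a minimal sketch follows, with illustrative flux values rather than the campaign's data, and a standard temperature-dependent latent heat of vaporization for the conversion to an ET depth rate.

```python
def latent_heat_flux_residual(Rn, G, H):
    """Energy available for ET as the residual of the surface energy
    balance: LE = Rn - G - H, all fluxes in W m^-2."""
    return Rn - G - H

def et_rate_mm_per_hour(LE, T_air_c=20.0):
    """Convert a latent heat flux (W m^-2) to an ET rate (mm h^-1),
    using a common linear approximation for the latent heat of
    vaporization as a function of air temperature (deg C)."""
    lam = (2.501 - 0.002361 * T_air_c) * 1e6  # J kg^-1
    return LE / lam * 3600.0                  # kg m^-2 h^-1 == mm h^-1

# Illustrative midday fluxes: Rn = 500, G = 50, H = 150 W m^-2
LE = latent_heat_flux_residual(500.0, 50.0, 150.0)
```

Any error in the measured H (the quantity the surface renewal systems estimate) propagates directly into LE, which is why the choice of H method matters for the ET comparison.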

The present work is focused on the reconstruction of a neutron spectrum using a multisphere spectrometer, also called a Bonner Sphere System (BSS). To that end, the determination of the detector response curves is necessary; we have therefore obtained the response matrix of a neutron detector by Monte Carlo (MC) simulation with MCNP6, where the use of unstructured mesh geometries is introduced as a novelty. The aim of these curves was to study the theoretical response of a widespread neutron spectrometer exposed to neutron radiation. The neutron detector device used in this work is formed by a multisphere spectrometer (BSS) that uses 6 high-density polyethylene spheres with different diameters. The BSS consists of a set of 0.95 g/cm3 high-density polyethylene spheres. The detector is composed of a lithium iodide (6LiI) cylindrical scintillator crystal, 4 mm x 4 mm in size (LUDLUM Model 42), coupled to a photomultiplier tube. Thermal scattering tables are required to include the polyethylene cross section in the simulation. These data are essential to get correct and accurate results in problems involving neutron thermalization. The currently available literature presents the response matrix calculated with ENDF/B-V cross-section libraries (V. Mares et al., 1993) or with ENDF/B-VI (R. Vega Carrillo et al., 2007). This work uses two novelties to calculate the response matrix: on the one hand, the use of unstructured meshes to simulate the geometry of the detector and the Bonner spheres, and on the other hand, the use of the updated ENDF/B-VII cross-section libraries. A set of simulations has been performed to obtain the detector response matrix. 29 monoenergetic neutron beams between 10 keV and 20 MeV were used as sources for each moderator sphere, up to a total of 174 simulations. Each monoenergetic source was defined with the same diameter as the moderating sphere used in its corresponding simulation, and the spheres were uniformly irradiated from the top of the photomultiplier tube. Some
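The response matrix R maps a binned neutron spectrum to the readings of the individual spheres, c = R·φ. A toy sketch of this folding, plus a naive least-squares unfolding, is below; real BSS unfolding requires regularization or iterative codes such as MAXED or GRAVEL, and the synthetic matrix used in the usage example stands in for the MCNP6-calculated one.

```python
import numpy as np

def fold_spectrum(response_matrix, flux):
    """Expected readings for each sphere: c = R @ phi, where R[i, j] is
    the response of sphere i to energy bin j and phi is the neutron
    fluence per bin."""
    return response_matrix @ flux

def unfold_spectrum(response_matrix, counts):
    """Naive least-squares unfolding. Works only for well-conditioned
    toy matrices; real spectrometry problems are ill-posed and need
    regularized or iterative unfolding."""
    phi, *_ = np.linalg.lstsq(response_matrix, counts, rcond=None)
    return phi
```

With 6 spheres and many more energy bins (29 here), the real problem is underdetermined, which is precisely why the quality of the response matrix and the choice of unfolding algorithm dominate the reconstructed spectrum.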

Drought negatively impacts plant growth and the productivity of crops around the world. Understanding the molecular mechanisms of the drought response is important for the improvement of drought tolerance using molecular techniques. In plants, abscisic acid (ABA) accumulates under osmotic stress conditions caused by drought, and has a key role in stress responses and tolerance. Comprehensive molecular analyses have shown that ABA regulates the expression of many genes under osmotic stress conditions, and the ABA-responsive element (ABRE) is the major cis-element for ABA-responsive gene expression. Transcription factors (TFs) are master regulators of gene expression. ABRE-binding protein (AREB) and ABRE-binding factor (ABF) TFs control gene expression in an ABA-dependent manner. SNF1-related protein kinases 2, group A 2C-type protein phosphatases, and ABA receptors were shown to control the ABA signaling pathway. ABA-independent signaling pathways, such as dehydration-responsive element-binding protein (DREB) TFs and NAC TFs, are also involved in stress responses, including drought, heat and cold. Recent studies have suggested that there are interactions between the major ABA signaling pathway and other signaling factors in stress responses. The important roles of these transcription factors in crosstalk among abiotic stress responses will be discussed. Control of ABA or stress signaling factor expression can improve tolerance to environmental stresses. Recent studies using crops have shown that stress-specific overexpression of TFs improves drought tolerance and grain yield compared with controls in the field.

The authors present the theory of the electron propagator perturbed by an external electric field and show how it can be used to calculate a variety of one-electron linear response properties that are accurate through second order in electron correlation. Some illustrative calculations are discussed.

Conventional adaptive cancellation systems using traditional transverse finite impulse response (FIR) filters, together with least mean square (LMS) adaptive algorithms, well known in active noise control, are slow to adapt to primary source changes. This makes them inappropriate for cancelling rapidly changing noise, including unpredictable noise such as speech and music. Secondly, the cancelling structures require considerable computational processing effort to adapt to primary source and plant changes, particularly for multi-channel systems. This paper describes methods to increase the adaptive speed to primary source changes in large enclosed spaces and outdoor environments. A method is described that increases the response to time varying periodic noise using traditional transverse FIR filters. Here a multi-passband filter, with individual variable adaptive step sizes for each passband is automatically adjusted according to the signal level in each band. This creates a similar adaptive response for all frequencies within the total pass-band, irrespective of amplitude, minimizing the signal distortion and increasing the combined adaptive speed. Unfortunately, there is a limit to the adaptive speed using the above method as classical transverse FIR filters have a finite adaptive speed given by the stability band zero bandwidth. For rapidly changing periodic noise and unpredictable non-stationary noise, a rapid to instantaneous response is required. In this case the on-line adaptive FIR filters are dispensed with and replaced by a time domain solution that gives virtually instantaneous cancellation response (infinite adaptive speed) to primary source changes, and is computationally efficient.
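A baseline transverse-FIR LMS canceller, the structure whose adaptive speed the multi-passband variable-step-size scheme above improves on, can be sketched as follows (the signals, tap count, and step size are illustrative, not from the paper):

```python
import numpy as np

def lms_filter(x, d, n_taps=16, mu=0.01):
    """Transverse-FIR LMS adaptive filter: adapts weights w so that
    y[n] = w . x[n:n-taps] tracks the desired signal d; returns the
    error e = d - y (the residual after cancellation) and the final
    weights. mu is a single global step size (no per-band steps)."""
    w = np.zeros(n_taps)
    e = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        xw = x[n - n_taps + 1:n + 1][::-1]   # most recent samples first
        y = w @ xw                           # filter output
        e[n] = d[n] - y
        w += 2.0 * mu * e[n] * xw            # steepest-descent update
    return e, w

# Toy usage: cancel a 50 Hz sinusoidal reference from a sensor signal
# that contains the same tone with a different gain and phase
t = np.arange(4000) / 1000.0                      # 1 kHz sampling
x = np.sin(2.0 * np.pi * 50.0 * t)                # reference noise
d = 0.8 * np.sin(2.0 * np.pi * 50.0 * t - 0.3)    # noise at the sensor
e, w = lms_filter(x, d)
```

For a stationary tone the residual converges toward zero; the paper's point is that this single-mu structure adapts slowly when the primary source changes, motivating per-passband step sizes and, ultimately, the time-domain scheme with near-instantaneous response.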

This Panel Session consisted of five country reports (India, Indonesia, Maldives, Thailand, and Nepal) and the common issues identified during the Panel discussions relative to seismic events in the Southeast Asia Region. Important issues identified included the needs for: (1) a legal framework upon which to base preparedness and response; (2) coordination between the many organizations involved; (3) early warning systems within and between countries; (4) command and control; (5) access to resources including logistics; (6) strengthening the health infrastructure; (7) professionalizing the field of disaster medicine and management; (8) management of communications and information; (9) management of dead bodies; and (10) mental health of the survivors and health workers.

We present a semi-classical calculation of the nuclear response functions beyond the Thomas-Fermi approximation. We apply our formalism to the spin-isospin responses and show that the surface-peaked ħ corrections considerably decrease the longitudinal/transverse ratio as obtained through hadronic probes.

This paper presents a methodology to calculate wind turbine load sensitivities to turbulence parameters through the use of response surfaces. A response surface is a high-dimensional polynomial surface that can be calibrated to any set of input/output data and then used to generate synthetic data at a low computational cost. Sobol sensitivity indices (SIs) can then be calculated with relative ease using the calibrated response surface. The proposed methodology is demonstrated by calculating the total sensitivity of the maximum blade root bending moment of the WindPACT 5 MW reference model to four turbulence input parameters: a reference mean wind speed, a reference turbulence intensity, the Kaimal length scale, and a novel parameter reflecting the nonstationarity present in the inflow turbulence. The input/output data used to calibrate the response surface were generated for a previous project...
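As a hedged illustration of the final step of this pipeline (computing Sobol total sensitivity indices cheaply from a calibrated surrogate), the sketch below applies the Jansen Monte Carlo estimator to a toy linear "response surface"; the surrogate, its coefficients, and the sample size are illustrative assumptions, not those of the paper:

```python
import random

def sobol_total_indices(f, dim, n=20000, seed=1):
    """Monte Carlo estimate of Sobol total sensitivity indices
    (Jansen estimator) for a cheap surrogate f on [0, 1]^dim."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    fA = [f(x) for x in A]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / n
    totals = []
    for i in range(dim):
        # Resample only coordinate i from B, keep the rest from A
        acc = 0.0
        for a, b in zip(A, B):
            ab = list(a)
            ab[i] = b[i]
            acc += (f(a) - f(ab)) ** 2
        totals.append(acc / (2.0 * n * var))
    return totals

def surface(x):
    # Toy additive surrogate: x0 dominates, x2 is inert
    return 4.0 * x[0] + 1.0 * x[1] + 0.0 * x[2]

st = sobol_total_indices(surface, 3)
```

For this additive toy surface the exact totals are 16/17, 1/17 and 0, so the estimates rank the inputs correctly; the cheapness of each surrogate evaluation is what makes the thousands of required runs affordable.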

A canonical-basis formulation of the time-dependent Hartree-Fock-Bogoliubov (TDHFB) theory is obtained with an approximation that the pair potential is assumed to be diagonal in the time-dependent canonical basis. The canonical-basis formulation significantly reduces the computational cost. We apply the method to linear-response calculations for even-even nuclei. E1 strength distributions for proton-rich Mg isotopes are systematically calculated. The calculation suggests strong Landau damping of giant dipole resonance for drip-line nuclei.

The paper deals with the effect of thermal bonds on heat transmission through a building envelope. It then deals with ways to include thermal bonds in the calculation of heat loss through the building envelope and in the calculation of the energy efficiency of buildings. The treatment of thermal bonds is very important, because it fundamentally influences the energy efficiency of buildings. It is important to realize that the building envelope comprises not only the peripheral surface structures but also thermal bonds in areas where building structures join.

We present a procedure to calculate ensemble averages, thermodynamic derivatives, and coordinate distributions by effective classical potential methods. In particular, we consider the displaced-points path integral (DPPI) method, which yields exact quantal partition functions and ensemble averages for a harmonic potential and approximate quantal ones for general potentials, and we discuss the implementation of the new procedure in two Monte Carlo simulation codes, one that uses uncorrelated samples to calculate absolute free energies, and another that employs Metropolis sampling to calculate relative free energies. The results of the new DPPI method are compared to those from accurate path integral calculations as well as to results of two other effective classical potential schemes for the case of an isolated water molecule. In addition to the partition function, we consider the heat capacity and expectation values of the energy, the potential energy, the bond angle, and the OH distance. We also consider coordinate distributions. The DPPI scheme performs best among the three effective potential schemes considered and achieves very good accuracy for all of the properties considered. A key advantage of the effective potential schemes is that they display much lower statistical sampling variances than those for accurate path integral calculations. The method presented here shows great promise for including quantum effects in calculations on large systems.

...Such measures include damage to the confinement, the velocity and fragment size distributions from what was the confinement, and air blast. In the first phase (advisory) model described in [1], the surface-to-volume ratio and the ignition parameter are calibrated by comparison with experiments using the UK explosive. In order to achieve the second phase (interactive) model, and so calculate the pressure developed and the velocity imparted to the confinement, we need to calculate the spread of the ignition front, the subsequent burn behavior behind that front, and the response of unburned and partially burned explosive to pressurization. A preliminary model to do such calculations is described here.

The effects of the 4f shell of electrons and of the relativity of valence electrons are compared. The effect of the 4f shell (lanthanide contraction) is estimated from numerical Hartree-Fock (HF) calculations of pseudo-atoms corresponding to Hf, Re, Au, Hg, Tl, Pb and Bi without 4f electrons and with atomic numbers reduced by 14. The relativistic effect estimated from numerical Dirac-Hartree-Fock (DHF) calculations of those atoms is comparable in magnitude to that of the 4f shell of electrons. Both are larger for 6s than for 5d or 6p electrons. The various relativistic effects on valence electrons are discussed in detail to determine the proper level of approximation for valence-electron calculations of systems with heavy elements. An effective core potential system has been developed for heavy atoms in which relativistic effects are included in the effective potentials.

Basic issues of the time-dependent density-functional theory are discussed, especially on the real-time calculation of the linear response functions. Some remarks on the derivation of the time-dependent Kohn-Sham equations and on the numerical methods are given.

In rotary machines, the concerns during earthquakes are whether the rotating and stationary parts touch and whether the bearings and seals are damaged. In order to examine these problems, it is necessary to analyze the seismic response of a rotary shaft, and sometimes of a casing system, but the conventional analysis methods are unsatisfactory. Accordingly, in the case of a general shaft system supported on slide bearings and subject to gyroscopic effects, the complex modal method must be used. This calculation method is explained in detail in the book of Lancaster; however, when this method is applied to the seismic response of rotary shafts, the calculation time differs considerably according to the method of final integration. In this study, good results were obtained with a method that does not depend on numerical integration. The equation of motion and its solution, the displacement vector of a foundation, the verification of the calculation program and an example of calculating the seismic response of two coupled rotor shafts are reported. (Kako, I.)

The Metal-Oxide-Semiconductor Field-Effect Transistor (MOSFET, RadFET) is frequently used as a sensor of ionizing radiation in nuclear medicine, diagnostic radiology, radiotherapy quality assurance and in the nuclear and space industries. We focused our investigations on calculating the energy response of a p-type RadFET to low-energy photons in the range from 12 keV to 2 MeV and on understanding the influence of uncertainties in the composition and geometry of the device on the calculated energy response function. All results were normalized to unit air kerma incident on the RadFET for an incident photon energy of 1.1 MeV. The calculations of the energy response characteristics of a RadFET radiation detector were performed via Monte Carlo simulations using the MCNPX code; for a limited number of incident photon energies the FOTELP code was also used for the sake of comparison. The geometry of the RadFET was modeled as a simple stack of appropriate materials. Our goal was to obtain results with statistical uncertainties better than 1% (fulfilled in the MCNPX calculations for all incident energies), which resulted in simulations with 1–2×10⁹ histories.

Simulations of the hydrogen storage capacities of nanoporous carbons require an accurate treatment of the interaction of the hydrogen molecule with the graphite-like surfaces of the carbon pores, which is dominated by dispersion forces. These interactions are described accurately by high-level quantum chemistry methods, like the Coupled Cluster method with single and double excitations and a non-iterative correction for triple excitations (CCSD(T)), but those methods are computationally very expensive for large systems and for massive simulations. Density functional theory (DFT)-based methods that include dispersion interactions at different levels of complexity are less accurate, but computationally less expensive. In order to find DFT methods that include dispersion interactions for calculating the physisorption of H2 on benzene and graphene, with a reasonable compromise between accuracy and computational cost, CCSD(T), the Møller-Plesset second-order perturbation theory method, and several DFT methods have been used to calculate the interaction energy curves of H2 on benzene and graphene. DFT calculations are compared with CCSD(T) calculations in the case of H2 on benzene, and with experimental data in the case of H2 on graphene. Among the DFT methods studied, the B97D, RVV10, and PBE+DCACP methods yield interaction energy curves of H2-benzene in remarkable agreement with the interaction energy curve obtained with the CCSD(T) method. With regard to graphene, the rev-vdW-DF2, PBE-XDM, PBE-D2, and RVV10 methods yield adsorption energies of the lowest level of H2 on graphene very close to the experimental data.

A parametric sensitivity analysis is carried out on GASCON, a radiological impact software package describing the transfer of radionuclides to man following a chronic gas release from a nuclear facility. An effective dose received by age group can thus be calculated for a specific radionuclide and release duration. In this study, we are concerned with 18 output variables, each depending on approximately 50 uncertain input parameters. First, the generation of 1000 Monte-Carlo simulations allows us to calculate correlation coefficients between input parameters and output variables, which give a first overview of the important factors. Response surfaces are then constructed in polynomial form and used to predict system responses at reduced computational cost; these response surfaces are very useful for global sensitivity analysis, where thousands of runs are required. Using the response surfaces, we calculate the total Sobol sensitivity indices by the Monte-Carlo method. We demonstrate the application of this method to one study site and one reference group near the Cadarache nuclear research center (France), for two radionuclides: iodine-129 and uranium-238. It is thus shown that the most influential parameters are all related to the food chain of goat's milk, in decreasing order of importance: the 'effective ingestion' dose coefficient, the goat's-milk ration of the individuals of the reference group, the grass ration of the goat, the dry deposition velocity and the transfer factor to goat's milk.

This paper presents a general method to compute the response of a rigid foundation of arbitrary shape resting on a homogeneous or multilayered elastic soil when subjected to a spatially varying ground motion. The foundation response is calculated from the free-field ground motion and the contact tractions between the foundation and the soil. The spatial variation of ground motion in this study is introduced by a coherence function and the contact tractions are obtained numerically using the Finite Element Method in the process of calculating the dynamic compliance of the foundation. Applications of this method to a massless rigid disc supported on an elastic half space and to that founded on an elastic medium consisting of a layer of constant thickness supported on an elastic half space are described. The numerical results obtained are in very good agreement with analytical solutions published in the literature.

Neuronal response onset latency provides important data on information processing within the central nervous system. In order to enhance the quality of onset latency estimation, we have developed a 'double sliding-window' technique, which combines the advantages of mathematical methods with the reliability of standard statistical processes. This method is based on repetitive series of statistical probes between two virtual time windows. The layout of the significance curve reveals the starting points of changes in neuronal activity in the form of break-points between linear segments. A second-order difference function is applied to determine the position of maximum slope change, which corresponds to the onset of the response. In comparison with Poisson spike-train analysis, the cumulative sum technique and the method of Falzett et al., this 'double sliding-window' technique seems to be a more accurate automated procedure to calculate the response onset latency of a broad range of neuronal response characteristics.
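The described procedure can be sketched roughly as follows; the window length, the t-like statistic, and the mapping from break-point back to onset sample are illustrative assumptions, not the authors' exact formulation:

```python
import statistics

def onset_latency(signal, win=20):
    """Sketch of a 'double sliding-window' onset estimator: slide two
    adjacent windows over the series, compute a t-like statistic
    between them, then place the break-point at the largest change of
    slope (second-order difference) of that significance curve."""
    stat = []
    for t in range(win, len(signal) - win + 1):
        pre = signal[t - win:t]
        post = signal[t:t + win]
        sd = statistics.pstdev(pre + post)
        stat.append(0.0 if sd == 0 else
                    (statistics.mean(post) - statistics.mean(pre)) / sd)
    # Second-order difference of the significance curve (break-points)
    d2 = [stat[i - 1] - 2.0 * stat[i] + stat[i + 1]
          for i in range(1, len(stat) - 1)]
    i_break = max(range(len(d2)), key=lambda i: d2[i])
    t_break = win + i_break + 1
    # The break appears when the post window first reaches the change,
    # i.e. win samples before the onset (a convention of this sketch)
    return t_break + win
```

With a clean step change (baseline 0 switching to level 5 at sample 100) the estimator recovers the onset exactly; noisy spike trains would of course give an approximate break-point.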

The information in seismic response spectra is key to many problems concerned with aseismic structures, and it is also helpful for earthquake disaster relief if it is generated in time when an earthquake happens. Current numerical calculation methods suffer from poor precision, especially in the frequency band near the Nyquist frequency, so we present a set of improved parameters for precision improvement. It is shown that the precision of both the displacement and velocity response spectra is further improved compared to current numerical algorithms. A uniform fitting formula is given for computing these parameters for a damping ratio range of 0.01–0.9, which is quite convenient for practical application.
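As background to the numerical algorithms the abstract discusses, here is a minimal sketch of computing one point of a displacement response spectrum with the standard Newmark average-acceleration scheme (not the paper's improved parameters; the step-acceleration test record and 5% damping are illustrative assumptions):

```python
import math

def displacement_spectrum(accel, dt, periods, zeta=0.05):
    """Peak relative-displacement response of damped SDOF oscillators
    to a ground-acceleration record, integrated with the Newmark
    average-acceleration method (unconditionally stable)."""
    out = []
    for T in periods:
        wn = 2.0 * math.pi / T
        k = wn * wn                  # stiffness per unit mass
        c = 2.0 * zeta * wn          # damping per unit mass
        keff = k + 2.0 * c / dt + 4.0 / dt ** 2
        u = v = 0.0
        a = -accel[0] - c * v - k * u    # initial acceleration
        peak = abs(u)
        for ag in accel[1:]:
            # Effective load for gamma = 1/2, beta = 1/4
            peff = (-ag + (4.0 / dt ** 2) * u + (4.0 / dt) * v + a
                    + c * ((2.0 / dt) * u + v))
            u_new = peff / keff
            a_new = (4.0 / dt ** 2) * (u_new - u) - (4.0 / dt) * v - a
            v = v + 0.5 * dt * (a + a_new)
            u, a = u_new, a_new
            peak = max(peak, abs(u))
        out.append(peak)
    return out

# Step ground acceleration: a damped SDOF overshoots its static offset
res = displacement_spectrum([1.0] * 2001, 0.001, [0.5])
```

For a step input the classical result is a peak of (1 + e^(−ζπ/√(1−ζ²))) times the static displacement, about 1.85× at 5% damping, which the integration reproduces; precision near the Nyquist frequency (short periods relative to dt) is exactly where such schemes degrade.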

The analysis of (e,e'n) experiments at the Darmstadt superconducting electron linear accelerator S-DALINAC required the calculation of neutron response functions for the NE213 liquid scintillation detectors used. In an open geometry, these response functions can be obtained using the Monte Carlo codes NRESP7 and NEFF7. However, for more complex geometries, an extended version of the Monte Carlo code MCNP exists. This extended version of the MCNP code was improved upon by adding individual light-output functions for charged particles. In addition, more than one volume can be defined as a scintillator, thus allowing the simultaneous calculation of the response for multiple detector setups. With the implementation of ¹²C(n,n'3α) reactions, all relevant reactions for neutron energies E_n < 20 MeV are now taken into consideration. The results of these calculations were compared to experimental data using monoenergetic neutrons in an open geometry and a ²⁵²Cf neutron source in th...

Linear scaling density matrix perturbation theory [A. M. N. Niklasson and M. Challacombe, Phys. Rev. Lett. 92, 193001 (2004)] is extended to basis-set-dependent quantum response calculations for a nonorthogonal basis set representation. The generalization is achieved by a perturbation-dependent congruence transform, derived from the factorization of the inverse overlap matrix, which transforms the generalized eigenvalue problem to an orthogonal, standard form. With this orthogonalization transform the basis-set-dependent perturbation in the overlap matrix is included in the orthogonalized Hamiltonian, which is expanded in orders of the perturbation. In this way density matrix perturbation theory developed for an orthogonal representation can be applied also to basis-set-dependent response calculations. The method offers an alternative to the previous solution of the basis-set-dependent response problem, based on a nonorthogonal generalization of the density matrix perturbation theory, where the calculations are performed within a purely nonorthogonal setting [A. M. N. Niklasson et al., J. Chem. Phys. 123, 44107 (2005)].

The Creole goat is a local breed used for meat production in Guadeloupe (French West Indies). As in other tropical countries, improvement of parasite resistance is needed. In this study, we compared predicted selection responses for alternative breeding programs with or without parasite resistance...

...the intensive use of distributed generation and V2G. The main focus is the comparison of different EV management approaches in the day-ahead energy resources management, namely uncontrolled charging, smart charging, V2G and Demand Response (DR) programs in the V2G approach. Three different DR programs...

The present study aimed to explore the impact of the combination of two pedagogical models, Sport Education and Teaching for Personal and Social Responsibility, for learners with disabilities experiencing a contactless kickboxing learning unit. Twelve secondary education students agreed to participate. Five had disabilities (intellectual and…

The three one-dimensional conservation equations of mass, momentum and energy are solved by a stable finite difference scheme which allows the time step to be varied in response to accuracy requirements. Consideration of numerical stability is not necessary. Slip between the phases is allowed and descriptions of complex hydraulic components can be added into specially provided user routines. Intrinsic choking using any of the nine slip models is possible. A pipe or fuel model and detailed surface heat transfer are included. (author)

First-principles NaI and BGO detector response-function calculations made with the MCNP code are compared to measurements. Excellent agreement is achieved for the experiments analyzed. Such a calculational methodology can be used to achieve a better understanding of the physics of detector response and to maximize the information content available from measured data.

Highlights: ► Both the slip and separation of the reactor base reduce with increase in embedment. ► The slip and separation become insignificant beyond 1/4 and 1/2 embedment, respectively. ► The stresses in the reactor reduce significantly up to 1/4 embedment. ► The stress reduction with embedment is more pronounced in the case of tensile stresses. ► The modeling of the interface is important beyond 1/8 embedment, as stresses are underestimated otherwise. - Abstract: The seismic response of a nuclear reactor containment building considering the effects of embedment, slip and separation at the soil–structure interface requires modeling of the soil, structure and interface altogether. Slip and separation at the interface cause stress redistribution in the soil and the structure around the interface. The embedment changes the dynamic characteristics of the soil–structure system. Consideration of these aspects allows capturing the realistic response of the structure; this has been a research gap and is addressed here, with each aspect treated individually as well as together. Finite element analysis has been carried out in the time domain to tackle this highly nonlinear problem. The study draws important conclusions useful for the design of nuclear reactor containment buildings.

We use the natural orbitals to define an independent particle system, from which the exact one-particle density matrix can be obtained with an ensemble of degenerate determinantal ground states. Also defining explicit phases for the orbitals, and admitting functionals that are dependent on those phases, time-dependent equations for the orbitals and occupation numbers are obtained from an action principle. The wrong polarizability and lack of double excitations of straightforward adiabatic time-dependent density matrix functional theory are then corrected, and the important symmetry χ(ω)=χ*(−ω), lost in previous ad hoc improvements, is restored. The extension of the response calculations beyond the occupied-virtual pairs, which are the only ones admitted in time-dependent density functional theory, leads to greatly improved response properties.

Although an impact noise level is objectively evaluated the same according to current standards, a lightweight floor structure is often subjectively judged more annoying than a heavy homogeneous structure. The hypothesis of the present investigation is that the subjective judgment of impact noise is more annoying if the source position can be localized; lightweight structures have a more localized radiation than heavy structures. For the heavy structures the reverberant vibration field is dominant, therefore having a distributed radiation. A listening test is used to assess the subjective annoyance, using simulated binaural room impulse responses, with sources being a moving point source or a non-moving surface source, and rooms being a room with a reverberation time of 0.5 s or an anechoic room. The paper concludes that no strong effect of the source localization on the annoyance can...


Realist methods are increasingly being used to investigate complex public health problems. Despite the extensive evidence base establishing the built environment as a determinant of health, there is limited knowledge about how and why land-use planning systems take on health concerns. Further, the body of research related to the wider determinants of health suffers from not using political science knowledge to understand how to influence health policy development and systems. This 4-year funded programme of research investigates how the land-use planning system in New South Wales, Australia, incorporates health and health equity at multiple levels. The programme uses multiple qualitative methods to develop up to 15 case studies of different activities of the New South Wales land-use planning system. Comparison cases from other jurisdictions will be included where possible and useful. Data collection includes publicly available documentation and purposively sampled stakeholder interviews and focus groups of up to 100 participants across the cases. The units of analysis in each case are institutional structures (rules and mandates constraining and enabling actors), actors (the stakeholders, organisations and networks involved, including health-focused agencies), and ideas (policy content, information, and framing). Data analysis will focus on and develop propositions concerning the mechanisms and conditions within and across each case leading to inclusion or non-inclusion of health. Data will be refined using additional political science and sociological theory. Qualitative comparative analysis will compare cases to develop policy-relevant propositions about the necessary and sufficient conditions needed to include health issues. Ethics approval has been granted by the Sydney University Human Research Ethics Committee (2014/802 and 2015/178). Given the nature of this research, we will incorporate stakeholders, often as collaborators, throughout. We outline our research translation...

To determine the dose rate in a gamma radiation field, based on measurements with a semiconductor detector, it is necessary to know how the detector affects the field. This work aims to describe this effect with Monte Carlo simulations and calculations, that is, to identify the detector response function. This is done for a germanium gamma detector. The detector is normally used in the in-situ measurements that are carried out regularly at the department. After the response function is determined, it is used to reconstruct a spectrum from an in-situ measurement, a so-called unfolding. This is done to be able to calculate the fluence rate and dose rate directly from a measured (and unfolded) spectrum. The Monte Carlo code used in this work is EGS4, developed mainly at the Stanford Linear Accelerator Center. It is a widely used code package for simulating particle transport. The results of this work indicate that the method could be used as-is, since its accuracy compares to that of other methods already in use to measure dose rate. Bearing in mind that this method provides the nuclide-specific dose, it is useful in radiation protection, since knowing the relations between different nuclides and how they change is very important when estimating risks.
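In its simplest, idealized form, unfolding with a known response function amounts to solving a linear system relating the true spectrum to the measured pulse-height spectrum. A toy sketch with an invented 3-channel response matrix follows (real unfolding of in-situ spectra is ill-conditioned and requires regularized methods, which this does not show):

```python
def solve(R, m):
    """Gaussian elimination with partial pivoting for a small linear
    system R x = m (illustrative only; realistic spectrum unfolding
    needs regularization to control noise amplification)."""
    n = len(R)
    A = [row[:] + [m[i]] for i, row in enumerate(R)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c]
                              for c in range(r + 1, n))) / A[r][r]
    return x

# Invented toy response matrix: full-energy peaks on the diagonal,
# partial-energy deposition spilling into other channels
R = [[0.6, 0.0, 0.0],
     [0.2, 0.7, 0.0],
     [0.1, 0.1, 0.8]]
true_flux = [100.0, 50.0, 20.0]
measured = [sum(R[i][j] * true_flux[j] for j in range(3)) for i in range(3)]
flux = solve(R, measured)       # recovers true_flux for this clean case
```

The forward product `R @ flux` is what the Monte Carlo simulation predicts; unfolding inverts that relation to get fluence rate, and then dose rate, from the measured spectrum.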

Global deforestation and forest degradation rates have a significant impact on the accumulation of greenhouse gases (GHGs) in the atmosphere. The Food and Agriculture Organization (FAO) estimated that during the 1990s 16.1 million hectares per year were affected by deforestation, most of them in the tropics. The Intergovernmental Panel on Climate Change (IPCC) calculated that, for the same period, the contribution of land-use changes to GHG accumulation in the atmosphere was 1.6±0.8 giga (1 G = 10⁹) tonnes of carbon per year, a quantity that corresponds to 25% of the total annual global emissions of GHGs. The United Nations Framework Convention on Climate Change (UNFCCC), in recognising climate change as a serious threat, urged countries to take up measures to enhance and conserve ecosystems such as forests that act as reservoirs and sinks of GHGs. The Kyoto Protocol (KP), adopted in 1997, complements the UNFCCC by providing an enforceable agreement with quantitative targets for reducing GHG emissions. For fulfilling their emission-limitation commitments under the KP, industrialized countries (listed in the KP's Annex I) can use land-based activities, such as reducing deforestation, establishing new forests (afforestation and reforestation) and other vegetation types, and managing agricultural and forest lands in a way that the 'carbon sink' is maximized. Annex I countries may also claim credit for carbon sequestration in developing countries by afforestation and reforestation (AR) through the Clean Development Mechanism (CDM), one of the 'Kyoto Mechanisms' that allow countries to achieve reductions where it is economically efficient to do so. For the period 2008-2012, forestry activities under the CDM have been restricted to afforestation and reforestation on areas that were not forested in 1990. In addition, CDM projects must lead to emission reductions or net carbon uptake additional to what would have occurred without the CDM funding.

The most widely used functional response in describing predator-prey relationships is the Holling type II functional response, where per capita predation is a smooth, increasing, and saturating function of prey density. Beddington and DeAngelis modified the Holling type II response to include interference of predators that increases with predator density. Here we introduce a predator-interference term into a Holling type I functional response. We explain the ecological rationale for the response and note that the phase plane configuration of the predator and prey isoclines differs greatly from that of the Beddington-DeAngelis response; for example, in having three possible interior equilibria rather than one. In fact, this new functional response seems to be quite unique. We used analytical and numerical methods to show that the resulting system shows a much richer dynamical behavior than the Beddington-DeAngelis response, or other typically used functional responses. For example, cyclic-fold, saddle-fold, homoclinic saddle connection, and multiple crossing bifurcations can all occur. We then use a smooth approximation to the Holling type I functional response with predator mutual interference to show that these dynamical properties do not result from the lack of smoothness, but rather from subtle differences in the functional responses. © 2011 Springer Science+Business Media, LLC.
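The three functional responses compared above can be written down directly; the parameter values below are illustrative, and the interference form of the Holling type I response is a simplified stand-in for the authors' exact formulation:

```python
def holling2(N, a=1.0, h=0.5):
    """Holling type II: smooth, saturating per-capita predation rate
    (attack rate a, handling time h); saturates at 1/h."""
    return a * N / (1.0 + a * h * N)

def beddington_deangelis(N, P, a=1.0, h=0.5, c=0.3):
    """Holling II modified with a predator-interference term c*P in
    the denominator, so predation drops as predator density P rises."""
    return a * N / (1.0 + a * h * N + c * P)

def holling1_interference(N, P, a=1.0, Nmax=2.0, c=0.3):
    """Piecewise-linear Holling type I (linear up to a prey density
    Nmax, constant beyond) with predator interference dividing the
    rate; the non-smooth kink at Nmax is what distinguishes it from
    the smooth type II forms above. Parameterization is illustrative."""
    return min(a * N, a * Nmax) / (1.0 + c * P)
```

The kink in `holling1_interference` at `N = Nmax` is the non-smoothness the authors later remove with a smooth approximation, showing the richer dynamics persist.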

The basis of NDP is the irradiation of a sample with a thermal or cold neutron beam and the subsequent release of charged particles due to neutron-induced exoergic charged-particle reactions. Neutrons interact with the nuclei of elements and release mono-energetic charged particles, e.g. alpha particles or protons, and recoil atoms. A depth profile of the analyzed element can be obtained by making a linear transformation of the measured energy spectrum using the stopping power of the sample material. A few micrometers of the material can be analyzed nondestructively, and on the order of 10 nm depth resolution can be obtained, depending on the material type, with the NDP method. In the NDP method, one of the first steps of the analytical process is a channel-energy calibration. This calibration is normally made with the experimental measurement of a NIST Standard Reference Material sample (SRM-93a). In this study, several Monte Carlo (MC) codes were tried to calculate the Si detector response function when this detector detects the energetic charged particles emitted from an analytical sample. In addition, these MC codes were also tried to calculate the depth distributions of some light elements (¹⁰B, ³He, ⁶Li, etc.) in SRM-93a and SRM-2137 samples. These calculated profiles were compared with the experimental profiles and SIMS profiles. In this study, some popular MC neutron transport codes are tried and tested to calculate the detector response function in the NDP method. The simulations were modeled based on the real CN-NDP system, which is part of the Cold Neutron Activation Station (CONAS) at HANARO (KAERI). The MC simulations are very successful at predicting the alpha peaks in the measured energy spectrum: the net area difference between the measured and predicted alpha peaks is less than 1%. A possible explanation for the remaining discrepancies might be the use of inaccurate cross-section data in the MC codes for the transport of low-energy lithium atoms inside the silicon substrate.

A general method is presented for the efficient elimination of response parameters in molecular property calculations for variational and nonvariational energies. For variational energies, Wigner's 2n+1 rule is obtained as a special case of the more general k(2n+1) rule, which states that for a subset of k perturbations within a total set of z ≥ k perturbations, response parameters may be eliminated according to the 2n+1 rule (normally applied to the full set of perturbations). Nonvariational energies may be treated by introducing Lagrange multipliers that satisfy the stronger 2n+2 rule for the k perturbations, while the wave-function parameters still satisfy the 2n+1 rule for the k perturbations. The corresponding rule for nonvariational energies is referred to as the k(2n+1,2n+2) rule. For k=z, the well-known 2n+2 rule for the multipliers is reproduced, while the wave-function parameters satisfy the 2n+1 rule. The application of the k(2n+1) and k(2n+1,2n+2) rules minimizes the total number of response equations to be solved when the molecular property contains k extensive perturbations (e.g., geometrical derivatives) and z-k intensive perturbations (e.g., electric fields).

In this paper, a formula derived from the wavelet dilation equation is presented as a means to calculate scaling function coefficient values for arbitrary waveforms. The performance of this formula is assessed by analyzing the scaling functions of multiple Daubechies wavelets. With the goal of developing new discrete wavelet families that possess the characteristics of a specific system, the formula is applied to analytical and experimental response data. The relationship between the number of coefficients and their ability to successfully capture the features of the signal is studied. Further, a technique is developed for determining the requisite number of coefficients when applying the formula. This formula may serve as the foundation for the development of new families of discrete wavelets which can be based on the nominal characteristics of a given system for use in signal processing and model discretization applications.
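The specific coefficient-calculation formula of the paper is not reproduced here, but the dilation-equation constraints that any valid set of scaling coefficients must satisfy can be checked numerically. The sketch below verifies them for the known Daubechies-2 (db2) coefficients:

```python
import math

# Daubechies-2 (db2) scaling coefficients, in the normalization
# sum(h) = sqrt(2) used by the dilation equation
#   phi(x) = sqrt(2) * sum_k h[k] * phi(2x - k)
s3 = math.sqrt(3.0)
h = [(1 + s3) / (4 * math.sqrt(2)),
     (3 + s3) / (4 * math.sqrt(2)),
     (3 - s3) / (4 * math.sqrt(2)),
     (1 - s3) / (4 * math.sqrt(2))]

# Admissibility: the coefficients must sum to sqrt(2)
assert abs(sum(h) - math.sqrt(2)) < 1e-12

def shifted_inner(h, m):
    """Inner product of the coefficient vector with its 2m-shifted copy."""
    return sum(h[k] * h[k + 2 * m]
               for k in range(len(h)) if 0 <= k + 2 * m < len(h))

# Shift-orthonormality: sum_k h[k] * h[k + 2m] = delta_{m,0}
assert abs(shifted_inner(h, 0) - 1.0) < 1e-12
assert abs(shifted_inner(h, 1)) < 1e-12
```

The same two checks (admissibility and shift-orthonormality) apply to candidate coefficient sets derived from analytical or experimental response data.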

Background: For nuclear power plant main components in the design phase, anti-seismic capability is usually evaluated by response spectrum analysis or time history analysis. Purpose: This paper attempts to quantify the non-linear influence of gaps. Methods: Using ANSYS FEM software, the seismic response of the CRDM was obtained with an improved spectrum analysis and compared with time history analysis. Results: The bending moments and shear forces on each section of the CRDM housing are presented in this paper. Conclusions: The results show that the improved spectrum analysis captures the structural dynamic characteristics and that its results are consistent with the time history analysis, so it can provide guidance for subsequent equipment design. (authors)
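For context, classical response spectrum analysis combines the peak responses of the individual modes rather than their time histories; the square-root-of-sum-of-squares (SRSS) rule is a common combination. A minimal sketch (the modal peak values are illustrative, not results from this paper):

```python
import math

def srss(modal_peaks):
    """Combine peak modal responses by square-root-of-sum-of-squares."""
    return math.sqrt(sum(r * r for r in modal_peaks))

# Illustrative peak bending moments (kN*m) from three modes
moments = [120.0, 50.0, 30.0]
print(round(srss(moments), 2))  # prints 133.42
```

SRSS assumes well-separated modal frequencies; closely spaced modes would call for a complete-quadratic-combination (CQC) rule instead.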

Based on the influence functional formalism, we have derived a nonperturbative equation of motion for a reduced system coupled to a harmonic bath with colored noise in which the system-bath coupling operator does not necessarily commute with the system Hamiltonian. The resultant expression coincides with the time-convolutionless quantum master equation derived from the second-order perturbative approximation, which is also equivalent to a generalized Redfield equation. This agreement occurs because, in the nonperturbative case, the relaxation operators arise from the higher-order system-bath interaction that can be incorporated into the reduced density matrix as the influence operator; while the second-order interaction remains as a relaxation operator in the equation of motion. While the equation describes the exact dynamics of the density matrix beyond weak system-bath interactions, it does not have the capability to calculate nonlinear response functions appropriately. This is because the equation cannot describe memory effects which straddle the external system interactions due to the reduced description of the bath. To illustrate this point, we have calculated the third-order two-dimensional (2D) spectra for a two-level system from the present approach and the hierarchically coupled equations approach that can handle quantal system-bath coherence thanks to its hierarchical formalism. The numerical demonstration clearly indicates the lack of the system-bath correlation in the present formalism as fast dephasing profiles of the 2D spectra.

The Guide to the Expression of Uncertainty in Measurement (usually referred to as the GUM) provides the basic framework for evaluating uncertainty in measurement. The GUM however does not always provide clearly identifiable procedures suitable for medical laboratory applications, particularly when internal quality control (IQC) is used to derive most of the uncertainty estimates. The GUM modelling approach requires advanced mathematical skills for many of its procedures, but Monte Carlo simulation (MCS) can be used as an alternative for many medical laboratory applications. In particular, calculations for determining how uncertainties in the input quantities to a functional relationship propagate through to the output can be accomplished using a readily available spreadsheet such as Microsoft Excel. The MCS procedure uses algorithmically generated pseudo-random numbers which are then forced to follow a prescribed probability distribution. When IQC data provide the uncertainty estimates the normal (Gaussian) distribution is generally considered appropriate, but MCS is by no means restricted to this particular case. With input variations simulated by random numbers, the functional relationship then provides the corresponding variations in the output in a manner which also provides its probability distribution. The MCS procedure thus provides output uncertainty estimates without the need for the differential equations associated with GUM modelling. The aim of this article is to demonstrate the ease with which Microsoft Excel (or a similar spreadsheet) can be used to provide an uncertainty estimate for measurands derived through a functional relationship. In addition, we also consider the relatively common situation where an empirically derived formula includes one or more 'constants', each of which has an empirically derived numerical value. Such empirically derived 'constants' must also have associated uncertainties which propagate through the functional relationship
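The spreadsheet-based MCS procedure described above can be sketched directly in a few lines; the functional relationship below (a ratio-based measurand y = a·x1/x2 with an empirically derived 'constant' a), and all of the means and SDs, are illustrative assumptions, not values from the article:

```python
import random
import statistics

random.seed(42)  # reproducible pseudo-random draws

# Hypothetical functional relationship: y = a * x1 / x2, where a is an
# empirically derived 'constant' carrying its own uncertainty.
def measurand(a, x1, x2):
    return a * x1 / x2

N = 100_000
ys = []
for _ in range(N):
    # Each input is drawn from a normal distribution (mean, SD), as is
    # usual when IQC data supply the uncertainty estimates.
    a  = random.gauss(2.0, 0.02)
    x1 = random.gauss(5.0, 0.10)
    x2 = random.gauss(4.0, 0.08)
    ys.append(measurand(a, x1, x2))

mean_y = statistics.fmean(ys)
sd_y = statistics.stdev(ys)  # standard uncertainty of the output
print(f"y = {mean_y:.3f} +/- {sd_y:.3f}")
```

The simulated SD agrees with first-order GUM propagation for this relationship (relative uncertainties add in quadrature), but no differentiation was needed to obtain it, which is the practical advantage of MCS.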

Evaluation of silicon carbide (SiC) semiconductor detectors for use in power monitoring is of significant interest because of their distinct advantages, including small size, small mass, and their inactivity both chemically and neutronically. The main focus of this paper is evaluating the predicted response of a SiC detector placed in a 17 x 17 Westinghouse PWR assembly, using the PENTRAN code system for the 3-D deterministic adjoint transport computations. Adjoint transport results indicated maximum adjoint values of 1, 0.507, and 0.308 for the thermal, epithermal, and fast neutron energy groups, respectively. Within a radial distance of 6.08 cm from the SiC detector, local fuel pins contribute 75.33% of the thermal-group response; the same radius accounts for 35.85% of the epithermal-group response and, similarly, 21.58% of the fast-group response. This means that for neutrons the effective monitoring range of the SiC detector is on the order of five fuel pins from the detector; pins outside this range in the fuel lattice are minimally 'seen' by the SiC detector. (authors)

Accuracy of measurement is a cornerstone of research in order to make robust conclusions about the research hypothesis. To examine whether the number of items (questions) and the number of consumption responses (the coding used to measure the frequency of consumption) included in nutritional assessment tools influence their repeatability. During 2009, 400 participants (250 from Greece, mean age 37 +/- 13 years, 34% males, and 150 from Spain, mean age 39 +/- 17 years, 41% males) completed a diet index with 11 items and binary (yes/no) responses, a diet index with 11 items and 6-scale responses, and 36-item and 76-item food frequency questionnaires (FFQs) with 6-scale responses. The participants completed these tools twice, with 15 days between the two administrations of the tools. The Spearman-Brown coefficient (r(sb)), Kendall's tau coefficients, and the Bland-Altman method were applied to answer the research hypothesis. The highest repeatability coefficient was observed for the diet index with 11 items and binary (yes/no) responses (r(sb) = 0.948, p tools (p > .23), whereas these three tools had significantly higher repeatability coefficients than the 76-item FFQ (p = .002). Subgroup analyses by sex, education, smoking, and clinical status confirmed these results. Repeatability was found for all food frequency assessment tools used, irrespective of the number of items or the number of responses included.

. The absorption spectrum can in this formulation be seen as a matrix function of the characteristic VCC Jacobian response matrix. The asymmetric matrix version of the Lanczos method is used to generate a tridiagonal representation of the VCC response Jacobian. Solving the complex response equations...... in the relevant Lanczos space provides a method for calculating the VCC damped response functions and thereby subsequently the absorption spectra. The convergence behaviour of the algorithm is discussed theoretically and tested for different levels of completeness of the VCC expansion. Comparison is made...... with results from the recently reported [P. Seidler, M. B. Hansen, W. Györffy, D. Toffoli, and O. Christiansen, J. Chem. Phys. 132, 164105 (2010)] vibrational configuration interaction damped response function calculated using a symmetric Lanczos algorithm. Calculations of IR spectra of oxazole, cyclopropene...

We present a theoretical method for calculating small-signal modulation responses and noise spectra of active Fabry-Perot semiconductor waveguides with external light injection. Small-signal responses due to either a modulation of the pump current or due to an optical amplitude or phase modulation...

/French fry mix (n=16) had significant differences (Pfast food type with the largest difference between the two methods. Significance: Recipe calculation is a cost-effective alternative to chemical analysis in dietary assessment and nutrient labeling. But recipe...... and chemical analysis of fast food based on data from http://frida.fooddata.dk. Materials and methods: New fast food data in http://frida.fooddata.dk was based on 135 samples of ready to eat fast foods as burgers and sandwiches collected from fast food outlets, separated into their recipe components which were...... weighed. Typical components were bread, French fries, vegetables, meat, and dressings. The fast foods were analyzed and the content of energy, protein, saturated fat, iron, thiamin, potassium and sodium were compared to recipe calculation. Wilcoxon Signed Rank test, Spearman correlation coefficients...

HBsAg vaccine formulation, Posintro™-HBsAg, was compared to two commercial hepatitis B vaccines containing aluminium or monophosphoryl lipid A (MPL) and to the two adjuvant systems MF59 and QS21, with respect to their efficiency in priming both cellular and humoral immune responses. The Posintro™-HBsAg induced...... of delayed type hypersensitivity (DTH) reaction and CD4(+) T-cell proliferation. In addition, Posintro™-HBsAg was the only vaccine tested that also induced a strong cytotoxic T lymphocyte (CTL) response, with high levels of antigen-specific CD8 T-cells secreting IFN-gamma mediating cytolytic activity...

Background: Accuracy of a measurement is a cornerstone in research in order to make robust conclusions about the research hypothesis. Objective: To examine whether the number of items (questions) and the number of responses of consumption included in nutritional assessment tools influence their repeatability. Methods: During 2009, 400 participants (250 from Greece, 37±13 yrs, 34% males and 150 participants from Spain, 39±17 yrs, 41% males) completed a diet index with 11-items a...

Improvements of gamma-ray transport calculations in Sn codes aim at taking into account the bound-electron effect in Compton (incoherent) scattering, coherent (Rayleigh) scattering, and secondary sources of bremsstrahlung and fluorescence. A computation scheme was developed to account for these phenomena by modifying the angular and energy transfer matrices, with no modification of the transport code itself. Incoherent and coherent scattering as well as the fluorescence sources can be treated rigorously through the transfer-matrix change. For bremsstrahlung sources, this is possible if the path of the charged particles (electrons and positrons) through matter can be neglected, which holds for the energy range of interest here (below 10 MeV). These improvements have been carried over to the kernel attenuation codes through the calculation of new buildup factors. The gamma-ray buildup factors have been computed for 25 natural elements up to 30 mean free paths in the energy range between 15 keV and 10 MeV.
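Kernel attenuation codes of the kind mentioned combine exponential attenuation with a buildup factor B(E, μr) that accounts for scattered photons. A minimal point-kernel sketch (the source strength, attenuation coefficient, and constant buildup value below are illustrative assumptions, not data from this work):

```python
import math

def point_kernel_flux(S, mu, r, buildup=1.0):
    """Point-kernel flux with a buildup factor B:
        phi = B * S * exp(-mu * r) / (4 * pi * r**2)
    With B = 1 this reduces to the uncollided flux."""
    return buildup * S * math.exp(-mu * r) / (4.0 * math.pi * r * r)

S = 1.0e6    # photons/s, illustrative isotropic point source
mu = 0.2     # 1/cm, illustrative total attenuation coefficient
r = 10.0     # cm, i.e. 2 mean free paths

uncollided = point_kernel_flux(S, mu, r)          # B = 1
with_buildup = point_kernel_flux(S, mu, r, 3.0)   # illustrative B = 3

# The buildup factor simply scales the uncollided result
assert abs(with_buildup / uncollided - 3.0) < 1e-12
```

In practice B is tabulated (as in the 25-element tables described here) or fitted, e.g. with geometric-progression formulas, as a function of energy and depth in mean free paths.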

To design a versatile, nonhomogeneous insert for the dose verification phantom ArcCHECK(™) (Sun Nuclear Corp., FL) and to demonstrate its usefulness for the verification of dose distributions in inhomogeneous media. As an example, we demonstrate that it can be used clinically for routine quality assurance of two volumetric modulated arc therapy (VMAT) systems for lung stereotactic body radiation therapy (SBRT): SmartArc(®) (Pinnacle(3), Philips Radiation Oncology Systems, Fitchburg, WI) and RapidArc(®) (Eclipse(™), Varian Medical Systems, Palo Alto, CA). The cylindrical detector array ArcCHECK(™) has a retractable homogeneous acrylic insert. In this work, we designed and manufactured a customized heterogeneous insert with densities that simulate soft tissue, lung, bone, and air. The insert offers several possible heterogeneity configurations and multiple locations for point dose measurements. SmartArc(®) and RapidArc(®) plans for lung SBRT were generated and copied to ArcCHECK(™) for each inhomogeneity configuration. Dose delivery was done on a Varian 2100 ix linac. The evaluation of dose distributions was based on gamma analysis of the diode measurements and point dose measurements at different positions near the inhomogeneities. The insert was successfully manufactured and tested with different measurements of VMAT plans. Dose distributions measured with the homogeneous insert showed gamma passing rates similar to our clinical results (∼99%) for both treatment-planning systems. Using nonhomogeneous inserts decreased the passing rates by up to 3.6% in the examples studied. Overall, SmartArc(®) plans showed better gamma passing rates for nonhomogeneous measurements. The discrepancy between calculated and measured point doses increased by up to 6.5% for the nonhomogeneous insert, depending on the inhomogeneity configuration and measurement location. SmartArc(®) and RapidArc(®) plans had similar plan quality but RapidArc(®) plans had significantly
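The gamma analysis used for such comparisons scores each measured point against the calculated distribution with combined dose-difference and distance-to-agreement criteria. A minimal 1-D sketch (clinical gamma analysis is 3-D with finer search interpolation; the 3%/3 mm criteria and the profiles below are illustrative):

```python
import math

def gamma_index(x_m, d_m, xs_c, ds_c, dose_crit, dta_crit):
    """1-D gamma value for one measured point (x_m, d_m) against a
    calculated profile sampled at points (xs_c, ds_c)."""
    return min(
        math.sqrt(((x_m - x_c) / dta_crit) ** 2 +
                  ((d_m - d_c) / dose_crit) ** 2)
        for x_c, d_c in zip(xs_c, ds_c)
    )

# Illustrative calculated and measured dose profiles (positions in mm)
xs = [float(i) for i in range(11)]
calc = [100.0 - 2.0 * x for x in xs]
meas = [d + 1.0 for d in calc]  # uniform 1-unit offset on a ~100 dose scale

# 3%/3 mm criteria on a 100-unit dose scale
gammas = [gamma_index(x, d, xs, calc, dose_crit=3.0, dta_crit=3.0)
          for x, d in zip(xs, meas)]
passing = sum(g <= 1.0 for g in gammas) / len(gammas)
print(f"passing rate: {passing:.0%}")  # prints passing rate: 100%
```

A point passes when its gamma value is at most 1, i.e. some nearby calculated point agrees within the combined dose and distance tolerances.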

%). Correlations ranged from 0.49 for iron to 0.75 for energy. Bland-Altman plots showed larger differences for higher contents for thiamin and potassium. Results depended on the type of fast food. For burgers (n=36) there was no significant difference for any of the nutrients between the two methods. Meat....../French fry mix (n=16) had significant differences (Pfast food type with the largest difference between the two methods. Significance: Recipe calculation is a cost-effective alternative to chemical analysis in dietary assessment and nutrient labeling. But recipe...... and chemical analysis of fast food based on data from http://frida.fooddata.dk. Materials and methods: New fast food data in http://frida.fooddata.dk was based on 135 samples of ready to eat fast foods as burgers and sandwiches collected from fast food outlets, separated into their recipe components which were...

The dehydrogenation enthalpies of Ca(AlH4)2, CaAlH5, and CaH2+6LiBH4 have been calculated using density functional theory calculations at the generalized gradient approximation level. Harmonic phonon zero point energy (ZPE) corrections have been included using Parlinski’s direct method.

The energy response of plastic scintillators (Eljen Technology EJ-204) to polarized soft gamma-ray photons below 100 keV has been studied, primarily for the balloon-borne polarimeter, PoGOLite. The response calculation includes quenching effects due to low-energy recoil electrons and the position dependence of the light collection efficiency in a 20 cm long scintillator rod. The broadening of the pulse-height spectrum, presumably caused by light transportation processes inside the scintillator, as well as the generation and multiplication of photoelectrons in the photomultiplier tube, were studied experimentally and have also been taken into account. A Monte Carlo simulation based on the Geant4 toolkit was used to model photon interactions in the scintillators. When using the polarized Compton/Rayleigh scattering processes previously corrected by the authors, scintillator spectra and angular distributions of scattered polarized photons could clearly be reproduced, in agreement with the results obtained at a synchrotron beam test conducted at the KEK Photon Factory. Our simulation successfully reproduces the modulation factor, defined as the ratio of the amplitude to the mean of the distribution of the azimuthal scattering angles, within ∼5% (relative). Although primarily developed for the PoGOLite mission, the method presented here is also relevant for other missions aiming to measure polarization from astronomical objects using plastic scintillator scatterers.

The paper presents the seismic response of the reactor vessel of a pool-type LMFBR with fluid-structure interaction (FSI). The reactor vessel has a bottom support arrangement, the same core support system as Super-Phenix in France. Due to the bottom support arrangement, the level of core support is lower than with a side support arrangement, so in this reactor vessel the displacement of the core top tends to increase because of the core's rocking. In this study, we investigated the vibration and seismic response characteristics of the reactor vessel. Seismic experiments were carried out using a one-eighth scale model, and the seismic response including FSI and sloshing was investigated. This study clarified the effect of the liquid on the vibration and seismic response characteristics of the reactor vessel, as well as the sloshing characteristics. It was confirmed that FEM analysis with FSI can reproduce the seismic behavior of the reactor vessel and is applicable to the seismic design of pool-type LMFBRs with a bottom support arrangement. (author). 5 refs, 14 figs, 2 tabs

A realistic geometry model of a Bonner sphere system with a spherical 3He-filled proportional counter and 12 polyethylene moderating spheres with diameters ranging from 7.62 cm (3'') to 45.72 cm (18'') is introduced. The MCNP Monte Carlo computer code is used to calculate the responses of this Bonner sphere system to monoenergetic neutrons in the energy range between 1 meV and 20 MeV. The relative uncertainties of the responses due to the Monte Carlo calculations are less than 1% for spheres up to 30.48 cm (12'') in diameter and less than 2% for the 15'' and 18'' spheres. Resonances in the carbon cross section appear as significant structures in the response functions. Additional calculations were made to study the influence of the 3He number density and the polyethylene mass density on the response, as well as the angular dependence of the Bonner sphere system. The calculated responses can be adjusted to a large set of calibration measurements with only a single fit factor common to all sphere diameters and energies.

Background: The development of complex responses to hypoxia has played a key role in the evolution of mammals, as inadequate response to this condition is frequently associated with cardiovascular diseases, developmental disorders, and cancers. Though numerous studies have used mice and rats in order to explore mechanisms that contribute to hypoxia tolerance, these studies are limited due to the high sensitivity of most rodents to severe hypoxia. The blind subterranean mole rat Spalax is a hypoxia-tolerant rodent, which exhibits unique longevity and therefore has invaluable potential in hypoxia and cancer research. Results: Using microarrays, transcript abundance was measured in brain and muscle tissues from Spalax and rat individuals exposed to acute and chronic hypoxia for varying durations. We found that the Spalax global gene expression response to hypoxia differs from that of rat and is characterized by the activation of functional groups of genes that have not been strongly associated with the response to hypoxia in hypoxia-sensitive mammals. Using functional enrichment analysis of Spalax hypoxia-induced genes, we found highly significant overrepresentation of groups of genes involved in anti-apoptosis, cancer, embryonic/sexual development, epidermal growth factor receptor binding, coordinated suppression and activation of distinct groups of transcription factors and membrane receptors, in addition to angiogenesis-related processes. We also detected hypoxia-induced increases of different critical Spalax hub gene transcripts, including antiangiogenic genes associated with cancer tolerance in Down syndrome human individuals. Conclusions: This is the most comprehensive study of the Spalax large-scale gene expression response to hypoxia to date, and the first to use custom Spalax microarrays. Our work presents novel patterns that may underlie mechanisms with critical importance to the evolution of hypoxia tolerance, with special relevance to

In survey research, it is often problematic to ask people sensitive questions because they may refuse to answer or they may provide a socially desirable answer that does not reveal their true status on the sensitive question. To solve this problem Warner (1965) proposed randomized response (RR).
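Warner's (1965) randomized response design can be sketched in a few lines: each respondent answers the sensitive statement with probability p and its negation otherwise, so no individual answer reveals their status, yet the prevalence is recoverable from the observed proportion of 'yes' answers. The simulation below (true prevalence 0.30, spinner probability p = 0.7) is illustrative:

```python
import random

random.seed(1)

def warner_estimate(lam_hat, p):
    """Unbiased estimator of the sensitive-trait prevalence pi from the
    observed 'yes' proportion lam_hat, for spinner probability p != 0.5:
        pi_hat = (lam_hat + p - 1) / (2p - 1)
    """
    return (lam_hat + p - 1.0) / (2.0 * p - 1.0)

true_pi, p, n = 0.30, 0.7, 200_000
yes = 0
for _ in range(n):
    has_trait = random.random() < true_pi
    asks_sensitive = random.random() < p  # spinner outcome, hidden from interviewer
    answer = has_trait if asks_sensitive else not has_trait
    yes += answer

pi_hat = warner_estimate(yes / n, p)
print(round(pi_hat, 3))
```

Because the interviewer never learns which statement was answered, respondents can answer truthfully; the cost is an inflated sampling variance relative to a direct question.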

The fifth order, two-dimensional Raman response in liquid xenon is calculated via a time correlation function (TCF) theory and the numerically exact finite field method. Both employ classical molecular dynamics simulations. The results are shown to be in excellent agreement, suggesting the efficacy of the TCF approach, in which the response function is written approximately in terms of a single classical multitime TCF.

A feed forward three-layer artificial neural network (ANN) model was developed for VLE prediction of ternary systems including ionic liquid (IL) (water+ethanol+1-butyl-3- methyl-imidazolium acetate), in a relatively wide range of IL mass fractions up to 0.8, with the mole fractions of ethanol on IL-free basis fixed separately at 0.1, 0.2, 0.4, 0.6, 0.8, and 0.98. The output results of the ANN were the mole fraction of ethanol in vapor phase and the equilibrium temperature. The validity of the model was evaluated through a test data set, which were not employed in the training case of the network. The performance of the ANN model for estimating the mole fraction and temperature in the ternary system including IL was compared with the non-random-two-liquid (NRTL) and electrolyte non-random-two- liquid (eNRTL) models. The results of this comparison show that the ANN model has a superior performance in predicting the VLE of ternary systems including ionic liquid.
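A feed-forward three-layer network of this kind can be sketched compactly; the toy target below (y = x² on [-1, 1]) stands in for the VLE outputs, and the architecture details (8 tanh hidden units, plain per-sample gradient descent) are illustrative assumptions, not the paper's settings:

```python
import math
import random

random.seed(0)

# Minimal feed-forward three-layer network: 1 input, H tanh hidden units,
# 1 linear output, trained by per-sample gradient descent on a toy target.
H, lr = 8, 0.05
w1 = [random.uniform(-1, 1) for _ in range(H)]  # input -> hidden weights
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]  # hidden -> output weights
b2 = 0.0

data = [(x / 10.0, (x / 10.0) ** 2) for x in range(-10, 11)]

def forward(x):
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return h, sum(w2[j] * h[j] for j in range(H)) + b2

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

loss0 = mse()
for _ in range(300):
    for x, t in data:
        h, y = forward(x)
        err = y - t                                   # output error
        b2 -= lr * err
        for j in range(H):
            grad_h = err * w2[j] * (1.0 - h[j] ** 2)  # backprop through tanh
            w2[j] -= lr * err * h[j]
            w1[j] -= lr * grad_h * x
            b1[j] -= lr * grad_h
loss1 = mse()
print(f"MSE: {loss0:.4f} -> {loss1:.4f}")
```

The paper's network maps three inputs (temperature-independent composition variables) to two outputs (vapor mole fraction and equilibrium temperature); the same training loop generalizes by widening the input and output layers.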

The specific innate modular theory of jealousy hypothesizes that natural selection shaped sexual jealousy as a mechanism to prevent cuckoldry, and emotional jealousy as a mechanism to prevent resource loss. Therefore, men should be primarily jealous over a mate's sexual infidelity and women over a mate's emotional infidelity. Five lines of evidence have been offered as support: self-report responses, psychophysiological data, domestic violence (including spousal abuse and homicide), and morbid jealousy cases. This article reviews each line of evidence and finds only one hypothetical measure consistent with the hypothesis. This, however, is contradicted by a variety of other measures (including reported reactions to real infidelity). A meta-analysis of jealousy-inspired homicides, taking into account base rates for murder, found no evidence that jealousy disproportionately motivates men to kill. The findings are discussed from a social-cognitive theoretical perspective.

Schwartz et al. (2010) recently reported that the total gross energy-generating offshore wind resource in the United States in waters less than 30 m deep is approximately 1000 GW. Estimated offshore generating capacity is thus equivalent to the current generating capacity in the United States, so offshore wind power can play an important role in U.S. electricity production. However, most of this resource is located along the East Coast of the United States and in the Gulf of Mexico, areas frequently affected by tropical cyclones including hurricanes. Hurricane-strength winds and the associated shear and turbulence can affect the performance and structural integrity of wind turbines. In a recent study, Rose et al. (2012) attempted to estimate the risk to offshore wind turbines from hurricane-strength winds over the lifetime of a wind farm (i.e., 20 years). According to Rose et al., turbine tower buckling has been observed in typhoons. They concluded that there is "substantial risk that Category 3 and higher hurricanes can destroy half or more of the turbines at some locations." More robust designs, including appropriate controls, can mitigate the risk of wind turbine damage, and good estimates of turbine loads under hurricane-strength winds are essential to develop such designs. We use output from a large-eddy simulation of a hurricane to estimate shear and turbulence intensity over the first couple of hundred meters above the sea surface. We compute power spectra of the three velocity components at several distances from the eye of the hurricane. Based on these spectra, analytical spectral forms are developed and included in TurbSim, a stochastic inflow turbulence code developed by the National Renewable Energy Laboratory (NREL, http://wind.nrel.gov/designcodes/preprocessors/turbsim/). TurbSim provides a numerical simulation including bursts of coherent turbulence associated with organized turbulent structures. It can generate realistic flow conditions that an operating turbine
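Power spectra of velocity components like those used to build the analytical spectral forms can be computed from any sampled record. The sketch below uses a synthetic series and a direct DFT for illustration (real LES output and an FFT would be used in practice):

```python
import cmath
import math

def periodogram(x):
    """Power spectrum of a real series via a direct DFT
    (O(N^2), adequate for a short illustrative record)."""
    n = len(x)
    spec = []
    for k in range(n // 2 + 1):
        s = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        spec.append(abs(s) ** 2 / n)
    return spec

# Synthetic along-wind velocity record: 8 Hz sampling, with a 1 Hz
# oscillation standing in for an energetic eddy (illustrative values)
fs, n = 8.0, 64
u = [5.0 + 1.5 * math.sin(2 * math.pi * 1.0 * t / fs) for t in range(n)]

spec = periodogram(u)
# Skip the mean (k = 0); the spectral peak should sit at the 1 Hz bin
peak_k = max(range(1, len(spec)), key=lambda k: spec[k])
print(peak_k * fs / n)  # frequency of the spectral peak in Hz; prints 1.0
```

Fitting analytical forms (Kaimal-type spectra, for example) to such periodograms is what allows a stochastic simulator like TurbSim to reproduce the turbulence statistics of the original flow.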

To explore the concept of corporate social responsibility (CSR) within the UK National Health Service (NHS) and to examine how it may be developed to positively influence the psyche, behaviour and performance of NHS managers. Primary research based upon semi-structured individual face-to-face interviews with 20 NHS managers. Theoretical frameworks and concepts relating to organisational culture and CSR are drawn upon to discuss the findings. The NHS managers see themselves as being driven by altruistic core values. However, they feel that the public does not believe that they share the altruistic NHS value system. The study is based on a relatively small sample of NHS managers working exclusively in London and may not necessarily represent the views of managers either London-wide or nation-wide. It is suggested that an explicit recognition by the NHS of the socially responsible commitment of its managers within its CSR strategy would help challenge the existing negative public image of NHS managers and in turn improve the managers' self-esteem and morale. This paper addresses the relative lacuna in research on public sector organisations (such as the NHS) explicitly including the role and commitment of their staff in the way they publicise their CSR strategy. This paper would be of interest to a wide readership including public sector and NHS policy formulators, NHS practitioners, academics and students.

The Density Functional Theory (DFT)/van der Waals-Quantum Harmonic Oscillator-Wannier function (vdW-QHO-WF) method, recently developed to include vdW interactions in approximate DFT by combining the quantum harmonic oscillator model with the maximally localized Wannier function technique, is applied to the cases of atoms and small molecules (X = Ar, CO, H2, H2O) weakly interacting with benzene and with the ideal planar graphene surface. Comparison is also presented with the results obtained by other DFT vdW-corrected schemes, including PBE+D, vdW-DF, vdW-DF2, rVV10, and by the simpler Local Density Approximation (LDA) and semilocal generalized gradient approximation approaches. While for the X-benzene systems all the considered vdW-corrected schemes perform reasonably well, it turns out that an accurate description of the X-graphene interaction requires a proper treatment of many-body contributions and of short-range screening effects, as demonstrated by adopting an improved version of the DFT/vdW-QHO-WF method. We also comment on the widespread practice of relying on LDA to get a rough description of weakly interacting systems.

The fifth order, two-dimensional Raman response in liquid xenon is calculated via a time correlation function (TCF) theory and the numerically exact finite field method. Both employ classical molecular dynamics simulations. The results are shown to be in excellent agreement, suggesting the efficacy...

Background & Aims: Environmental enteric dysfunction (EED), a chronic diffuse inflammation of the small intestine, is associated with stunting in children in the developing world. The pathobiology of EED is poorly understood because of the lack of a method to elucidate the host response. This study tested a novel microarray method to overcome the limitations of RNA sequencing in interrogating the host transcriptome in feces in Malawian children with EED. Methods: In 259 children, EED was measured by lactulose permeability (%L). After isolating low copy numbers of host messenger RNA, the transcriptome was reliably and reproducibly profiled, validated by polymerase chain reaction. Messenger RNA copy number was then correlated with %L and differential expression in EED. The transcripts identified were mapped to biological pathways and processes. The children studied had a range of %L values, consistent with a spectrum of EED from none to severe. Results: We identified 12 transcripts associated with the severity of EED, including chemokines that stimulate T-cell proliferation, Fc fragments of multiple immunoglobulin families, interferon-induced proteins, activators of neutrophils and B cells, and mediators that dampen cellular responses to hormones. EED-associated transcripts mapped to pathways related to cell adhesion and responses to a broad spectrum of viral, bacterial, and parasitic microbes. Several mucins, regulatory factors, and protein kinases associated with the maintenance of the mucous layer were expressed less in children with EED than in normal children. Conclusions: EED represents the activation of diverse elements of the immune system and is associated with widespread intestinal barrier disruption. Differentially expressed transcripts, appropriately enumerated, should be explored as potential biomarkers. Keywords: Environmental Enteropathy, Fecal Transcriptome, Stunting, Intestinal Inflammation

CR-39 detectors are widely used for measuring radon and its progeny in air. In this paper, using Monte Carlo simulation, the possibility of using CR-39 for direct measurement of radon and progeny in water is investigated. Assuming random positions and emission angles for the alpha particles emitted by radon and progeny, the alpha energy and angular spectra arriving at the CR-39, the calibration factor, and the suitable chemical etching depth of CR-39 in air and water were calculated. In this simulation, a range of data were obtained from the SRIM2008 software. The calibration factor of CR-39 in water is calculated as 6.6 (kBq.d/m3)/(track/cm2), which is compatible with the EPA standard level of radon concentration in water (10-11 kBq/m3). Replacing the CR-39 with skin, the volume affected by radon and progeny was determined to be 2.51 mm3 per m2 of skin area. The annual dose conversion factor for radon and progeny was calculated to be between 8.8 and 58.8 nSv/(Bq.h/m3). Using CR-39 for radon measurement in water can be beneficial.
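As a sketch of how the reported calibration factor would be applied in practice (the exposure numbers below are hypothetical, not from the study):

```python
CF = 6.6  # (kBq·d/m^3) per (track/cm^2), the simulated water calibration factor

def radon_concentration(track_density, exposure_days):
    """Water radon concentration (kBq/m^3) inferred from the net CR-39
    track density (tracks/cm^2) over a known exposure time (days)."""
    return CF * track_density / exposure_days

# e.g. 50 tracks/cm^2 after a 30-day exposure -> 11.0 kBq/m^3,
# right at the EPA reference level quoted above
c = radon_concentration(track_density=50.0, exposure_days=30.0)
```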

A nonlinear three-dimensional (3D) single-rack model and a nonlinear 3D whole-pool multi-rack model are developed for the spent fuel storage racks of a nuclear power plant (NPP) to determine impact and frictional motion responses when subjected to 3D excitations from the supporting building floor. The submerged free-standing rack system and surrounding water are coupled through hydrodynamic fluid-structure interaction (FSI) using potential theory. The models developed have features that allow consideration of geometric and material nonlinearities, including (1) the impacts of fuel assemblies on rack cells, of a rack on adjacent racks or pool walls, and of rack support legs on the pool floor; (2) the hydrodynamic coupling of fuel assemblies with their storing racks, and of a rack with adjacent racks, pool walls, and the pool floor; and (3) the dynamic motion behavior of rocking, twisting, and frictional sliding of rack modules. Using these models, 3D nonlinear time history dynamic analyses are performed per the U.S. Nuclear Regulatory Commission (USNRC) criteria. Since few such modeling and analysis results using both 3D single and whole-pool multiple rack models are available in the literature, this paper emphasizes the description of modeling and analysis techniques using the SOLVIA general purpose nonlinear finite element code. Typical response results with different Coulomb friction coefficients are presented and discussed.

Background: Mechanisms that confer an ability to respond positively to environmental osmolarity are fundamental to ensuring embryo survival during the preimplantation period. Activation of p38 mitogen-activated protein kinase (MAPK) occurs following exposure to hyperosmotic treatment. Recently, a novel scaffolding protein called Osmosensing Scaffold for MEKK3 (OSM) was linked to p38 MAPK activation in response to sorbitol-induced hypertonicity. The human ortholog of OSM is cerebral cavernous malformation 2 (CCM2). The present study was conducted to investigate whether CCM2 is expressed during mouse preimplantation development and to determine whether this scaffolding protein is associated with p38 MAPK activation following exposure of preimplantation embryos to hyperosmotic environments. Results: Our results indicate that Ccm2, along with upstream p38 MAPK pathway constituents (Map3k3, Map2k3, Map2k6, and Map2k4), is expressed throughout mouse preimplantation development. CCM2, MAP3K3 and the phosphorylated forms of MAP2K3/MAP2K6 and MAP2K4 were also detected throughout preimplantation development. Embryo culture in hyperosmotic media increased p38 MAPK activity in conjunction with elevated CCM2 levels. Conclusion: These results define the expression of upstream activators of p38 MAPK during preimplantation development and indicate that embryo responses to hyperosmotic environments include elevation of CCM2 and activation of p38 MAPK.

A meta-analysis was conducted to quantitatively evaluate the correlation between night shift work and the risk of colorectal cancer. We searched for publications up to March 2015 using PubMed, Web of Science, Cochrane Library, EMBASE and the Chinese National Knowledge Infrastructure databases, and the references of the retrieved articles and relevant reviews were also checked. OR and 95% CI were used to assess the degree of the correlation between night shift work and risk of colorectal cancer via fixed- or random-effect models. A dose-response meta-analysis was performed as well. The pooled OR estimates of the included studies illustrated that night shift work was correlated with an increased risk of colorectal cancer (OR = 1.318, 95% CI 1.121-1.551). No evidence of publication bias was detected. In the dose-response analysis, the rate of colorectal cancer increased by 11% for every 5-year increase in night shift work (OR = 1.11, 95% CI 1.03-1.20). In conclusion, this meta-analysis indicated that night shift work was associated with an increased risk of colorectal cancer. Further research should be conducted to confirm our findings and clarify the potential biological mechanisms.
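Under the log-linear dose-response assumption implicit in such a meta-regression, the per-5-year OR can be rescaled to other exposure durations. A small sketch (the 20-year figure is an illustrative extrapolation, not a result reported by the meta-analysis):

```python
import math

def rescale_or(or_per_5yr, years):
    """Rescale an odds ratio reported per 5 years of exposure to another
    duration, assuming a log-linear dose-response relationship."""
    return math.exp(math.log(or_per_5yr) * years / 5.0)

or_20yr = rescale_or(1.11, 20)   # 1.11**4, about 1.52
```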

Inhalation of particulate matter in the ambient air has been shown to cause pulmonary morbidity and exacerbate asthma. Alveolar macrophages (AM) are essential for effective removal of inhaled particles and microbes in the lower airways. While some particles have minimal effects on AM function, others inhibit antimicrobial activity or cause cytokine and growth factor production leading to inflammation and tissue remodeling. This study has investigated the effects of water-soluble (s) and insoluble (is) components of Chapel Hill, North Carolina ambient particulate matter in the size ranges 0.1-2.5 microm (PM2.5) and 2.5-10 microm (PM10) diameter on human AM IL-6, TNFalpha, and MCP-1 cytokine production and host defense mechanisms, including phagocytosis and oxidant production. Cytokines were found to be induced by isPM10 to a much higher extent (>50-fold) than by sPM10, which in turn stimulated production better than isPM2.5, while sPM2.5 was inactive. Previous studies have indicated that endotoxin (ETOX) is a component of sPM10 responsible for cytokine production. Here, it is shown that inhibition of isPM10-induced cytokine production was partially achieved with polymyxin B and LPS-binding protein (LBP), but not with a metal chelator, implicating ETOX as a cytokine-inducing moiety also in isPM10. In addition to inducing cytokines, exposure to isPM10, but not the other PM fractions, also inhibited phagocytosis and oxidant generation in response to yeast. This inhibition was ETOX independent. The decrease in host defenses may be the result of apoptosis in the AM population, which was also found to be specifically caused by isPM10. These results show that the functional capacity of AM is selectively modulated by insoluble components of coarse PM, including the biocontaminant ETOX.

The design and theoretical basis of a new database tool that quickly generates vibroacoustic response estimates using a library of transfer functions (TFs) is discussed. During the early stages of a launch vehicle development program, these response estimates can be used to provide vibration environment specifications to hardware vendors. The tool accesses TFs from a database, combines the TFs, and multiplies these by input excitations to estimate vibration responses. The database is populated with two sets of uncoupled TFs: the first set, designated H^s, representing the vibration response of a bare panel, and the second set, designated H^c, representing the response of the free-free component equipment by itself. For a particular configuration undergoing analysis, the appropriate H^s and H^c are selected and coupled to generate an integrated TF, designated H^(s+c). This integrated TF is then used with the appropriate input excitations to estimate vibration responses. This simple yet powerful tool enables a user to estimate vibration responses without directly using finite element models, so long as suitable H^s and H^c sets are defined in the database libraries. The paper discusses the preparation of the database tool and provides the assumptions and methodologies necessary to combine H^s and H^c sets into an integrated H^(s+c). An experimental validation of the approach is also presented.
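The final multiplication step can be sketched as follows. The transfer function and input spectrum here are placeholders, and the H^s/H^c coupling itself (which the tool performs before this step) is not reproduced:

```python
import numpy as np

freqs = np.linspace(20.0, 2000.0, 512)        # Hz
H_sc = 1.0 / (1.0 + 1j * freqs / 500.0)       # placeholder integrated TF H^(s+c)
G_in = np.full_like(freqs, 0.01)              # flat input acceleration PSD, g^2/Hz

G_resp = np.abs(H_sc) ** 2 * G_in             # response PSD estimate
grms = np.sqrt(np.trapz(G_resp, freqs))       # overall root-mean-square level
```

The `|H|^2` relation holds for any linear time-invariant path between input and response PSDs; the database tool's value lies in supplying a realistic, experimentally anchored H^(s+c) in place of the toy one used here.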

The systemic inflammatory response syndrome (SIRS) has been advocated as a significant predictor of outcome in trauma. Recent trauma literature has proposed SIRS as a surrogate for the physiological derangements characteristic of polytrauma, with some authors recommending its inclusion in the definition of polytrauma. The practicality of daily SIRS collection outside of specifically designed prospective trials is unknown. The purpose of this study was to assess the availability of SIRS variables and their appropriateness for inclusion in a definition of polytrauma. We hypothesised that SIRS variables would be readily available and easy to collect, and would thus represent an appropriate inclusion in the definition of polytrauma. A prospective observational study of all trauma team activation patients over 7 months (August 2009 to February 2010) at a university-affiliated level-1 urban trauma centre was conducted. SIRS data (temperature >38°C or <36°C; heart rate >90 bpm; RR >20/min or PaCO2 <32 mmHg; WBC >12.0×10^9 L^-1 or <4.0×10^9 L^-1, or >10% immature bands) were collected from presentation, at 24 h intervals, until 72 h post injury. Inclusion criteria were all patients aged >16 generating a trauma team activation response. 336 patients met the inclusion criteria. In 46% (155/336), serial SIRS scores could not be calculated due to missing data. The lowest rates of missing data were observed on admission [3% (11/336)]. Stratified by ISS>15 (132/336), in 7% (9/132) serial SIRS scores could not be calculated due to missing data. In the 123 patients with ISS>15 and complete data, 81% (100/123) developed SIRS. For Abbreviated Injury Scale (AIS)>2 in at least 2 body regions (64/336), in 5% (3/64) serial SIRS scores could not be calculated, with 92% (56/61) of patients with complete data developing SIRS. For direct ICU admissions [25% (85/336)], 5% (4/85) of patients could not have serial SIRS calculated [mean ISS 15(±11)], and 90% (73/81) developed SIRS at least once over 72 h. Based on the experience of our level-1 trauma centre, the practicability of including SIRS into the...

In order to plan radiation damage experiments in fission reactors keyed toward fusion reactor applications, it is necessary to have available for these facilities displacements per atom (dpa) and gas production rates for many potential materials. This report supplies such data for the elemental constituents of alloys of interest to the United States fusion reactor alloy development program. The calculations are presented for positions of interest in the HFIR, ORR, and EBR-II reactors. The dpa and gas production rates in alloys of interest can be synthesized from these results.

We derive equations for the effective concentration giving 10% inhibition (EC10) with 95% confidence limits for probit (log-normal), Weibull, and logistic dose-response models on the basis of experimentally derived median effective concentrations (EC50s) and the curve slope at the central point (50% inhibition). For illustration, data from closed, freshwater algal assays are analyzed using the green alga Pseudokirchneriella subcapitata with growth rate as the response parameter. Dose-response regressions for four test chemicals (tetraethylammonium bromide, musculamine, benzonitrile, and 4...
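For the logistic (log-logistic) model, the closed form relating ECp to EC50 and the slope is simple; a sketch follows. Probit and Weibull have analogous but different expressions, and the confidence-limit propagation derived in the paper is not shown here.

```python
def ecp_loglogistic(ec50, slope, p):
    """Effective concentration giving p% inhibition for a two-parameter
    log-logistic dose-response curve: ECp = EC50 * (p/(100-p))**(1/slope)."""
    return ec50 * (p / (100.0 - p)) ** (1.0 / slope)

# e.g. EC50 = 10 mg/L with Hill slope 2 gives EC10 = 10*(1/9)**0.5, about 3.33 mg/L
ec10 = ecp_loglogistic(10.0, 2.0, 10.0)
```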

, and certain limitations of the theory are considered. Examples of the use of the program are given for 60Co γ-ray irradiation of a LiF dosimeter held in aluminum and for evaluation of the influence of changes in broad γ-ray spectra on the response of several dosimeters. The BASIC program and typical data plots...

The objective of this calculation is to evaluate the structural response of the waste package during the horizontal and vertical lifting operations in order to support the waste package lifting feature design. The scope of this calculation includes the evaluation of the 21 PWR UCF (pressurized water reactor uncanistered fuel) waste package, naval waste package, 5 DHLW/DOE SNF (defense high-level waste/Department of Energy spent nuclear fuel)--short waste package, and 44 BWR (boiling water reactor) UCF waste package. Procedure AP-3.12Q, Revision 0, ICN 0, Calculations, is used to develop and document this calculation.

An approach combining the finite element and boundary element methods is proposed to calculate the elastic vibration of, and acoustic field radiated from, an underwater structure. The FEM software NASTRAN is employed for computation of the structural vibration. An uncoupled boundary element method, based on the potential decomposition technique, is described to determine the acoustic added-mass and damping coefficients that result from fluid loading effects. The acoustic added-mass and damping matrices are then added to the structural mass and damping matrices, respectively, via the DMAP modules of NASTRAN, and the complex eigenvalue analyses of the underwater structure are obtained with NASTRAN solution sequence SOL107. Numerical results are shown to be in good agreement with experimental data. Results obtained from this study suggest that the natural frequencies of underwater structures are only weakly dependent on the acoustic frequency if the acoustic wavelength is roughly twice as large as the maximum structural dimension.
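The effect of the added-mass step can be illustrated with a toy two-degree-of-freedom system. The matrices below are invented for illustration; the paper's BEM computation of the added-mass coefficients is not reproduced.

```python
import numpy as np

M = np.diag([2.0, 1.0])                      # structural mass matrix (kg)
K = np.array([[300.0, -100.0],
              [-100.0, 100.0]])              # stiffness matrix (N/m)
Ma = np.diag([0.5, 0.3])                     # acoustic added-mass matrix

def natural_freqs(mass, stiff):
    """Undamped natural frequencies (rad/s) of the generalized eigenproblem
    K x = w^2 M x, via the equivalent standard problem M^-1 K."""
    lam = np.linalg.eigvals(np.linalg.solve(mass, stiff))
    return np.sort(np.sqrt(lam.real))

w_dry = natural_freqs(M, K)
w_wet = natural_freqs(M + Ma, K)             # fluid loading lowers every mode
```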

Four different numerical algorithms suitable for a linear scaling implementation of time-dependent Hartree-Fock and Kohn-Sham self-consistent field theories are examined. We compare the performance of modified Lanczos, Arnoldi, Davidson, and Rayleigh quotient iterative procedures to solve the random-phase approximation (RPA) (non-Hermitian) and Tamm-Dancoff approximation (TDA) (Hermitian) eigenvalue equations in the molecular orbital-free framework. Semiempirical Hamiltonian models are used to numerically benchmark the algorithms for the computation of excited states of realistic molecular systems (conjugated polymers and carbon nanotubes). Convergence behavior and stability are tested with respect to numerical noise imposed to simulate linear scaling conditions. The results single out the most suitable procedures for linear scaling large-scale time-dependent perturbation theory calculations of electronic excitations.
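For the Hermitian (TDA) case, a bare-bones Lanczos iteration conveys the idea; this sketch uses full reorthogonalization and a dense random test matrix, unlike the noise-perturbed, matrix-free setting benchmarked in the paper:

```python
import numpy as np

def lanczos_lowest(A, k=60, rng=None):
    """Bare-bones Lanczos estimate of the lowest eigenvalue of a symmetric
    matrix, with full reorthogonalization for stability in this toy setting."""
    rng = rng or np.random.default_rng(0)
    q = rng.standard_normal(A.shape[0])
    q /= np.linalg.norm(q)
    Q, alpha, beta = [q], [], []
    for _ in range(k):
        w = A @ Q[-1]
        a = Q[-1] @ w
        alpha.append(a)
        w = w - a * Q[-1] - (beta[-1] * Q[-2] if beta else 0.0)
        for qq in Q:                        # full reorthogonalization
            w -= (qq @ w) * qq
        b = np.linalg.norm(w)
        if b < 1e-12:                       # invariant subspace reached
            break
        beta.append(b)
        Q.append(w / b)
    T = (np.diag(alpha)
         + np.diag(beta[:len(alpha) - 1], 1)
         + np.diag(beta[:len(alpha) - 1], -1))
    return np.linalg.eigvalsh(T)[0]         # lowest Ritz value

rng = np.random.default_rng(1)
B = rng.standard_normal((200, 200))
A = (B + B.T) / 2.0                         # random symmetric test matrix
lam = lanczos_lowest(A)
```

The Ritz value converges to the extreme eigenvalue after far fewer iterations than the matrix dimension, which is what makes Krylov procedures attractive for linear scaling implementations.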

One of the greatest difficulties traditionally found in stainless steel construction has been the execution of welded joints. At present, the available technology allows us to use arc welding processes for that application without any disadvantage. Response surface methodology is used to optimise a process in which the participating variables are not related to each other by a known mathematical law, so an empirical model must be formulated. With this methodology, the optimisation of a selected variable may be carried out. In this work, the cooling time from 800 to 500°C, t8/5, after a TIG welding operation is modelled by the response surface method. The arc power, the welding speed and the thermal efficiency factor are considered as the variables that influence the t8/5 value. Cooling times t8/5 for different combinations of values of the variables are first determined by a numerical method, with the input values for the variables established experimentally. The results indicate that response surface methodology may be considered a valid technique for these purposes.
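The fitting step can be sketched generically as a full second-order polynomial in the three variables, fitted by least squares to pre-computed cooling times. The data below are synthetic stand-ins for the numerically determined t8/5 values; the functional form and coefficients are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.uniform(1.0, 3.0, 40)       # arc power (kW)
v = rng.uniform(2.0, 6.0, 40)       # welding speed (mm/s)
eta = rng.uniform(0.6, 0.8, 40)     # thermal efficiency factor
# Synthetic stand-in for the numerically computed cooling times (s):
t85 = 4.0 * (eta * P / v) ** 2 + rng.normal(0.0, 0.01, 40)

# Full second-order response surface: linear, quadratic and interaction terms
X = np.column_stack([np.ones_like(P), P, v, eta,
                     P**2, v**2, eta**2, P * v, P * eta, v * eta])
coef, *_ = np.linalg.lstsq(X, t85, rcond=None)

def predict(p, vv, e):
    """Evaluate the fitted response surface at one operating point."""
    x = np.array([1.0, p, vv, e, p**2, vv**2, e**2, p * vv, p * e, vv * e])
    return float(x @ coef)
```

Once fitted, the polynomial surface can be interrogated cheaply to find operating conditions giving a target t8/5, which is the optimisation use the abstract describes.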

Lift engineers and lift companies involved in the design of new products or in the research and development of improved components demand a predictive tool for the response of the slender lift system before testing expensive prototypes. A method for solving the movement of any specified lift system by means of a computer program is presented. The mechanical response of the lift operating in a user-defined installation and configuration is derived for a given excitation and for the configuration parameters of real electric motors and their control systems. A mechanical model with 6 degrees of freedom is used. The governing equations are integrated step by step with a Runge-Kutta algorithm on the MATLAB platform. Input data consist of the set-point speed for a standard trip and the control parameters of a number of controllers and lift drive machines. The computer program computes and plots very accurately the vertical displacement, velocity, instantaneous acceleration and jerk time histories of the car, counterweight, frame, passengers/loads and lift drive in a standard trip between any two floors of the desired installation. The resulting torque, rope tension and deviation of the velocity plot with respect to the set-point speed are shown. The software design is implemented in a demo release of the computer program called ElevaCAD. Furthermore, the program offers the possibility to select the configuration of the lift system and the performance parameters of each component. In addition to the overall system response, detailed information on transients, vibrations of the lift components, ride quality levels, modal analysis and frequency spectrum (FFT) is plotted.

In this paper, we show the outline of NRESPG and some typical results for the response functions and efficiencies of several kinds of gas counters. The cross section data for the various filling gases and for the wall material of stainless steel or aluminum are taken mainly from ENDF/B-IV. ENDF/B-V data for stainless steel are also used to investigate the influence of the difference between nuclear data files on the pulse height spectra of the gas counters. (J.P.N.)

We present an algorithm for fast and accurate computation of the local dose distribution in MeV beams of protons, carbon ions or other heavy charged particles. It uses compound Poisson modeling of track interaction and successive convolutions for fast computation. It can handle arbitrarily complex...... mixed particle fields over a wide range of fluences. Since the local dose distribution is the essential part of several approaches to model detector efficiency and cellular response, it has potential use in ion-beam dosimetry, radiotherapy, and radiobiology....
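The successive-convolution construction can be sketched for a discretized single-track dose distribution. This is a minimal illustration of the compound Poisson idea, not the paper's algorithm, and the single-track distribution below is invented.

```python
import numpy as np
from math import exp, sqrt

def compound_poisson(f1, mean_tracks):
    """Local dose distribution for a Poisson number of independent tracks,
    each with discretized single-track dose pmf f1, built by successive
    convolutions: f = sum_n P(n; mu) * f1^(*n)."""
    mu = mean_tracks
    n_max = int(mu + 10.0 * sqrt(mu) + 10.0)       # truncate the Poisson sum
    total = np.zeros(n_max * (len(f1) - 1) + 1)
    total[0] = exp(-mu)                            # n = 0: no dose deposited
    fn = np.array([1.0])                           # delta at zero dose
    p = exp(-mu)
    for n in range(1, n_max + 1):
        fn = np.convolve(fn, f1)                   # f1 convolved n times
        p *= mu / n                                # P(n) = P(n-1) * mu / n
        total[:len(fn)] += p * fn
    return total / total.sum()

f1 = np.array([0.0, 0.5, 0.3, 0.2])                # invented single-track pmf
f = compound_poisson(f1, mean_tracks=3.0)          # dose pmf on the same grid
```

The mean of the resulting distribution equals the mean track count times the mean single-track dose, a useful sanity check on the convolution chain.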

The objective of this effort was to evaluate the piping analysis part of the SMACS code for estimating the response of realistic piping systems subjected to seismic excitation, given as multiple, independent support acceleration histories. The experimental data from the seismic testing of an in-plant piping system at the HDR Test Facility in Germany were used for this purpose. Of the six different support systems tested, two were selected for the evaluation: one a ''stiff'' configuration containing both struts and snubbers, and the other, a more flexible configuration with no snubbers. Described are the analytical modeling, calculations, and results of the posttest simulation of two sets each for both support configurations, with excitations at 100% and 200/300% of safe-shutdown-earthquake loading. Almost all the calculated peak response quantities were smaller (by different amounts) than the corresponding test measurements. However, pipe displacements and bending stresses were better estimated than the pipe accelerations and support forces. The discrepancies are mainly attributable to the inability of the linear analysis to model the nonlinear behavior of the VKL piping system, characterized by gaps in support connections and the friction at pipe clamps. A similar trend of underestimating test responses was observed in the linear analyses performed by other investigators. 13 refs., 14 figs., 8 tabs

A Response Matrix Monte Carlo (RMMC) method has been developed for solving electron transport problems. This method was born of the need to have a reliable, computationally efficient transport method for low energy electrons (below a few hundred keV) in all materials. Today, condensed history methods are used, which reduce the computation time by modeling the combined effect of many collisions but fail at low energy because of the assumptions required to characterize the electron scattering. Analog Monte Carlo simulations are prohibitively expensive since electrons undergo coulombic scattering with little state change after each collision. The RMMC method attempts to combine the accuracy of an analog Monte Carlo simulation with the speed of the condensed history methods. Like condensed history, the RMMC method uses probability distribution functions (PDFs) to describe the energy and direction of the electron after several collisions. However, unlike the condensed history method, the PDFs are based on an analog Monte Carlo simulation over a small region. Condensed history theories require assumptions about the electron scattering to derive the PDFs for direction and energy. Thus the RMMC method samples from PDFs which more accurately represent the electron random walk. Results show good agreement between the RMMC method and analog Monte Carlo. 13 refs., 8 figs
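The idea of tabulating PDFs from a short analog stage and then sampling them cheaply can be caricatured in a few lines. This is a conceptual toy, not the RMMC implementation; the collision model and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def analog_stage(n, e0=100.0, collisions=200):
    """Toy analog stage: each electron suffers many small, nearly
    state-preserving energy losses (keV) while crossing a small region."""
    e = np.full(n, e0)
    for _ in range(collisions):
        e -= rng.exponential(0.05, n)
    return e

# Tabulate the "response matrix": an empirical exit-energy PDF for the region
e_out = analog_stage(20000)
hist, edges = np.histogram(e_out, bins=50)
cdf = np.cumsum(hist) / hist.sum()
cdf[-1] = 1.0                                  # guard against rounding

def sample_exit_energy(m):
    """RMMC-style fast step: draw exit energies from the stored PDF instead
    of re-simulating every individual small-angle collision."""
    u = rng.random(m)
    idx = np.searchsorted(cdf, u)
    return edges[idx] + rng.random(m) * (edges[idx + 1] - edges[idx])
```

Once the tabulation exists, each subsequent region crossing costs one table lookup rather than hundreds of collision samples, which is the speed advantage the abstract describes.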

This paper presents analyses of the seismic responses of two reinforced concrete buildings monitored for a period of more than two years. One of the structures was a three-storey reinforced concrete (RC) frame building with a shear core, while the other was a three-storey RC frame building without a core. Both buildings are part of the same large complex but are seismically separated from the rest of it. Statistical analysis of the relationships between maximum free-field accelerations and responses at different points on the buildings was conducted and demonstrated a strong correlation between them. System identification studies using recorded accelerations were undertaken and revealed that the natural frequencies and damping ratios of the building structures vary during different earthquake excitations. This variation was statistically examined, and relationships between the identified natural frequencies and damping ratios and the peak response acceleration at the roof level were developed. A general trend of decreasing modal frequencies and increasing damping ratios was observed with increased levels of shaking and response. Moreover, the influence of soil-structure interaction (SSI) on the modal characteristics was evaluated. SSI effects decreased the modal frequencies and increased some of the damping ratios.

We report on the response of a prototype CMS hadron calorimeter module to charged particle beams of pions, muons, and electrons with momenta up to 375 GeV/c. The data were taken at the H2 and H4 beamlines at CERN in 1995 and 1996. The prototype sampling calorimeter used copper absorber plates and scintillator tiles with wavelength shifting fibers for readout. The effects of a magnetic field of up to 3 Tesla on the response of the calorimeter to muons, electrons, and pions are presented, and the effects of an upstream lead tungstate crystal electromagnetic calorimeter on the linearity and energy resolution of the combined calorimetric system to hadrons are evaluated. The results are compared with Monte Carlo simulations and are used to optimize the choice of total absorber depth, sampling frequency, and longitudinal readout segmentation.

A method is presented for simulating arrays of spatially varying ground motions, incorporating the effects of incoherence, wave passage, and differential site response. Non‐stationarity is accounted for by considering the motions as consisting of stationary segments. Two approaches are developed...... of multiply‐supported structures. In the second approach, simulated motions are conditioned on the segmented record itself and exhibit increasing variance with distance from the site of the observation. For both approaches, example simulated motions are presented for an existing bridge model employing two...... alternatives for modeling the local soil response: i) idealizing each soil‐column as a single‐degree‐of‐freedom oscillator, and ii) employing the theory of vertical wave propagation in a single soil layer over bedrock. The selection of parameters in the simulation procedure and their effects...

Changes in gene expression induced by application of H2O2, O2·- generating agents (methyl viologen, digitonin) and gamma irradiation to tomato suspension cultures were investigated and compared to the well-described heat shock response. Two-dimensional gel protein mapping analyses gave the first indication that at least the small heat shock proteins (smHSP) accumulated in response to application of H2O2 and gamma irradiation, but not to O2·- generating agents. While some proteins seemed to be induced specifically by each treatment, only part of the heat shock response was observed. On the basis of Northern hybridization experiments performed with four heterologous cDNAs, corresponding to classes I-IV of pea smHSP, it could be concluded that significant amounts of class I and II smHSP mRNA are induced by H2O2 and by irradiation. Taken together, these results demonstrate that in plants some HSP genes are inducible by oxidative stresses, as in micro-organisms and other eukaryotic cells. HSP22, the main stress protein that accumulates following H2O2 action or gamma irradiation, was also purified. Sequence homology of amino-terminal and internal sequences, and immunoreactivity with a Chenopodium rubrum mitochondrial smHSP antibody, indicated that the protein belongs to the recently discovered class of plant mitochondrial smHSP. Heat shock or a mild H2O2 pretreatment was also shown to protect plant cells against oxidative injury. Therefore, the synthesis of these stress proteins can be considered an adaptive mechanism in which mitochondrial protection could be essential.

Current predictions on species responses to climate change strongly rely on projecting altered environmental conditions on species distributions. However, it is increasingly acknowledged that climate change also influences species interactions. We review and synthesize literature information on biotic interactions and use it to argue that the abundance of species and the direction of selection during climate change vary depending on how their trophic interactions become disrupted. Plant abundance can be controlled by aboveground and belowground multitrophic level interactions with herbivores, pathogens, symbionts and their enemies. We discuss how these interactions may alter during climate change and the resulting species range shifts. We suggest conceptual analogies between species responses to climate warming and exotic species introduced in new ranges. There are also important differences: the herbivores, pathogens and mutualistic symbionts of range-expanding species and their enemies may co-migrate, and the continuous gene flow under climate warming can make adaptation in the expansion zone of range expanders different from that of cross-continental exotic species. We conclude that under climate change, results of altered species interactions may vary, ranging from species becoming rare to disproportionately abundant. Taking these possibilities into account will provide a new perspective on predicting species distribution under climate change.

The static response of coated microbubbles is investigated with a novel approach employed for modeling contact between a microbubble and the cantilever of an atomic force microscope. Elastic tensions and moments are described via appropriate constitutive laws. The encapsulated gas is assumed to undergo isothermal variations. Due to the hydrophilic nature of the cantilever, an ultrathin aqueous film is formed, which transfers the force onto the shell. An interaction potential describes the local pressure applied on the shell. The problem is solved in axisymmetric form with the finite element method. The response is governed by the dimensionless bending stiffness, k̂_b = k_b/(χR₀²), pressure, P̂_A = P_A R₀/χ, and interaction potential, Ŵ = w₀/χ. Hard polymeric shells have negligible resistance to gas compression, while for the softer lipid shells gas compressibility is comparable with shell elasticity. As the external force increases, numerical simulations reveal that the force versus deformation (f vs d) curve of polymeric shells exhibits a transition from the linear O(d) (Reissner) regime, marked by flattened shapes around the contact region, to a nonlinear O(d^(1/2)) (Pogorelov) regime dominated by shapes exhibiting crater formation due to buckling. When lipid shells are tested, buckling is bypassed as the external force increases and flattened shapes prevail in an initially linear f vs d curve. Transition to an upward-curving regime is observed as the force increases, where gas compression and area dilatation form the dominant balance, providing a nonlinear regime with an O(d³) dependence. Asymptotic analysis recovers the above patterns and facilitates estimation of the shell mechanical properties.
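
The three dimensionless groups can be computed directly from shell and ambient parameters; a minimal sketch with assumed lipid-shell values (illustrative only, not the paper's cases):

```python
# Dimensionless groups governing the static response of a coated
# microbubble; symbols follow the abstract, parameter values are assumed.
def dimensionless_groups(kb, chi, R0, PA, w0):
    """kb: bending modulus (N*m), chi: area dilatation modulus (N/m),
    R0: rest radius (m), PA: ambient pressure (Pa), w0: interaction
    potential depth (N/m)."""
    kb_hat = kb / (chi * R0**2)   # dimensionless bending stiffness
    PA_hat = PA * R0 / chi        # dimensionless pressure
    W_hat = w0 / chi              # dimensionless interaction potential
    return kb_hat, PA_hat, W_hat

# Rough lipid-shell values (assumed for illustration).
kb_hat, PA_hat, W_hat = dimensionless_groups(
    kb=3e-18, chi=0.1, R0=2e-6, PA=101325.0, w0=1e-4)
```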

The relativistic calculation of nuclear magnetic shielding tensors in hydrogen halides is performed using the second-order regular approximation to the normalized elimination of the small component (SORA-NESC) method with the inclusion of the perturbation terms from the metric operator. This computational scheme is denoted SORA-Met. The SORA-Met calculation yields anisotropies, Δσ = σ∥ − σ⊥, for the halogen nuclei in hydrogen halides that are too small. In the NESC theory, the small component of the spinor is coupled to the large component via the operator (σ·π)U/(2c), in which π = p + A, U is a nonunitary transformation operator, and c ≈ 137.036 a.u. is the velocity of light. The operator U depends on the vector potential A (i.e., the magnetic perturbations in the system) at leading order c⁻², and the magnetic perturbation terms of U contribute to the Hamiltonian and metric operators of the system at leading order c⁻⁴. It is shown that the small Δσ for halogen nuclei found in our previous studies is related to the neglect of the U^(0,1) perturbation operator of U, which is independent of the external magnetic field and of first order with respect to the nuclear magnetic dipole moment. Introduction of gauge-including atomic orbitals and a finite-size nuclear model is also discussed.

An atomic orbital density matrix based response formulation of the nuclei-selected approach of Beer, Kussmann, and Ochsenfeld [J. Chem. Phys. 134, 074102 (2011)] to calculate nuclear magnetic resonance (NMR) shielding tensors has been developed and implemented into LSDalton allowing for a simultaneous solution of the response equations, which significantly improves the performance. The response formulation to calculate nuclei-selected NMR shielding tensors can be used together with the density-fitting approximation that allows efficient calculation of Coulomb integrals. It is shown that using density-fitting does not lead to a significant loss in accuracy for both the nuclei-selected and the conventional ways to calculate NMR shielding constants and should thus be used for applications with LSDalton.

Dietary starch is required for a dry, extruded kibble, the most common diet type for domesticated felines in North America. However, the amount and source of dietary starch may affect digestibility and metabolism of other macronutrients. The objectives of this study were to evaluate the effects of 3 commercial cat diets on in vivo and in vitro energy and macronutrient digestibility, and to analyze the accuracy of the modified Atwater equation. Dietary treatments differed in their perceived glycemic response (PGR) based on ingredient composition and carbohydrate content (34.1, 29.5, and 23.6% nitrogen-free extract for High, Medium, and LowPGR, respectively). A replicated 3 × 3 Latin square design was used, with 3 diets and 3 periods. In vivo apparent protein, fat, and organic matter digestibility differed among diets, while apparent dry matter digestibility did not. Cats were able to efficiently digest and absorb macronutrients from all diets. Furthermore, the modified Atwater equation underestimated measured metabolizable energy by approximately 12%. Thus, the modified Atwater equation does not accurately determine the metabolizable energy of high quality feline diets. Further research should focus on understanding carbohydrate metabolism in cats, and establishing an equation that accurately predicts the metabolizable energy of feline diets.
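
The modified Atwater equation applies fixed energy factors to the analyzed macronutrient content; a minimal sketch (the factors below are the commonly used pet-food values of 3.5, 8.5, and 3.5 kcal/g for protein, fat, and nitrogen-free extract; the composition figures are illustrative, not the study's data):

```python
# Modified Atwater estimate of metabolizable energy (ME) for a pet food.
# Factors (kcal/g): 3.5 protein, 8.5 fat, 3.5 nitrogen-free extract (NFE).
def modified_atwater_me(protein_g, fat_g, nfe_g):
    """ME in kcal for the given macronutrient masses (g)."""
    return 3.5 * protein_g + 8.5 * fat_g + 3.5 * nfe_g

# Illustrative composition (g per 100 g diet), not the study's data.
me_predicted = modified_atwater_me(protein_g=32.0, fat_g=14.0, nfe_g=29.5)

# The study found measured ME exceeded this prediction by ~12%.
me_measured_approx = me_predicted * 1.12
```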

Recently, we have demonstrated that the problems of finding a suitable adiabatic approximation in time-dependent one-body reduced density matrix functional theory can be remedied by introducing an additional degree of freedom to describe the system: the phase of the natural orbitals [K. J. H. Giesbertz, O. V. Gritsenko, and E. J. Baerends, Phys. Rev. Lett. 105, 013002 (2010); K. J. H. Giesbertz, O. V. Gritsenko, and E. J. Baerends, J. Chem. Phys. 133, 174119 (2010)]. In this article we show in detail how the frequency-dependent response equations give the proper static limit (ω → 0), including the perturbation in the chemical potential, which is required in static response theory to ensure the correct number of particles. Additionally, we show results for the polarizability of H2 and compare the performance of two different two-electron functionals: the phase-including Löwdin-Shull functional and the density matrix form of the Löwdin-Shull functional.

PURPOSE: Central venous pressure (CVP) has been shown to have poor predictive value for fluid responsiveness in critically ill patients. We aimed to re-evaluate this in a larger sample subgrouped by baseline CVP values. METHODS: In April 2015, we systematically searched and included all clinical … studies evaluating the value of CVP in predicting fluid responsiveness. We contacted investigators for patient data sets. We subgrouped data as lower (… 12 mmHg) baseline CVP. RESULTS: We included 51 studies; in the majority, mean/median CVP values were … the lower 95% CI crossed 0.50. We identified some positive and negative predictive value for fluid responsiveness for specific low and high values of CVP, respectively, but none of the predictive values were above 66% for any CVP from 0 to 20 mmHg. There were less data on higher CVPs, in particular >15 mmHg …

MACK-IV calculates nuclear response functions important to the neutronics analysis of nuclear and fusion systems. A central part of the code deals with the calculation of the nuclear response function for nuclear heating more commonly known as the kerma factor. Pointwise and multigroup neutron kerma factors, individual reactions, helium, hydrogen, and tritium production response functions are calculated from any basic nuclear data library in ENDF/B format. The program processes all reactions in the energy range of 0 to 20 MeV for fissionable and nonfissionable materials. The program also calculates the gamma production cross sections and the gamma production energy matrix. A built-in computational capability permits the code to calculate the cross sections in the resolved and unresolved resonance regions from resonance parameters in ENDF/B with an option for Doppler broadening. All energy pointwise and multigroup data calculated by the code can be punched, printed and/or written on tape files. Multigroup response functions (e.g., kerma factors, reaction cross sections, gas production, atomic displacements, etc.) can be outputted in the format of MACK-ACTIVITY-Table suitable for direct use with current neutron (and photon) transport codes
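
The multigroup processing described can be sketched as a flux-weighted collapse of a pointwise kerma factor over each energy group (a generic illustration of the operation, with toy data; not MACK-IV's actual algorithm or data):

```python
import numpy as np

# Flux-weighted collapse of a pointwise kerma factor k(E) into group
# values, k_g = ∫ k(E) φ(E) dE / ∫ φ(E) dE over each group.
def collapse_kerma(E, k, phi, group_edges):
    dE = E[1] - E[0]                      # uniform energy grid assumed
    kg = []
    for lo, hi in zip(group_edges[:-1], group_edges[1:]):
        m = (E >= lo) & (E < hi)
        kg.append((k[m] * phi[m]).sum() * dE / (phi[m].sum() * dE))
    return np.array(kg)

E = np.linspace(0.1, 20.0, 400)           # energy grid (MeV)
phi = 1.0 / E                             # toy 1/E weighting spectrum
k = 1e-11 * E                             # toy pointwise kerma factor
kg = collapse_kerma(E, k, phi, group_edges=[0.1, 1.0, 10.0, 20.0])
```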

The aim of this report is to present a calculation of the authorities' total future costs for their activities to monitor a safe and prudent decommissioning of the power plants, including the long-term storage of the used fuel. An assessment of the inherent uncertainty in this estimate is also made. The study forms an integrated part of the total monitoring of the financial assessment of the whole programme of decommissioning and demolition of the Swedish nuclear power plants that is continuously made by SKI. Hence, the estimate is presented with the additional function of supporting SKI's annual calculations of fees and contingencies in accordance with the Swedish Finance Act. Main result: the expected net present value of the authorities' costs, at the January 2004 price level, has been estimated as a mean value (M) of 2303 MSEK with a standard deviation (S) of 538 MSEK. This result tallies with the corresponding prognoses for the last two years. An additional clarification of a number of key figures resulted in some reduction of the total net present value. By way of supplement to this main result, the undiscounted costs have been estimated (calculation 2). Besides, a tentative estimate has also been made by incorporating the official view on the future development of the real rate of return in a long time perspective (calculation 3). The uncertainty of the result is significant in terms of the final budget figures and of the scale of the supplementary amounts in those situations where the budgets must be prudent and conservative. There may be a potential for further reduction of the current uncertainty in that the greatest causes of uncertainty have now been identified and ranked in order of priority. The greatest causes of uncertainty are: (1) correction allowing for the uncertainty of the real interest rate (uncertainty group N, priority 55%); (2) productivity (group E2, 8%); (3) uncertainty in the current base …
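
The discounting behind such an estimate can be sketched as a standard net-present-value computation (generic formula with illustrative figures; the report's actual cash flows, real discount rate, and uncertainty model are not reproduced here):

```python
# Net present value of a future cost stream (generic sketch).
def npv(cash_flows, rate):
    """cash_flows[t] is the cost in year t (t = 0, 1, 2, ...);
    rate is the real discount rate per year."""
    return sum(c / (1.0 + rate) ** t for t, c in enumerate(cash_flows))

# Illustrative: 100 MSEK per year for 30 years at a 2.5% real rate.
total = npv([100.0] * 30, rate=0.025)
```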

National Oceanic and Atmospheric Administration, Department of Commerce — The Magnetic Field Calculator will calculate the total magnetic field, including components (declination, inclination, horizontal intensity, northerly intensity,...
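
The listed components follow the standard geomagnetic definitions derived from the field vector (X north, Y east, Z down, in nT); a sketch of those definitions, not NOAA's calculator code:

```python
import math

# Derived geomagnetic components from the field vector (standard
# definitions; function name and example values are illustrative).
def field_components(X, Y, Z):
    H = math.hypot(X, Y)                   # horizontal intensity
    F = math.sqrt(X * X + Y * Y + Z * Z)   # total intensity
    D = math.degrees(math.atan2(Y, X))     # declination (east positive)
    I = math.degrees(math.atan2(Z, H))     # inclination (down positive)
    return H, F, D, I

# Illustrative mid-latitude field vector, in nT.
H, F, D, I = field_components(X=20000.0, Y=1000.0, Z=45000.0)
```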

We report optimized auxiliary basis sets for use with the Karlsruhe segmented contracted basis sets including moderately diffuse basis functions (Rappoport and Furche, J. Chem. Phys., 2010, 133, 134105) in resolution-of-the-identity (RI) post-self-consistent field (post-SCF) computations for the elements H-Rn (except lanthanides). The errors of the RI approximation using optimized auxiliary basis sets are analyzed on a comprehensive test set of molecules containing the most common oxidation states of each element and do not exceed those of the corresponding unaugmented basis sets. During these studies an unsatisfying performance of the def2-SVP and def2-QZVPP auxiliary basis sets for Barium was found and improved sets are provided. We establish the versatility of the def2-SVPD, def2-TZVPPD, and def2-QZVPPD basis sets for RI-MP2 and RI-CC (coupled-cluster) energy and property calculations. The influence of diffuse basis functions on correlation energy, basis set superposition error, atomic electron affinity, dipole moments, and computational timings is evaluated at different levels of theory using benchmark sets and showcase examples.

This paper presents a study on the seismic response trends evaluation and finite element model updating of a reinforced concrete building monitored for a period of more than two years. The three storey reinforced concrete building is instrumented with five tri-axial accelerometers and a free-field tri-axial accelerometer. The time domain N4SID system identification technique was used to obtain the frequencies and damping ratios considering flexible base models taking into account the soil-structure-interaction (SSI) using 50 earthquakes. Trends of variation of seismic response were developed by correlating the peak response acceleration at the roof level with identified frequencies and damping ratios. A general trend of decreasing frequencies was observed with increased level of shaking. To simulate the behavior of the building, a three dimensional finite element model (FEM) was developed. To incorporate real in-situ conditions, soil underneath the foundation and around the building was modeled using spring elements and non-structural components (claddings and partitions) were also included. The developed FEM was then calibrated using a sensitivity based model updating technique taking into account soil flexibility and non-structural components as updating parameters. It was concluded from the investigation that knowledge of the variation of seismic response of buildings is necessary to better understand their behavior during earthquakes, and also that the participation of soil and non-structural components is significant towards the seismic response of the building and these should be considered in models to simulate the real behavior.

The renal impulse response function (renal IRF) is the time-activity curve measured over one kidney after injection of a radiopharmaceutical into the renal artery. If the tracer is injected intravenously, it is possible to compute the renal IRF by deconvolving the kidney curve with a blood curve. In previous work we demonstrated that the computed IRF is in good agreement with measurements made after injection into the renal artery. The goal of the present work is the analysis of the effect of sampling errors and the influence of extra-renal activity. The sampling error is only important for the first point of the plasma curve and yields an ill-conditioned inverse P⁻¹. The addition of 50 computed renal IRFs demonstrated that the first three points show a larger variability due to incomplete mixing of the tracer. These points should thus not be included in the smoothing process. Subtraction of non-renal activity does not appreciably modify the shape of the renal IRF. The mean transit time and the time to half value are almost independent of non-renal activity and seem to be the parameters of choice
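
The intravenous computation can be sketched as a discrete deconvolution: the kidney curve is the convolution of the blood curve with the IRF, so the IRF is recovered by inverting the convolution matrix (a generic sketch with toy curves, not the paper's procedure; note that a near-zero first blood sample makes the inversion ill-conditioned, which is the sampling-error issue discussed above):

```python
import numpy as np

# Recover an impulse response h from an input (blood) curve b and an
# output (kidney) curve k, where k = b * h (discrete convolution).
def deconvolve(b, k):
    n = len(b)
    B = np.zeros((n, n))
    for i in range(n):
        B[i, : i + 1] = b[i::-1]       # lower-triangular Toeplitz of b
    return np.linalg.solve(B, k)       # ill-conditioned if b[0] ~ 0

h_true = np.exp(-0.3 * np.arange(10))                       # toy IRF
b = np.array([1.0, 0.8, 0.5, 0.3, 0.2, 0.1, 0.1, 0.05, 0.05, 0.02])
k = np.convolve(b, h_true)[:10]        # simulated kidney curve
h = deconvolve(b, k)                   # recovers h_true (up to round-off)
```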

A program entitled FRP Mk 1, for computing the frequency response of a linear system, with transport delays, has been developed previously. The present report considers the minimisation of time and storage requirements. In particular, if the system is described by a set of first order differential and algebraic equations, some variables, specified by the programmer may be eliminated by the computer. The method is incorporated in the KDF 9/EGDON code FRP Mk 2, and includes special non-numeric, compiler subroutines for input of the equations and other data in a simple form orientated towards the analyst. The input scheme used for the equations is compatible with that used for the pole-zero, or transfer function program, ZIP so that the same card-deck may be used for data entry in both codes. The code FRP Mk 2 was designed to be used for the analysis of nuclear reactor power systems, but is equally applicable to most forms of process plant, especially chemical plant. (author)

In order to optimize nuclear fuel utilization, as far as irradiation and storage are concerned, the Research and Development Division of Electricite de France (EDF) developed a fast and accurate software tool that simulates a fuel assembly's life from its inside-reactor stay to the final repository: STRAPONTIN. The discrepancies between reference calculations and STRAPONTIN are generally smaller than 5%. Moreover, the low calculation time makes it possible to couple STRAPONTIN to any large code in order to widen its scope without impairing its CPU time. (authors)

Gait and balance impairments may increase the risk of falls, the leading cause of accidental death in the elderly population. Fall-related injuries constitute a serious public health problem associated with high costs for society as well as human suffering. A rapid step is the most important protective postural strategy, acting to recover equilibrium and prevent a fall from initiating. It can arise from large perturbations, but also frequently as a consequence of volitional movements. We propose to use a novel water-based training program which includes specific perturbation exercises that will target the stepping responses that could potentially have a profound effect in reducing risk of falling. We describe the water-based balance training program and a study protocol to evaluate its efficacy (Trial registration number #NCT00708136). The proposed water-based training program involves use of unpredictable, multi-directional perturbations in a group setting to evoke compensatory and volitional stepping responses. Perturbations are made by pushing slightly the subjects and by water turbulence, in 24 training sessions conducted over 12 weeks. Concurrent cognitive tasks during movement tasks are included. Principles of physical training and exercise including awareness, continuity, motivation, overload, periodicity, progression and specificity were used in the development of this novel program. Specific goals are to increase the speed of stepping responses and improve the postural control mechanism and physical functioning. A prospective, randomized, cross-over trial with concealed allocation, assessor blinding and intention-to-treat analysis will be performed to evaluate the efficacy of the water-based training program. A total of 36 community-dwelling adults (age 65-88) with no recent history of instability or falling will be assigned to either the perturbation-based training or a control group (no training). Voluntary step reaction times and postural stability …

No effective treatment has been developed for bone-metastatic breast cancer. We found 3 cases with clinical complete response (cCR) of the bone metastasis and longer overall survival of the retrospectively examined cohort treated comprehensively including autologous formalin-fixed tumor vaccine (AFTV). AFTV was prepared individually for each patient from their own formalin-fixed and paraffin-embedded breast cancer tissues. Three patients maintained cCR status of the bone metastasis for 17 months or more. Rate of cCR for 1 year or more appeared to be 15% (3/20) after comprehensive treatments including AFTV. The median overall survival time (60.0 months) and the 3- to 8-year survival rates after diagnosis of bone metastasis were greater than those of historical control cohorts in Japan (1988-2002) and in the nationwide population-based cohort study of Denmark (1999-2007). Bone-metastatic breast cancer may be curable after comprehensive treatments including AFTV, although larger scale clinical trial is required.

This is the third in a series of documents developed by the National Training and Operational Technology Center describing operational control procedures for the activated sludge process used in wastewater treatment. This document deals with the calculation procedures associated with a step-feed process. Illustrations and examples are included to…

By measuring neutron fluxes at different locations throughout a core, it is possible to derive the power-density profile P_k (W·cm⁻³) at an axial depth z of fuel rod k. Micro-pocket fission detectors (MPFDs) have been fabricated to perform such in-core neutron flux measurements. The purpose of this study is to develop a mathematical model to obtain axial power-density distributions in the fuel rods from the in-core responses of the MPFDs
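
The link between a measured flux and local power density rests on the textbook relation P = w_f Σ_f φ (energy per fission times macroscopic fission cross section times flux); a minimal sketch with illustrative values, not the study's model:

```python
# Local fission power density from a thermal-flux measurement:
#   P = w_f * Sigma_f * phi   (textbook relation; values illustrative).
W_PER_FISSION = 200e6 * 1.602e-19   # ~200 MeV per fission, in joules

def power_density(phi, sigma_f_macro):
    """phi: neutron flux (n/cm^2/s); sigma_f_macro: macroscopic
    fission cross section (1/cm). Returns power density in W/cm^3."""
    return W_PER_FISSION * sigma_f_macro * phi

# Illustrative: phi = 1e13 n/cm^2/s, Sigma_f = 0.1 /cm -> ~32 W/cm^3.
P = power_density(phi=1e13, sigma_f_macro=0.1)
```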

Metallized racetrack vacuum chambers will be used in the pulsed magnets of the Austrian cancer therapy and research facility, MedAustron. It is important that the metallization does not unduly degrade field rise and fall times or the flattop of the field pulse in the kicker magnets. This was of particular concern for a tune kicker magnet, which has a specified rise and fall time of 100 ns. The impact of the metallization, upon the transient field response, has been studied using Finite Element Method (FEM) simulations: the dependency of the field response to the metallization thickness and resistivity are presented in this paper and formulae for the field response, for a ramped transient excitation current, are given. An equivalent circuit for the metallization allows the effect of an arbitrary excitation to be studied, with a circuit simulator, and the circuit optimized. Furthermore, results of simulations of the effect of a magnetic brazing collar, located between the ceramic vacuum chamber and flange, of t...

Compliance with clinical practice guidelines for sepsis management has been low. The objective of our study was to describe the results of implementing a multifaceted intervention including an electronic alert (e-alert) with a sepsis response team (SRT) on the outcome of patients with sepsis and septic shock presenting to the emergency department. This was a pre-post two-phased implementation study that consisted of a pre-intervention phase (January 01, 2011-September 24, 2012), intervention phase I (multifaceted intervention including e-alert, September 25, 2012-March 03, 2013) and intervention phase II, when the SRT was added (March 04, 2013-October 30, 2013), in a 900-bed tertiary-care academic hospital. We recorded baseline characteristics and processes of care in adult patients presenting with sepsis or septic shock. The primary outcome measure was hospital mortality. Secondary outcomes were the need for mechanical ventilation and length of stay in the intensive care unit and in the hospital. After implementing the multifaceted intervention including e-alert and SRT, cases were identified with less severe clinical and laboratory abnormalities and the processes of care improved. When adjusted for propensity score, the interventions were associated with reduction in hospital mortality [for intervention phase II compared to pre-intervention: adjusted odds ratio (aOR) 0.71, 95% CI 0.58-0.85, p = 0.003], reduction in the need for mechanical ventilation (aOR 0.45, 95% CI 0.37-0.55, p …), and reduction in hospital mortality and LOS.

FORTRAN computer subroutines stemming from requirements to process state-variable system equations for systems of high order are presented. They find the characteristic equation of a matrix using the method of Danilevsky, the number of roots with positive real parts using the Routh-Hurwitz alternate formulation, convert a state-variable system description to a Laplace transfer function using the method of Bollinger, and evaluate that transfer function to obtain its frequency response. A sample problem is presented to demonstrate use of the subroutines.
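
The final two steps, evaluating the transfer function and obtaining its frequency response, can be sketched with a Horner-scheme polynomial evaluation at s = jω (function names and the example system are illustrative, not the FORTRAN routines' interfaces):

```python
import cmath
import math

def polyval(coeffs, s):
    """Horner evaluation; coeffs highest power first,
    e.g. [1, 3, 2] -> s^2 + 3s + 2."""
    acc = 0j
    for c in coeffs:
        acc = acc * s + c
    return acc

def freq_point(num, den, omega):
    """Gain and phase (degrees) of G(s) = N(s)/D(s) at s = j*omega."""
    G = polyval(num, 1j * omega) / polyval(den, 1j * omega)
    return abs(G), math.degrees(cmath.phase(G))

# Example: G(s) = 2 / (s^2 + 3s + 2), DC gain 1, stable poles -1, -2.
gain0, _ = freq_point([2.0], [1.0, 3.0, 2.0], omega=0.0)
```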

Presented here is an application of the Response Matrix (RM) method for adjoint discrete ordinates (S_N) problems in slab geometry applied to energy-dependent source-detector problems. The adjoint RM method is free from spatial truncation errors, as it generates numerical results for the adjoint angular fluxes in multilayer slabs that agree with the numerical values obtained from the analytical solution of the energy multigroup adjoint S_N equations. Numerical results are given for two typical source-detector problems to illustrate the accuracy and the efficiency of the offered RM computer code. (author)

The sensitivities (in m) of bare LR115 detectors and detectors in diffusion chambers to the ²²²Rn and ²²⁰Rn chains are calculated by the Monte Carlo method. The partial sensitivities of bare detectors to the ²²²Rn chain are larger than those to the ²²⁰Rn chain, which is due to the higher energies of alpha particles in the ²²⁰Rn chain and the upper energy limit for detection for the LR115 detector. However, the total sensitivities are approximately equal because ²²⁰Rn is always in equilibrium with its first progeny, which is not the case for the ²²²Rn chain. The total sensitivity of bare LR115 detectors to the ²²²Rn chain depends linearly on the equilibrium factor. The overestimation in ²²²Rn measurements with bare detectors caused by ²²⁰Rn in air can reach 10% in normal environmental conditions. An analytical relationship between the equilibrium factor and the ratio between track densities on the bare detector and the detector enclosed in a chamber is given in the last part of the paper. This ratio is also affected by ²²⁰Rn, which can disturb the determination of the equilibrium factor
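
The linear dependence on the equilibrium factor means a calibration of the form ρ_bare/ρ_chamber = a + b·F can be inverted to estimate F; a sketch with hypothetical coefficients a and b (the paper's fitted relationship is not reproduced here):

```python
# Invert an assumed linear calibration between the track-density ratio
# (bare detector / detector in chamber) and the equilibrium factor F.
# Coefficients a, b are hypothetical placeholders, not fitted values.
def equilibrium_factor(ratio, a=0.5, b=2.0):
    """Estimate F from ratio = a + b * F."""
    return (ratio - a) / b

F = equilibrium_factor(ratio=1.3)
```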

National Oceanic and Atmospheric Administration, Department of Commerce — Declination is calculated using the current International Geomagnetic Reference Field (IGRF) model. Declination is calculated using the current World Magnetic Model...

The overall objective of the work scope covered by this technical work plan (TWP) is to develop new damage abstractions for the seismic scenario class in total system performance assessment (TSPA). The new abstractions will be based on a new set of waste package and drip shield damage calculations in response to vibratory ground motion and fault displacement. The new damage calculations, which are collectively referred to as damage models in this TWP, are required to represent recent changes in waste form packaging and in the regulatory time frame. The new damage models also respond to comments from the Independent Validation Review Team (IVRT) postvalidation review of the draft TSPA model regarding performance of the drip shield and to an Additional Information Need (AIN) from the U.S. Nuclear Regulatory Commission (NRC).

Relativistic density functional calculations were carried out on several nickel toroid mercaptides of the general formula [Ni(μ-SR)(2)](n), with the aim to characterize and analyze their stability and magnetic response properties, in order to gain more insight into their stabilization and size-dependent behavior. The Ni-ligand interaction has been studied by means of projected density of states and energy decomposition analysis, which denotes its stabilizing character. The graphical representation of the response to an external magnetic field is applied for the very first time taking into account the spin-orbit term. This map allows one to clearly characterize the magnetic behavior inside and in the closeness of the toroid structure, showing the presence of paratropic ring currents inside the Ni(n) ring and, by contrast, diatropic currents confined in each Ni(2)S(2) motif, denoting an aromatic behavior (in terms of magnetic criteria). The calculated data suggest that the Ni(2)S(2) moiety can be regarded as a stable building block, which can afford several toroid structures of different nuclearities, in agreement with that reported in the experimental literature. In addition, the effects of the relativistic treatment on the magnetic response properties of these lighter compounds are denoted by comparing nonrelativistic, scalar relativistic, and scalar plus spin-orbit relativistic treatments, showing their active, though not pronounced, role.

An existing driver-vehicle model with neuromuscular dynamics is improved in the areas of cognitive delay, intrinsic muscle dynamics and alpha-gamma co-activation. The model is used to investigate the influence of steering torque feedback and neuromuscular dynamics on the vehicle response to lateral force disturbances. When steering torque feedback is present, it is found that the longitudinal position of the lateral disturbance has a significant influence on whether the driver's reflex response reinforces or attenuates the effect of the disturbance. The response to angle and torque overlay inputs to the steering system is also investigated. The presence of the steering torque feedback reduced the disturbing effect of torque overlay and angle overlay inputs. Reflex action reduced the disturbing effect of a torque overlay input, but increased the disturbing effect of an angle overlay input. Experiments on a driving simulator showed that measured handwheel angle response to an angle overlay input was consistent with the response predicted by the model with reflex action. However, there was significant intra- and inter-subject variability. The results highlight the significance of a driver's neuromuscular dynamics in determining the vehicle response to disturbances.

SRD 166 MEMS Calculator (Web, free access) This MEMS Calculator determines the following thin film properties from data taken with an optical interferometer or comparable instrument: a) residual strain from fixed-fixed beams, b) strain gradient from cantilevers, c) step heights or thicknesses from step-height test structures, and d) in-plane lengths or deflections. Then, residual stress and stress gradient calculations can be made after an optical vibrometer or comparable instrument is used to obtain Young's modulus from resonating cantilevers or fixed-fixed beams. In addition, wafer bond strength is determined from micro-chevron test structures using a material test machine.
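The stress step the description mentions is plain uniaxial Hooke's law applied to the optically measured strain once Young's modulus is known. A minimal sketch of that arithmetic (the numbers are illustrative assumptions, not values from the SRD 166 tool):

```python
# Hedged sketch: residual stress from optically measured residual strain via
# uniaxial Hooke's law, sigma = E * epsilon, once Young's modulus is known
# from resonating-beam measurements.

def residual_stress(young_modulus_pa: float, residual_strain: float) -> float:
    """Uniaxial Hooke's law: sigma = E * epsilon (Pa)."""
    return young_modulus_pa * residual_strain

# Illustrative: a polysilicon-like modulus of 160 GPa and a compressive
# residual strain of -1e-4 gives a compressive stress of about -16 MPa.
sigma = residual_stress(160e9, -1e-4)
print(f"residual stress: {sigma / 1e6:.1f} MPa")
```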

It is widely believed that innate immune responses to Borrelia burgdorferi (Bb) are primarily triggered by the spirochete's outer membrane lipoproteins signaling through cell surface TLR1/2. We recently challenged this notion by demonstrating that phagocytosis of live Bb by peripheral blood mononuclear cells (PBMCs) elicited greater production of proinflammatory cytokines than did equivalent bacterial lysates. Using whole genome microarrays, we show herein that, compared to lysates, live spirochetes elicited a more intense and much broader transcriptional response involving genes associated with diverse cellular processes; among these were IFN-beta and a number of interferon-stimulated genes (ISGs), which are not known to result from TLR2 signaling. Using isolated monocytes, we demonstrated that cell activation signals elicited by live Bb result from cell surface interactions and uptake and degradation of organisms within phagosomes. As with PBMCs, live Bb induced markedly greater transcription and secretion of TNF-alpha, IL-6, IL-10 and IL-1beta in monocytes than did lysates. Secreted IL-18, which, like IL-1beta, also requires cleavage by activated caspase-1, was generated only in response to live Bb. Pro-inflammatory cytokine production by TLR2-deficient murine macrophages was only moderately diminished in response to live Bb but was drastically impaired against lysates; TLR2 deficiency had no significant effect on uptake and degradation of spirochetes. As with PBMCs, live Bb was a much more potent inducer of IFN-beta and ISGs in isolated monocytes than were lysates or a synthetic TLR2 agonist. Collectively, our results indicate that the enhanced innate immune responses of monocytes following phagocytosis of live Bb have both TLR2-dependent and -independent components and that the latter induce transcription of type I IFNs and ISGs.

Induction chemotherapy and concurrent chemoradiation for responders, or immediate surgery for non-responders, is an effective treatment strategy for head and neck squamous cell carcinoma (HNSCC) of the larynx and oropharynx. Biomarkers that predict outcome would be valuable in selecting patients for therapy. In this study, the presence and titer of high-risk human papilloma virus (HPV) and expression of epidermal growth factor receptor (EGFR) in pre-treatment biopsies, as well as smoking and gender, were examined in oropharynx cancer patients enrolled in an organ-sparing trial. HPV16 copy number was positively associated with response to therapy and with overall and disease-specific survival, whereas EGFR expression, current or former smoking behavior, and female gender (in this cohort) were associated with poor response and poor survival in multivariate analysis. Smoking cessation and strategies to target EGFR may be useful adjuncts to therapy to improve outcome in the cases with the poorest biomarker profile.

Error-prone and error-free DNA damage repair responses that are induced in most bacteria after exposure to various chemicals, antibiotics or radiation sources were surveyed across the genus Acinetobacter. The error-prone SOS mutagenesis response occurs when DNA damage induces a cell’s umuDC- or dinP-encoded error-prone polymerases. The model strain Acinetobacter baylyi ADP1 possesses an unusual, regulatory umuD allele (umuDAb) with an extended 5′ region and only incomplete fragments of umuC. Diverse Acinetobacter species were investigated for the presence of umuDC and their ability to conduct UV-induced mutagenesis. Unlike ADP1, most Acinetobacter strains possessed multiple umuDC loci containing either umuDAb or a umuD allele resembling that of Escherichia coli. The nearly omnipresent umuDAb allele was the ancestral umuD in Acinetobacter, with horizontal gene transfer accounting for over half of the umuDC operons. Despite multiple umuD(Ab)C operons in many strains, only three species conducted UV-induced mutagenesis: Acinetobacter baumannii, Acinetobacter ursingii and Acinetobacter beijerinckii. The type of umuDC locus or mutagenesis phenotype a strain possessed was not correlated with its error-free response of survival after UV exposure, but similar diversity was apparent. The survival of 30 Acinetobacter strains after UV treatment ranged over five orders of magnitude, with the Acinetobacter calcoaceticus–A. baumannii (Acb) complex and haemolytic strains having lower survival than non-Acb or non-haemolytic strains. These observations demonstrate that a genus can possess a range of DNA damage response mechanisms, and suggest that DNA damage-induced mutation could be an important part of the evolution of the emerging pathogens A. baumannii and A. ursingii. PMID:22117008

A simplified three-dimensional Monte Carlo simulation model of in vitro tumor growth and response to fractionated radiotherapeutic schemes is presented in this paper. The paper aims at both the optimization of radiotherapy and the provision of insight into the biological mechanisms involved in tumor development. The basics of the modeling philosophy of Duechting have been adopted and substantially extended. The main processes taken into account by the model are the transitions between the cell cycle phases, the diffusion of oxygen and glucose, and the cell survival probabilities following irradiation. Specific algorithms satisfactorily describing tumor expansion and shrinkage have been applied, whereas a novel approach to the modeling of the tumor response to irradiation has been proposed and implemented. High-performance computing systems in conjunction with Web technologies have coped with the particularly high computer memory and processing demands. A visualization system based on the MATLAB software package and the virtual-reality modeling language has been employed. Its utilization has led to a spectacular representation of both the external surface and the internal structure of the developing tumor. The simulation model has been applied to the special case of small cell lung carcinoma in vitro irradiated according to both the standard and accelerated fractionation schemes. A good qualitative agreement with laboratory experience has been observed in all cases. Accordingly, the hypothesis that advanced simulation models for the in silico testing of tumor irradiation schemes could substantially enhance the radiotherapy optimization process is further strengthened. Currently, our group is investigating extensions of the presented algorithms so that efficient descriptions of the corresponding clinical (in vivo) cases are achieved.

The MCNP5-beta code was used to calculate the ¹⁰B(n,α)⁷Li reaction rate and the neutron energy response matrix of a neutron spectrometer consisting of a water sphere of variable diameter and a BF₃ detector, using point and disk ²⁴¹Am-Be neutron sources. The reaction rate and the response matrix for the disk neutron source show higher values than those obtained with the point neutron source. The response matrix for the disk neutron source in the energy range from 4.14×10⁻⁷ to 11.09 MeV shows a maximum for the sphere of 12-inch diameter, whereas the response calculated with the point neutron source was still increasing under this condition. The calculated neutron energy responses for a disk neutron source agreed well with published results. (author)

Gatekeeper training is a widely used prevention method for training local community members to recognize the signs and symptoms of suicide and to support appropriate referrals for mental health. Training community "gatekeepers" is critical for increasing access to care for those youth who are in need, as youth often turn first to family and friends for help. This study examines the outcomes at pre-training, post-training, and 3-month follow-up for American Indian and Alaska Native (AI/AN) students, teachers, and faculty completing online role-play gatekeeper training simulations. The simulations use emotionally responsive avatars that have memory and personality, and respond like real students experiencing psychological distress in realistic situations. Data from 86 matched pairs showed significant increases in self-identified gatekeeper attitudes of preparedness, likelihood (behavioral intent) and self-efficacy to engage in helping behaviors (i.e., identifying those in psychological distress, talking to them, and supporting a referral for services) 3 months after training. This study provides promising evidence for use of online avatar-based training with AI/AN communities and has the potential to address many of the current challenges with gatekeeper training in Indian Country.

Field data showing the daily patterns in body temperature (T(b)) of kangaroos in hot, arid conditions, with and without water, indicate the use of adaptive heterothermy, i.e. large variation in T(b). However, daily T(b) variation was greater in the Eastern Grey Kangaroo (Macropus giganteus), a species of mesic origin, than in the desert-adapted Red Kangaroo (Macropus rufus). The nature of such responses was studied by an examination of their thermal adjustments to dehydration in thermoneutral temperatures (25 degrees C) and at high temperature (45 degrees C) via the use of tame, habituated animals in a climate chamber. At the same level of dehydration M. rufus was less impacted, in that its T(b) changed less than that for M. giganteus while it evaporated significantly less water. At a T(a) of 45 degrees C with water restriction T(b) reached 38.9 +/- 0.3 degrees C in M. rufus compared with 40.2 +/- 0.4 degrees C for M. giganteus. The ability of M. rufus to reduce dry conductance in the heat while dehydrated was central to its superior thermal control. While M. giganteus showed more heterothermy, i.e. its T(b) varied more, this seemed due to a lower tolerance of dehydration in concert with a strong thermal challenge. The benefits of heterothermy to M. giganteus were also limited because of thermal (Q(10)) effects on metabolic heat production and evaporative heat loss. The impacts of T(b) on heat production were such that low morning T(b)'s seen in the field may be associated with energy saving, as well as water saving. Kangaroos respond to dehydration and heat similarly to many ungulates, and it is apparent that the accepted notions about adaptive heterothermy in large desert mammals may need revisiting.

Bacteria growing in biofilms are physiologically heterogeneous, due in part to their adaptation to local environmental conditions. Here, we characterized the local transcriptome responses of Pseudomonas aeruginosa growing in biofilms by using a microarray analysis of isolated biofilm subpopulations. The results demonstrated that cells at the top of the biofilms had high mRNA abundances for genes involved in general metabolic functions, while mRNA levels for these housekeeping genes were low in cells at the bottom of the biofilms. Selective green fluorescent protein (GFP) labeling showed that cells at the top of the biofilm were actively dividing. However, the dividing cells had high mRNA levels for genes regulated by the hypoxia-induced regulator Anr. Slow-growing cells deep in the biofilms had little expression of Anr-regulated genes and may have experienced long-term anoxia. Transcripts for ribosomal proteins were associated primarily with the metabolically active cell fraction, while ribosomal RNAs were abundant throughout the biofilms, indicating that ribosomes are stably maintained even in slowly growing cells. Consistent with these results was the identification of mRNAs for ribosome hibernation factors (the rmf and PA4463 genes) at the bottom of the biofilms. The dormant biofilm cells of a P. aeruginosa Δrmf strain had decreased membrane integrity, as shown by propidium iodide staining. Using selective GFP labeling and cell sorting, we show that the dividing cells are more susceptible to killing by tobramycin and ciprofloxacin. The results demonstrate that in thick P. aeruginosa biofilms, cells are physiologically distinct spatially, with cells deep in the biofilm in a viable but antibiotic-tolerant slow-growth state. PMID:22343293

This report describes NNC's development of a demonstration concept called Interact: Visual Display Unit (VDU) displays integrating on-screen control of plant actions. Most plant vendors now propose on-screen control, and it is being included on some plants. The integration of Station Operating Instructions (SOIs) into VDU presentation of plants is developing rapidly. With on-screen control, SOIs can be displayed with control targets able to initiate plant control directly, as called for in the SOIs. Interact displays information and control options, using a cursor to simulate on-screen display and plant control. The displays show a method which integrates soft control and SOI information into a single unified presentation. They simulate the SOI for an accident, on-screen, with simulated inserted plant values.

The effect of heat flux and of different system parameters on critical density is reviewed in order to give an initial view of the value of several parameters. A thorough analysis of different equations for calculating burnout in steam-water flows in uniformly heated tubes, annular and rectangular channels, and rod bundles is carried out. The effect of heat flux density distribution and flow twisting on burnout, and the determination of margins with respect to burnout, are commented on. [ru]

Microalgae of the genus Nannochloropsis are capable of accumulating triacylglycerols (TAGs) when exposed to nutrient limitation (in particular, nitrogen [N]) and are therefore considered promising organisms for biodiesel production. Here, after nitrogen removal from the medium, Nannochloropsis gaditana cells showed extensive triacylglycerol accumulation (38% TAG on a dry weight basis). Triacylglycerols accumulated during N deprivation harbored signatures, indicating that they mainly stemmed from freshly synthesized fatty acids, with a small proportion originating from a recycling of membrane glycerolipids. The amount of chloroplast galactoglycerolipids, which are essential for the integrity of thylakoids, decreased, while their fatty acid composition appeared to be unaltered. In starved cells, galactolipids were kept at a level sufficient to maintain chloroplast integrity, as confirmed by electron microscopy. Consistently, N-starved Nannochloropsis cells contained less photosynthetic membranes but were still efficiently performing photosynthesis. N starvation led to a modification of the photosynthetic apparatus with a change in pigment composition and a decrease in the content of all the major electron flow complexes, including photosystem II, photosystem I, and the cytochrome b6f complex. The photosystem II content was particularly affected, leading to the inhibition of linear electron flow from water to CO2. Such a reduction, however, was partially compensated for by activation of alternative electron pathways, such as cyclic electron transport. Overall, these changes allowed cells to modify their energetic metabolism in order to maintain photosynthetic growth. PMID:23457191

Textbook on the design of large panel buildings, including rules on robustness and a method for producing the statical documentation.

EPA’s National Stormwater Calculator (SWC) is a desktop application that estimates the annual amount of rainwater and frequency of runoff from a specific site anywhere in the United States (including Puerto Rico).

HOW THIS BOOK DIFFERS. This book is about the calculus. What distinguishes it, however, from other books is that it uses the pocket calculator to illustrate the theory. A computation that requires hours of labor when done by hand with tables is quite inappropriate as an example or exercise in a beginning calculus course. But that same computation can become a delicate illustration of the theory when the student does it in seconds on his calculator. Furthermore, the student's own personal involvement and easy accomplishment give him reassurance and encouragement. The machine is like a microscope, and its magnification is a hundred millionfold. We shall be interested in limits, and no stage of numerical approximation proves anything about the limit. However, the derivative of f(x) = 67.89^x, for instance, acquires real meaning when a student first appreciates its values as numbers, as limits. A quick example is 1.1¹⁰, 1.01¹⁰⁰, 1.001¹⁰⁰⁰, .... Another example is t = 0.1, 0.01, in the functio...
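The numerical experiment the passage alludes to is easy to reproduce; a minimal sketch of the sequence 1.1¹⁰, 1.01¹⁰⁰, 1.001¹⁰⁰⁰, ..., i.e. (1 + 1/n)^n, which the calculator makes tangible long before the limit e is proved:

```python
# The "calculator as microscope" idea: watching (1 + 1/n)**n settle toward
# the limit e = 2.718281828... as n grows by powers of ten.
for k in range(1, 7):
    n = 10 ** k
    print(f"(1 + 1/{n})**{n} = {(1 + 1 / n) ** n:.6f}")
# The printed values increase toward e; no single line proves the limit,
# but the trend gives the symbol real numerical meaning.
```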

Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during the operating period, with the purpose of improving safety or reliability. Due to plant complexity and safety and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed over the last 20 years and have to be continuously refined to meet the growing requirements. Two different areas of application were analysed. In structural reliability, probabilistic approaches have been introduced in some cases for the calculation of the reliability of structures or components. A new computer program has been developed based upon numerical integration in several variables. In systems reliability, Monte Carlo simulation programs are used, especially in the analysis of very...

The joint working group of ICRP/ICRU is advancing work on the revision of ICRP Publication 51 by investigating the data related to radiation protection. In order to introduce the 1990 recommendation, it has been demanded to carry out calculations for neutrons, photons and electrons. As for electrons, EURADOS WG4 (Numerical Dosimetry) rearranged the data to be calculated at the meeting held at PTB Braunschweig in June 1992, and the question and request were presented by Dr. J.L. Chartier, the responsible person, to the researchers likely to undertake electron transport Monte Carlo calculations. The author also carried out the requested calculation, as it was a good chance to make a mutual comparison among various computation codes for electron transport calculation. The content that the WG requested to calculate was the absorbed dose at depth d mm when a parallel electron beam enters at angle α into flat-plate phantoms of PMMA, water and ICRU four-element tissue, placed in vacuum. The calculation was carried out with the versatile electron-photon shower Monte Carlo code EGS4. As results, depth dose curves and the dependence of absorbed dose on electron energy, incident angle and material are reported. The subjects to be investigated are pointed out. (K.I.)

Radiological dosage principles, as well as methods for calculating external and internal dose rates following dispersion and deposition of radioactive materials in the atmosphere, are described. Emphasis has been placed on analytical solutions that are appropriate for hand calculations. In addition, the methods for calculating dose rates from ingestion are discussed. A brief description of several computer programs is included for information on radionuclides. There has been no attempt to be comprehensive; only a sampling of programs has been selected to illustrate the variety available.

The current methods of quantum chemical calculations will be reviewed. The emphasis will be on the accuracy that can be achieved with these methods. The basis set requirements and computer resources for the various methods will be discussed. The utility of the methods will be illustrated with some examples, which include the calculation of accurate bond energies for SiF$_n$ and SiF$_n^+$ and the modeling of chemical data storage.

An apparatus for measuring the velocity of a vehicle traveling between first and second measured points. The apparatus includes a cylindrical housing having an open top for receiving a transparent disk. Indicia representing speed calibrations are circumferentially spaced adjacent the outer perimeter of the disk. A stopwatch is carried in the housing below the disk and has a rotatable hand which rotates at a predetermined rate under the indicia. A lamp is carried below the stopwatch for illuminating the indicia carried on the transparent disk. The stopwatch is started when the vehicle passes the first reference point and stopped when the vehicle passes the second reference point. Thus, when the hand is stopped, it points to the calibrated indicia on the disk indicating the velocity of the vehicle.
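The dial indicia encode nothing more than distance divided by elapsed time. A minimal sketch of that arithmetic, where the quarter-mile trap length is an illustrative assumption and not a value from the patent text:

```python
# Hedged sketch: the speed-trap arithmetic that the calibrated disk performs
# mechanically. Speed is simply trap length over elapsed stopwatch time.

def speed_mph(trap_miles: float, elapsed_s: float) -> float:
    """Average speed in mph over a trap of trap_miles covered in elapsed_s seconds."""
    return trap_miles * 3600.0 / elapsed_s  # 3600 s per hour

# Illustrative: a vehicle covering a quarter mile in 15 s averages 60 mph,
# so the stopped hand would point to the "60" mark on the disk.
print(speed_mph(0.25, 15.0))  # 60.0
```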

Restricted-spin coupled-cluster single-double plus perturbative triple excitation {RCCSD(T)} calculations were carried out on the X (2)B(1) and A (2)A(1) states of AsH(2) employing the fully relativistic small-core effective core potential (ECP10MDF) for As and basis sets of up to the augmented correlation-consistent polarized valence quintuple-zeta (aug-cc-pV5Z) quality. Minimum-energy geometrical parameters and relative electronic energies were evaluated, including contributions from extrapolation to the complete basis set limit and from outer core correlation of the As 3d(10) electrons employing additional tight 4d3f2g2h functions designed for As. In addition, simplified, explicitly correlated CCSD(T)-F12 calculations were also performed employing different atomic orbital basis sets of up to aug-cc-pVQZ quality, and associated complementary auxiliary and density-fitting basis sets. The best theoretical estimate of the relative electronic energy of the A (2)A(1) state of AsH(2) relative to the X (2)B(1) state including zero-point energy correction (T(0)) is 19,954(32) cm(-1), which agrees very well with available experimental T(0) values of 19,909.4531(18) and 19,909.4910(17) cm(-1) obtained from recent laser induced fluorescence and cavity ringdown absorption spectroscopic studies. In addition, potential energy functions (PEFs) of the X (2)B(1) and A (2)A(1) states of AsH(2) were computed at different RCCSD(T) and CCSD(T)-F12 levels. These PEFs were used in variational calculations of anharmonic vibrational wave functions, which were then utilized to calculate Franck-Condon factors (FCFs) between these two states, using a method which includes allowance for anharmonicity and Duschinsky rotation. The A(0,0,0)-X single vibronic level (SVL) emission spectrum of AsH(2) was simulated using these computed FCFs. Comparison between simulated and available experimental vibrationally resolved spectra of the A(0,0,0)-X SVL emission of AsH(2), which consist essentially of

Cold protective clothing was studied in 2 European Union projects. The objectives were (a) to examine different insulation calculation methods as measured on a manikin (serial or parallel), for the prediction of cold stress (IREQ); (b) to consider the effects of cold protective clothing on metabolic

The first-order hyperpolarizability, β, has been calculated for a group of marine natural products, the makaluvamines. These compounds possess a common cationic pyrroloiminoquinone structure that is substituted to varying degrees. Calculations at the MP2 level indicate that makaluvamines possessing phenolic side chains conjugated with the pyrroloiminoquinone moiety display large β values, while breaking this conjugation leads to a dramatic decrease in the calculated hyperpolarizability. This is consistent with a charge-transfer donor-π-acceptor (D-π-A) structure type, characteristic of nonlinear optical chromophores. Dynamic hyperpolarizabilities calculated using resonance-convergent time-dependent density functional theory coupled to polarizable continuum model (PCM) solvation suggest that significant resonance enhancement effects can be expected for incident radiation with wavelengths around 800 nm. The results of the current work suggest that the pyrroloiminoquinone moiety represents a potentially useful new chromophore subunit, in particular for the development of molecular probes for biological imaging. The introduction of solvent-solute interactions in the theory is conventionally made in a density matrix formalism, and the present work will provide detailed account of the approximations that need to be introduced in wave function theory and our program implementation. The program implementation as such is achieved by a mere combination of existing modules from previous developments, and it is here only briefly reviewed.

The results are presented of a neutron cross section sensitivity/uncertainty analysis performed in a complicated 2D model of the NET shielding blanket design inside the ITER torus design, surrounded by the cryostat/biological shield as planned for ITER. The calculations were performed with a code system developed at ECN Petten, with which sensitivity/uncertainty calculations become relatively simple. In order to check the deterministic neutron transport calculations (performed with DORT), calculations were also performed with the Monte Carlo code MCNP. Care was taken to model the 2.0 cm wide gaps between two blanket segments, as the neutron flux behind the vacuum vessel is largely determined by neutrons streaming through these gaps. The resulting neutron flux spectra are in excellent agreement up to the end of the cryostat. It is noted that at this position the attenuation of the neutron flux is about 11 orders of magnitude. The uncertainty in the energy-integrated flux at the beginning of the vacuum vessel and at the beginning of the cryostat was determined in the calculations. The uncertainty appears to be strongly dependent on the exact geometry: if the gaps are filled with stainless steel, the neutron spectrum changes strongly, which results in an uncertainty of 70% in the energy-integrated flux at the beginning of the cryostat in the no-gap geometry, compared to an uncertainty of only 5% in the gap geometry. Therefore, it is essential to take into account the exact geometry in sensitivity/uncertainty calculations. Furthermore, this study shows that an improvement of the covariance data is urgently needed in order to obtain reliable estimates of the uncertainties in response parameters in neutron transport calculations. (orig./GL)

Comparisons are presented of analytically predicted and experimental turbulence responses of a wind tunnel model of a DC-10 derivative wing equipped with an active control system. The active control system was designed for the purpose of flutter suppression, but it had the additional benefit of alleviating gust loads (wing bending moment) by about 25%. Comparisons of various wing responses are presented for variations in active control system parameters and tunnel speed. The analytical turbulence responses were obtained using DYLOFLEX, a computer program for dynamic loads analyses of flexible airplanes with active controls. In general, the analytical predictions agreed reasonably well with the experimental data.

The Gilat-Raubenheimer method, simplified to tetrahedron division, is used to calculate the real and imaginary parts of the dynamical response function for electrons. A frequency expansion for the real part is discussed. The Lindhard function is calculated as a test of numerical accuracy...

We calculate the probability ("quenching weight") that a hard parton radiates an additional energy fraction due to scattering in spatially extended QCD matter. This study is based on an exact treatment of finite in-medium path length, it includes the case of a dynamically expanding medium, and it extends to the angular dependence of the medium-induced gluon radiation pattern. All calculations are done in the multiple soft scattering approximation (Baier-Dokshitzer-Mueller-Peigné-Schiff-Zakharov "BDMPS-Z" formalism) and in the single hard scattering approximation (N=1 opacity approximation). By comparison, we establish a simple relation between transport coefficient, Debye screening mass and opacity, for which both approximations lead to comparable results. Together with this paper, a CPU-inexpensive numerical subroutine for calculating quenching weights is provided electronically. To illustrate its applications, we discuss the suppression of hadronic transverse momentum spectra in nucleus-nucleus colli...

Calendar calculation is the ability to quickly name the day of the week on which a given date falls. Previous research has suggested that savant calendar calculation is based on rote memory and the use of rule-based arithmetic skills. The objective of this study was to identify the cognitive processes that distinguish calendar calculation in savant individuals from that in healthy calendar calculators. Savant calendar calculators with autism (ACC, n=3), healthy calendar calculators (HCC, n=3), non-savant subjects with autism (n=6) and healthy calendar calculator laymen (n=18) were included in the study. All participants calculated dates of the present (current month). In addition, ACC and HCC also calculated dates of the past and future 50 years. ACC showed shorter reaction times and fewer errors than HCC and non-savant subjects with autism, and significantly fewer errors than healthy calendar calculator laymen, when calculating dates of the present. Moreover, ACC performed faster and more accurately than HCC on past dates. However, no differences between ACC and HCC were detected for future date calculation. The findings may imply distinct calendar calculation strategies in ACC and HCC, with HCC relying on calendar regularities for all types of dates and an involvement of (rote) memory in ACC when processing dates of the past and the present.
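As a non-savant baseline for the task described above, the weekday of any Gregorian date can be named by rule-based arithmetic alone, for example Zeller's congruence; a minimal sketch:

```python
# Zeller's congruence: a purely arithmetic "calendar calculation" that names
# the weekday of any Gregorian date, illustrating the rule-based strategy
# (as opposed to rote memory) discussed in the abstract.

def weekday(year: int, month: int, day: int) -> str:
    # Zeller treats January and February as months 13 and 14 of the prior year.
    if month < 3:
        month += 12
        year -= 1
    k, j = year % 100, year // 100  # year of century, zero-based century
    h = (day + 13 * (month + 1) // 5 + k + k // 4 + j // 4 + 5 * j) % 7
    # h = 0 corresponds to Saturday in Zeller's convention.
    return ["Saturday", "Sunday", "Monday", "Tuesday",
            "Wednesday", "Thursday", "Friday"][h]

print(weekday(1986, 4, 26))  # Saturday
```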

This paper presents results of Monte Carlo modeling of the SRP-68-01 survey meter used to measure exposure rates near the thyroid glands of persons exposed to radioactivity following the Chernobyl accident. This device was not designed to measure radioactivity in humans. To estimate the uncertainty associated with the measurement results, a mathematical model of the SRP-68-01 survey meter was developed and verified. A Monte Carlo method of numerical simulation of radiation transport has been used to calculate the calibration factor for the device and evaluate its uncertainty. The SRP-68-01 survey meter scale coefficient, an important characteristic of the device, was also estimated in this study. The calibration factors of the survey meter were calculated for 131I, 132I, 133I, and 135I content in the thyroid gland for six age groups of population: newborns; children aged 1 yr, 5 yr, 10 yr, 15 yr; and adults. A realistic scenario of direct thyroid measurements with an “extended” neck was used to calculate the calibration factors for newborns and one-year-olds. Uncertainties in the device calibration factors due to variability of the device scale coefficient, variability in thyroid mass and statistical uncertainty of Monte Carlo method were evaluated. Relative uncertainties in the calibration factor estimates were found to be from 0.06 for children aged 1 yr to 0.1 for 10-yr and 15-yr children. The positioning errors of the detector during measurements deviate mainly in one direction from the estimated calibration factors. Deviations of the device position from the proper geometry of measurements were found to lead to overestimation of the calibration factor by up to 24 percent for adults and up to 60 percent for 1-yr children. The results of this study improve the estimates of 131I thyroidal content and, consequently, thyroid dose estimates that are derived from direct thyroid measurements performed in Belarus shortly after the Chernobyl accident. PMID:22245289

The problems of the dynamic calculation of cooling towers with forced and natural air draft are summarized. The quantities and relations characterizing the simultaneous exchange of momentum, heat and mass during evaporative water cooling by atmospheric air in cooling-tower packings are given. The method of solution is clarified for the calculation of evaporation criteria and thermal characteristics of countercurrent and crosscurrent cooling systems. The procedure for calculating cooling towers and their correction curves is demonstrated, and the effect of operating at constant air number or constant outlet air volume flow on these curves in ventilator cooling towers is assessed. In natural-draft cooling towers, the unevenness of the water and air flow is assessed with respect to its effect on the resulting cooling efficiency of the towers. The calculation of thermal and resistance response curves and cooling curves is demonstrated for hydraulically unevenly loaded towers whose water flow rate is graded radially by 20% across the cross-section of the packing. As demonstrated on a concrete example, flow-rate unevenness of air due to wind acting on the outlet air flow significantly affects the temperatures of the cooled water in natural-draft cooling towers of designs with lower aerodynamic demands, even at wind velocities as low as 2 m/s.

APOLLO is an online, Linux-based plasma calculator. Users can input variables that correspond to their specific plasma, such as ion and electron densities, temperatures, and external magnetic fields. The system is based on a web server where a FastCGI application computes key plasma parameters including frequencies, lengths, velocities, and dimensionless numbers. FastCGI was chosen to overcome security problems caused by Java-based plugins; it also speeds up calculations over PHP-based systems. APOLLO is built upon the Wt library, which turns any web browser into a versatile, fast graphical user interface. All values with units are expressed in SI units except temperature, which is in electron-volts. SI units were chosen over cgs units because of the gradual shift to SI units within the plasma community. APOLLO is intended to be a fast calculator that also provides the user with the equations used to calculate the plasma parameters. The system is intended to be used by undergraduates taking plasma courses as well as by graduate students and researchers who need a quick reference calculation.
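Two of the parameters such a calculator computes can be sketched directly from their textbook definitions, in SI units with temperature in eV (the convention the abstract describes). The function names here are my own illustration, not APOLLO's API.

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
QE   = 1.602176634e-19    # elementary charge, C
ME   = 9.1093837015e-31   # electron mass, kg

def electron_plasma_frequency(n_e: float) -> float:
    """Angular electron plasma frequency (rad/s) for density n_e in m^-3."""
    return math.sqrt(n_e * QE**2 / (EPS0 * ME))

def debye_length(n_e: float, t_e_ev: float) -> float:
    """Electron Debye length (m); temperature given in eV."""
    # kT in joules is t_e_ev * QE, so one QE cancels in the ratio.
    return math.sqrt(EPS0 * t_e_ev * QE / (n_e * QE**2))

# Example: n_e = 1e18 m^-3, T_e = 10 eV
print(electron_plasma_frequency(1e18))   # ~5.6e10 rad/s
print(debye_length(1e18, 10.0))          # ~2.4e-5 m
```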

A pump apparatus includes a particulate pump that defines a passage that extends from an inlet to an outlet. A duct is in flow communication with the outlet. The duct includes a deconsolidator configured to fragment particle agglomerates received from the passage.

Recent results from lattice QCD calculations relevant to particle physics phenomenology are reviewed. They include the calculations of strong coupling constant, quark masses, kaon matrix elements, and D and B meson matrix elements. Special emphasis is on the recent progress in the simulations including dynamical quarks.

The present invention provides an optical modulator based on one or more layers of graphene. In a first exemplary embodiment the optical modulator includes an optical waveguide, a nanoscale oxide spacer adjacent to a working region of the waveguide, and a monolayer graphene sheet adjacent to the spacer. In a second exemplary embodiment, the optical modulator includes at least one pair of active media, where the pair includes an oxide spacer, a first monolayer graphene sheet adjacent to a first side of the spacer, and a second monolayer graphene sheet adjacent to a second side of the spacer, and at least one optical waveguide adjacent to the pair.

Interactions between graphene and the anatase TiO2 (110) surface with and without an oxygen vacancy (VO) are investigated by first-principles calculations. The close but non-destructive contact at the interface facilitates photo-excited electron transfer between graphene and TiO2. Because its work function (WF) is smaller than that of the perfect TiO2 substrate, graphene is typically electron depleted. However, introducing a surface VO decreases the WF of TiO2 markedly, to below that of graphene, which reverses the direction of electron transfer so that electrons accumulate on the graphene sheet. In particular, the evident red shift of the optical absorption edge and the clearly enhanced absorption intensity in the visible region for both combined configurations illustrate the mechanism behind the enhanced photocatalytic performance.

Despite extensive efforts to study the decomposition mechanism of HMX under extreme conditions, an intrinsic understanding of the mechanical and chemical response processes that induce the initial chemical reaction has not yet been achieved. In this work, the microscopic dynamic response and initial decomposition of β-HMX with a (1 0 0) surface and a molecular vacancy under shock conditions were explored by means of the self-consistent-charge density-functional tight-binding (SCC-DFTB) method in conjunction with the multiscale shock technique (MSST). The evolution of various bond lengths and charge transfers was analyzed to explore and understand the initial reaction mechanism of HMX. Our results show that the C-N bond close to the major axis had lower compression sensitivity and higher stretch activity. During the early compression process, charge was transferred mainly from the N-NO2 group along the minor axis and from H atoms to C atoms. The first reaction of HMX was primarily initiated by fission of the molecular ring at the C-N bond close to the major axis. Further breaking of the molecular ring enhanced intermolecular interactions and promoted the cleavage of C-H and N-NO2 bonds. More significantly, the dynamic response behavior clearly depended on the angle between a chemical bond and the shock direction.

OnSite was developed to provide modelers and model reviewers with prepackaged tools ("calculators") for performing site assessment calculations. The philosophy behind OnSite is that the convenience of the prepackaged calculators helps provide consistency for simple calculations,...

Seven activities are presented in this student workbook designed for an exploration of small business ownership and the use of the calculator in this career. Included are simulated situations in which students must use a calculator to compute property taxes; estimate payroll taxes and franchise taxes; compute pricing, approximate salaries,…

BHMcalc provides renditions of the instantaneous circumbinary habitable zone (CHZ) and also calculates BHM properties of the system, including those related to the rotational evolution of the stellar components and the combined XUV and SW fluxes as measured at different distances from the binary. Moreover, it provides numerical results that can be further manipulated and used to calculate other properties.

This appendix provides the complete compilation of responses received to the questionnaire issued in conjunction with the workshop announcements. The responses are provided as received, with changes made only to the formatting. The OECD Nuclear Energy Agency (NEA) Committee on Nuclear Regulatory Activities (CNRA) Working Group on Inspection Practices (WGIP) sponsored the 12th International Workshop on Nuclear Regulatory Inspection Activities. The workshop was hosted by the U.S. NRC in Chattanooga, Tennessee, United States of America on 7-10 April 2014. The three workshop topics addressed were as follows: - Inspection of Outage Activities Including Fire Protection Programmes. - Event Response Inspections. - The Impact of the Fukushima Daiichi NPP Accident on Inspection Programmes. Each of the respondents was given the following instructions in relation to their response: - Only one response per country is required. If more than one person from your country is participating, please co-ordinate the responses accordingly. - Please provide responses on a separate sheet and clearly identify the questionnaire part and topic. In preparation for the workshop, participants were invited to supply the national inspection approaches used in the inspection of events and incidents according to the surveys. The issues discussed during the workshop were generated by the topic leaders based on the responses submitted by participants with their registration forms. This format helps to ensure that the issues considered most important by the workshop participants are covered during the group discussions.

Describes four methods of calculating competition for individual trees and compares their effectiveness in explaining the 3-year growth response of northern hardwoods after various mechanized thinning practices.

A next-generation Impact Calculator for quick assessment of impact consequences is being prepared. The estimates of impact effects have been revised. The ability to manipulate the orbital parameters and to determine the impact point is included.

In this paper we present theoretical results on the dipole response in the proton spin-saturated 90-94Zr isotopes. The electric and magnetic dipole excitations are obtained in Hartree-Fock-Bogolyubov plus quasi-particle random phase approximation (QRPA) calculations performed with the D1M Gogny force. A pnQRPA charge-exchange code is used to study the Gamow-Teller response. The results on the pygmy and giant dipole resonances, as well as those on the magnetic nuclear spin-flip excitation and the Gamow-Teller transitions, are compared with available experimental or theoretical information. In our approach, proton pairing plays a role in the phonon excitations, in particular in the M1 nuclear spin-flip resonance.

It is well known that when analogs of thymidine containing iodine or bromine are incorporated into the DNA of irradiated cells there is a decrease of the D0. Three mechanisms for this effect have been discussed: (a) photoactivation of the Br/I atom and the production of Auger electrons, (b) creation of highly reactive uracil radicals by the interaction of hydrated electrons with BrUdR/IUdR, leading to single-strand breaks (SSB), and (c) interference with repair, or fixation of the damage, by the presence of the Br/I atoms. Experiments to investigate photoactivation of the Br/I atoms will include all three, so knowledge of the relative size of each contribution is useful. The first process is reasonably well understood; here the second process is examined. It is assumed that the incorporated analogs only produce radicals if they are present in a region of DNA containing energy depositions. An SSB produced by this radical can combine with a nearby SSB produced by electron damage to give a double-strand break (DSB), thus increasing the yield of DSB compared to the yield without the analog present. The increased yields at various levels of Br/I incorporation are compared to experiment for different models of radical action.

The purpose of the ''Closure and Sealing Design Calculation'' is to illustrate closure and sealing methods for shafts and ramps, and to identify boreholes that require sealing in order to limit the potential for water infiltration. In addition, this calculation provides a description of the magma bulkhead, which can reduce the consequences of an igneous event intersecting the repository, and includes a listing of the project requirements related to closure and sealing. The scope of this calculation is to: summarize applicable project requirements and codes relating to backfilling nonemplacement openings, removal of uncommitted materials from the subsurface, installation of drip shields, and erecting monuments; compile an inventory of boreholes found in the area of the subsurface repository; describe the magma bulkhead feature and location; and include figures for the proposed shaft and ramp seals. The objective of this calculation is to: categorize the boreholes for sealing by depth and proximity to the subsurface repository; develop drawing figures that show the location and geometry of the magma bulkhead; include the shaft seal figures and a proposed construction sequence; and include the ramp seal figure and a proposed construction sequence. The intent of this closure and sealing calculation is to support the License Application by providing a description of the closure and sealing methods for the Safety Analysis Report. It will also provide input for post-closure activities by describing the location of the magma bulkhead. This calculation is limited to describing the final configuration of the sealing and backfill systems for the underground area. The methods and procedures used to place the backfill and remove uncommitted materials (such as concrete) from the repository, and the detailed design of the magma bulkhead, will be the subject of separate analyses or calculations. Post-closure monitoring will not

The main purpose of the workshop was to provide a forum for the exchange of information on regulatory inspection activities. Participants had the opportunity to meet with their counterparts from other countries and organisations to discuss current and future issues on the selected topics. They developed conclusions regarding these issues and, hopefully, identified methods to help improve their own inspection programmes. The NEA Committee on Nuclear Regulatory Activities (CNRA) believes that an essential factor in ensuring the safety of nuclear installations is the continuing exchange and analysis of technical information and data. To facilitate this exchange the Committee has established working groups and groups of experts in specialised topics. The Working Group on Inspection Practices (WGIP) was formed in 1990 with the mandate '..to concentrate on the conduct of inspections and how the effectiveness of inspections could be evaluated..'. The WGIP facilitates the exchange of information and experience related to regulatory safety inspections between CNRA member countries. These proceedings cover the 12th International Workshop held by WGIP on regulatory inspection activities. This workshop, the twelfth in a series, along with many other activities performed by the Working Group, is directed towards this goal. The consensus from participants at previous workshops was that the value of meeting with people from other inspection organisations was one of the most important achievements. The focus of this workshop was on experience gained from regulatory inspection activities in three areas: - Inspection of Outage Activities Including Fire Protection Programmes. - Event Response Inspections. - The Impact of the Fukushima Daiichi Nuclear Power Plant (NPP) Accident on Inspection Programmes. The main objectives of the WGIP workshops are to enable inspectors to meet with inspectors from other organisations, to exchange information regarding regulatory inspection

The aim of the project was to apply a type of statistical calculation model, 'Land Use Regression' (LUR), to predict the concentrations of the air pollutants benzene and 1,3-butadiene in a number of urban areas with a high proportion of small-scale biofuel burning in and around Umeå.

The recent development of a new computational tool for nuclear reactor calculations, based on coupling the PARCS neutron transport code with the commercial computational fluid dynamics (CFD) code ANSYS CFX, opens new possibilities in fuel element design, contributing to a better understanding and better simulation of heat transfer processes and of specific fluid-dynamic phenomena such as crossflow.

Flexoelectricity, which is the linear response of polarization to a strain gradient, can have a significant effect on the functional properties of dielectric thin films, superlattices and nanostructures. Despite growing experimental interest, there have been relatively few theoretical studies of flexoelectricity, especially in the context of first-principles calculations. In this talk, we present a complete theory of both the electronic (or ``frozen-ion'')[1] and lattice contributions to flexoelectricity, and demonstrate a supercell method for calculating the flexoelectric coefficients using first-principles density-functional methods. Results are presented for cubic materials including CsCl and SrTiO3. In order to obtain all the elements of the flexoelectric tensor, transverse as well as longitudinal, we carry out calculations on supercells extended along different orientations (e.g., [110] as well as [100]), taking special care to carry out conversions between objects calculated under fixed E or fixed D electric boundary conditions in different parts of the procedure. In this way, all the elements of both the electronic and lattice contributions to the flexoelectric tensor are determined.

In science, simulation is a key process for research and validation. Modern computer technology allows faster numerical experiments, which are cheaper than physical models. In the field of neutron simulation, the calculation of eigenvalues is one of the key challenges, and the complexity of these problems is such that considerable computing power may be necessary. The work of this thesis is, first, the evaluation of new computing hardware such as graphics cards or massively multi-core chips, and its application to eigenvalue problems in neutron simulation. Then, in order to exploit the massive parallelism of national supercomputers, we also study the use of asynchronous hybrid methods for solving eigenvalue problems at this very high level of parallelism. We then test the results of this research on several national supercomputers, such as the Titane hybrid machine of the Computing Center, Research and Technology (CCRT), the Curie machine of the Very Large Computing Centre (TGCC), currently being installed, and the Hopper machine at the Lawrence Berkeley National Laboratory (LBNL). We also run our experiments on local workstations to illustrate the value of this research for everyday use with local computing resources.

As part of designing a village electric power system, the present and future electric loads must be defined, including both seasonal and daily usage patterns. However, in many cases, detailed electric load information is not readily available. NREL developed the Alaska Village Electric Load Calculator to help estimate the electricity requirements in a village given basic information about the types of facilities located within the community. The purpose of this report is to explain how the load calculator was developed and to provide instructions on its use so that organizations can then use this model to calculate expected electrical energy usage.
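A load calculator of this kind reduces, at its core, to summing per-facility average demands and scaling to annual energy. The sketch below is a minimal stand-in with made-up coefficients and facility names; NREL's actual model uses its own survey-based inputs and seasonal/daily profiles.

```python
# Hypothetical per-facility average demands (kW) -- illustrative values only.
FACILITY_LOAD_KW = {
    "household": 1.2,
    "school": 15.0,
    "clinic": 8.0,
    "water_plant": 20.0,
    "store": 5.0,
}

def annual_energy_kwh(facilities: dict[str, int],
                      seasonal_factor: float = 1.0) -> float:
    """Rough annual energy: total average demand x 8760 h x a seasonal factor."""
    avg_kw = sum(FACILITY_LOAD_KW[name] * count
                 for name, count in facilities.items())
    return avg_kw * 8760 * seasonal_factor

# Example village: 50 households and one school -> 75 kW average demand.
print(annual_energy_kwh({"household": 50, "school": 1}))
```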

A SENSETWO code for the calculation of cross-section sensitivities with a two-dimensional model has been developed on the basis of first-order perturbation theory. It uses forward neutron and/or gamma-ray fluxes and adjoint fluxes obtained with the two-dimensional discrete ordinates code TWOTRAN-II. Data on cross sections, geometry, nuclide densities, response functions, etc. are transmitted to SENSETWO via the dump magnetic tape produced in the TWOTRAN calculations, so the required input for SENSETWO is very simple. SENSETWO yields as printed output the cross-section sensitivities for each coarse-mesh zone and each energy group, as well as plotted output of the sensitivity profiles specified in the input. A special feature of the code is that it also calculates the reaction rate, using the response function that served as the adjoint source in the TWOTRAN adjoint calculation together with the forward flux from the TWOTRAN forward calculation.
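The quantity such a code reports, the first-order relative sensitivity, is the fractional change in a response per fractional change in a cross section. A minimal finite-perturbation check of that definition (my own illustration, not SENSETWO's adjoint-based evaluation) is:

```python
def sensitivity(response_unperturbed: float,
                response_perturbed: float,
                rel_xs_change: float) -> float:
    """First-order relative sensitivity S = (dR/R) / (d_sigma/sigma)."""
    d_r = response_perturbed - response_unperturbed
    return (d_r / response_unperturbed) / rel_xs_change

# A 10% cross-section change moving the response from 2.0 to 2.1
# corresponds to a sensitivity of 0.5.
print(sensitivity(2.0, 2.1, 0.10))
```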

In this chapter, the basic theory and the procedures used to obtain turbulent fluxes of energy, mass, and momentum with the eddy covariance technique will be detailed. This includes a description of data acquisition, pretreatment of high-frequency data and flux calculation.
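The flux-calculation step reduces to a covariance: with primes denoting deviations from the averaging-period mean, the kinematic flux of a scalar c is mean(w'c'). A minimal sketch of that core step (ignoring the coordinate rotation, despiking, and density corrections a real processing chain applies):

```python
import statistics

def eddy_flux(w: list[float], c: list[float]) -> float:
    """Kinematic eddy flux: covariance of vertical wind w and scalar c,
    i.e. mean(w'c') over the averaging period."""
    wbar = statistics.fmean(w)
    cbar = statistics.fmean(c)
    return statistics.fmean((wi - wbar) * (ci - cbar) for wi, ci in zip(w, c))

# Perfectly correlated fluctuations give a flux of 1.0 in these toy units.
print(eddy_flux([1.0, -1.0, 1.0, -1.0], [2.0, 0.0, 2.0, 0.0]))
```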

The Garrad Hassan approach to the prediction of wind turbine loading and response has been developed over the last decade. The goal of this development has been to produce calculation methods that contain realistic representation of the wind, include sensible aerodynamic and dynamic models of the turbine and can be used to predict fatigue and extreme loads for design purposes. The Garrad Hassan calculation method is based on a suite of four key computer programs: WIND3D for generation of the turbulent wind field; EIGEN for modal analysis of the rotor and support structure; BLADED for time domain calculation of the structural loads; and SIGNAL for post-processing of the BLADED predictions. The interaction of these computer programs is illustrated. A description of the main elements of the calculation method will be presented.

In material testing reactors like the 50 MW JMTR (Japan Materials Testing Reactor) of the Japan Atomic Energy Research Institute, the neutron flux and neutron energy spectra at irradiated samples show complex distributions. It is necessary to assess the neutron flux and energy spectra of an irradiation field by carrying out a nuclear calculation of the core for every operation cycle. To advance core calculation at the JMTR, the application of MCNP to the assessment of core reactivity and of neutron flux and spectra has been investigated. In this study, in order to reduce computation time and variance, calculations using the K code and a fixed source, with and without the weight-window variance reduction technique, were compared. The modeling of the whole JMTR core, the calculation conditions, and the adopted variance reduction technique are explained, and the results of the calculations are shown. No significant difference was observed in the calculated neutron fluxes arising from the different modeling of the fuel region in the K-code and fixed-source calculations. The method of assessing the neutron flux calculation results is also described.

All the essential calculations required for basic electrical installation work. The Electrical Installation Calculations series has proved an invaluable reference for over forty years, for both apprentices and professional electrical installation engineers alike. The book provides a step-by-step guide to the successful application of electrical installation calculations required in day-to-day electrical engineering practice. A step-by-step guide to everyday calculations used on the job. An essential aid to the City & Guilds certificates at Levels 2 and 3. For…

All the essential calculations required for advanced electrical installation work. The Electrical Installation Calculations series has proved an invaluable reference for over forty years, for both apprentices and professional electrical installation engineers alike. The book provides a step-by-step guide to the successful application of electrical installation calculations required in day-to-day electrical engineering practice. A step-by-step guide to everyday calculations used on the job. An essential aid to the City & Guilds certificates at Levels 2 and 3. For apprentices and electrical installation…

The Evapotranspiration Calculator estimates evapotranspiration time series data for hydrological and water quality models for the Hydrologic Simulation Program - Fortran (HSPF) and the Stormwater Management Model (SWMM).

U.S. Environmental Protection Agency — The Electronics Environmental Benefits Calculator (EEBC) was developed to assist organizations in estimating the environmental benefits of greening their purchase,...

PQcalc is an online calculator designed to support students in college-level science classes. Unlike a pocket calculator, PQcalc allows students to set up problems within the calculator just as one would on paper. This includes using proper units and naming quantities strategically in a way that helps find the solution. Results of calculations…

The ground-state energy of a system consisting of four identical bosons or fermions is calculated using the Yakubovsky differential equations which are formulated in configuration space. The solution is restricted to include s waves only. Spline approximation and orthogonal collocation reduce the

The strong interaction between individual Rydberg atoms provides a powerful tool exploited in an ever-growing range of applications in quantum information science, quantum simulation and ultracold chemistry. One hallmark of the Rydberg interaction is that both its strength and angular dependence can be fine-tuned with great flexibility by choosing appropriate Rydberg states and applying external electric and magnetic fields. More and more experiments are probing this interaction at short atomic distances or with such high precision that perturbative calculations as well as restrictions to the leading dipole-dipole interaction term are no longer sufficient. In this tutorial, we review all relevant aspects of the full calculation of Rydberg interaction potentials. We discuss the derivation of the interaction Hamiltonian from the electrostatic multipole expansion, numerical and analytical methods for calculating the required electric multipole moments and the inclusion of electromagnetic fields with arbitrary direction. We focus specifically on symmetry arguments and selection rules, which greatly reduce the size of the Hamiltonian matrix, enabling the direct diagonalization of the Hamiltonian up to higher multipole orders on a desktop computer. Finally, we present example calculations showing the relevance of the full interaction calculation to current experiments. Our software for calculating Rydberg potentials including all features discussed in this tutorial is available as open source.
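The crossover the tutorial describes, from resonant dipole-dipole interaction to van der Waals behavior, can be illustrated with the standard two-level pair model H = [[0, V], [V, delta]], where V = C3/R^3 couples two pair states separated by the Förster defect delta. This toy diagonalization is only illustrative; the full calculation described above retains many pair states and higher multipole orders.

```python
import math

def pair_eigenenergies(v: float, forster_defect: float) -> tuple[float, float]:
    """Eigenenergies of the 2x2 pair Hamiltonian [[0, v], [v, delta]],
    where v = C3/R^3 is the dipole-dipole coupling (toy model)."""
    d = forster_defect
    disc = math.sqrt(d * d + 4.0 * v * v)
    return (d - disc) / 2.0, (d + disc) / 2.0

# Resonant case (delta = 0): symmetric splitting +/- v, i.e. a 1/R^3 interaction.
print(pair_eigenenergies(1.0, 0.0))
# Off-resonant case (v << delta): lower level shifts by ~ -v**2/delta,
# the van der Waals (1/R^6) regime.
print(pair_eigenenergies(0.01, 1.0))
```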

Coil is a very important magnet component. The turn location and the coil size impact both the mechanical and magnetic behavior of the magnet. The Young's modulus plays a significant role in determining the coil location and size; therefore, studying the Young's modulus is essential for predicting both the analytical and practical magnet behavior. To determine the coil Young's modulus, an experiment was conducted to measure the azimuthal sizes of a half-quadrant QSE101 inner coil under different loading. All measurements are made at four different positions along an 8-inch long inner coil. Each measurement is repeated three times to determine the reproducibility of the experiment. To ensure the reliability of this experiment, the same measurement is performed twice with a ''dummy coil'', which is made of G10 and has the same dimensions and a similar azimuthal Young's modulus as the inner coil. The difference between the G10 azimuthal Young's modulus calculated from the experiments and its known value from the manufacturer will be compared. Much effort has been expended in analyzing the experimental data to obtain a more reliable Young's modulus. Analysis methods include the error analysis method and the least squares method.
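In its simplest reading, extracting a modulus from such load-deformation data is an ordinary least-squares slope of stress against strain. The sketch below shows that least-squares step in isolation (my own reduction, not the report's full error analysis):

```python
def youngs_modulus(stress: list[float], strain: list[float]) -> float:
    """Ordinary least-squares slope of stress vs. strain (with intercept),
    giving an effective Young's modulus in the units of `stress`."""
    n = len(strain)
    sx = sum(strain)
    sy = sum(stress)
    sxx = sum(x * x for x in strain)
    sxy = sum(x * y for x, y in zip(strain, stress))
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

# Perfectly linear toy data with slope 200 GPa (stress in MPa, strain unitless):
print(youngs_modulus([0.0, 200.0, 400.0, 600.0], [0.0, 0.001, 0.002, 0.003]))
```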

X-ray absorption spectra of carbon, silicon, germanium, and sulfur compounds have been investigated by means of damped four-component density functional response theory. It is demonstrated that a reliable description of relativistic effects is obtained at both K- and L-edges. Notably, an excellent agreement with experimental results is obtained for L2,3-spectra, with spin-orbit effects well accounted for, also in cases when the experimental intensity ratio deviates from the statistical one of 2 : 1. The theoretical results are consistent with calculations using standard response theory as well as recently reported real-time propagation methods in time-dependent density functional theory, and the virtues of the different approaches are discussed. As compared to silane and silicon tetrachloride, an anomalous error in the absolute energy is reported for the L2,3-spectrum of silicon tetrafluoride, amounting to an additional spectral shift of ∼1 eV. This anomaly is also observed for other exchange-correlation functionals, but it is seen neither at other silicon edges nor at the carbon K-edge of fluorine derivatives of ethene. Considering the series of molecules SiH4-XFX with X = 1, 2, 3, 4, a gradual divergence from interpolated experimental ionization potentials is observed at the level of Kohn-Sham density functional theory (DFT), and to a smaller extent with the use of Hartree-Fock. This anomalous error is thus attributed partly to difficulties in correctly emulating the electronic structure effects imposed by the very electronegative fluorines, and partly to inconsistencies in the spurious electron self-repulsion in DFT. Substitution with one, or possibly two, fluorine atoms is estimated to yield small enough errors to allow for reliable interpretations and predictions of L2,3-spectra of more complex and extended silicon-based systems.

The prognosis of advanced (stage IV) cancer of the digestive organs is very poor. We have previously reported a case of advanced breast cancer with bone metastasis that was successfully treated with combined treatments including autologous formalin-fixed tumor vaccine (AFTV). Herein, we report the success of this approach in advanced stage IV (heavily metastasized) cases of gall bladder cancer and colon cancer. Case 1: A 61-year-old woman with stage IV gall bladder cancer (liver metastasis and lymph node metastasis) underwent surgery in May 2011, including partial resection of the liver. She was treated with AFTV as the first-line adjuvant therapy, followed by conventional chemotherapy. This patient is still alive without any recurrence, as confirmed with computed tomography, for more than 5 years. Case 2: A 64-year-old man with stage IV colon cancer (multiple para-aortic lymph node metastases and direct abdominal wall invasion) underwent non-curative surgery in May 2006. Following conventional chemotherapy, two courses of AFTV and radiation therapy were administered sequentially. This patient has had no recurrence for more than 5 years. We report the success of combination therapy including AFTV in cases of liver-metastasized gall bladder cancer and abdominal wall-metastasized colon cancer. Both patients experienced long-lasting, complete remission. Therefore, combination therapies including AFTV should be considered in patients with advanced cancer of the digestive organs.

A functional description of the programme package Cord-2 for PWR core design calculations is presented. The programme package is briefly described, and the use of the package and calculational procedures for typical core design problems are treated. A comparison of the main results with experimental values is presented as part of the verification process. (author)

Williamson’s characterisation of calculativeness as inimical to trust contradicts most sociological trust research. However, a similar argument is found within trust phenomenology. This paper re-investigates Williamson’s argument from the perspective of Løgstrup’s phenomenological theory of trust....... Contrary to Williamson, however, Løgstrup’s contention is that trust, not calculativeness, is the default attitude and only when suspicion is awoken does trust falter. The paper argues that while Williamson’s distinction between calculativeness and trust is supported by phenomenology, the analysis needs...... into consideration that people often engage in interaction on the basis of familiarity rather than calculation. Finally, the institutionally multi-layered character of social interaction means that trust and calculativeness cannot a priori be separated into non-market and market relations. Rather, it is reasonable...

This design calculation revises and updates the previous criticality evaluation for the canister handling, transfer and staging operations to be performed in the Canister Handling Facility (CHF) documented in BSC [Bechtel SAIC Company] 2004 [DIRS 167614]. The purpose of the calculation is to demonstrate that the handling operations of canisters performed in the CHF meet the nuclear criticality safety design criteria specified in the ''Project Design Criteria (PDC) Document'' (BSC 2004 [DIRS 171599], Section 4.9.2.2), the nuclear facility safety requirement in ''Project Requirements Document'' (Canori and Leitner 2003 [DIRS 166275], p. 4-206), the functional/operational nuclear safety requirement in the ''Project Functional and Operational Requirements'' document (Curry 2004 [DIRS 170557], p. 75), and the functional nuclear criticality safety requirements described in the ''Canister Handling Facility Description Document'' (BSC 2004 [DIRS 168992], Sections 3.1.1.3.4.13 and 3.2.3). Specific scope of work contained in this activity consists of updating the Category 1 and 2 event sequence evaluations as identified in the ''Categorization of Event Sequences for License Application'' (BSC 2004 [DIRS 167268], Section 7). The CHF is limited in throughput capacity to handling sealed U.S. Department of Energy (DOE) spent nuclear fuel (SNF) and high-level radioactive waste (HLW) canisters, defense high-level radioactive waste (DHLW), naval canisters, multicanister overpacks (MCOs), vertical dual-purpose canisters (DPCs), and multipurpose canisters (MPCs) (if and when they become available) (BSC 2004 [DIRS 168992], p. 1-1). It should be noted that the design and safety analyses of the naval canisters are the responsibility of the U.S. Department of the Navy (Naval Nuclear Propulsion Program) and will not be included in this document. In addition, this calculation is valid for the current design of the CHF and may not reflect the ongoing design evolution of the facility

Point isotopic depletion methods are used to develop spatially dependent fission product and heavy metal inventories for the TMI-2 core. Burnup data from 1239 fuel nodes (177 elements, 7 axial nodes per element) are utilized to preserve the core axial and radial power distributions. A full-core inventory is calculated utilizing 12 fuel groups (four burnup ranges for each of three initial enrichments). Calculated isotopic ratios are also presented as a function of burnup for selected nuclides. Specific applications of the isotopic ratio data include correlation of fuel debris samples with core location and estimates of fission product release fractions. 24 figs., 25 tabs.

A program for molecular calculations with B functions is reported and its performance is analyzed. All the one- and two-center integrals and the three-center nuclear attraction integrals are computed by direct procedures, using previously developed algorithms. The three- and four-center electron repulsion integrals are computed by means of Gaussian expansions of the B functions. A new procedure for obtaining these expansions is also reported. Some results on full molecular calculations are included to show the capabilities of the program and the quality of the B functions to represent the electronic functions in molecules

Nutrition is believed to be a primary contributor in regulating gene expression by affecting epigenetic pathways such as DNA methylation and histone modification. Resveratrol and pterostilbene are phytoalexins produced by plants as part of their defense system. These two bioactive compounds, when used alone, have been shown to alter genetic and epigenetic profiles of tumor cells, but the concentrations employed in various studies often far exceed physiologically achievable doses. Triple-negative breast cancer (TNBC) is an often fatal condition that may be prevented or treated through novel dietary-based approaches. HCC1806 and MDA-MB-157 breast cancer cells were used as TNBC cell lines in this study. MCF10A cells were used as control breast epithelial cells to determine the safety of this dietary regimen. CompuSyn software was used to determine the combination index (CI) for drug combinations. Combinatorial resveratrol and pterostilbene administered at close to physiologically relevant doses resulted in synergistic (CI <1) growth inhibition of TNBCs. SIRT1, a type III histone deacetylase (HDAC), was down-regulated in response to this combinatorial treatment. We further explored the effects of this novel combinatorial approach on DNA damage response by monitoring γ-H2AX and telomerase expression. With the combination of these two compounds there was a significant decrease in these two proteins, which might have further resulted in the significant growth inhibition, apoptosis and cell cycle arrest observed in HCC1806 and MDA-MB-157 breast cancer cells, while there was no significant effect on cellular viability, colony forming potential, morphology or apoptosis in control MCF10A breast epithelial cells. SIRT1 knockdown reproduced the effects of combinatorial resveratrol and pterostilbene-induced SIRT1 down-regulation through inhibition of both telomerase activity and γ-H2AX expression in HCC1806 breast cancer cells. As a part of the repair mechanisms and role of SIRT1 in recruiting DNMTs

Medication errors are common and may jeopardize patient safety. As paediatric dosages are calculated based on the child's age and weight, the risk of error in dosage calculations increases. In paediatric patients, an overdose prescribed without regard to the child's weight, age and clinical picture may lead to excessive toxicity and mortality, while low doses may delay treatment. This study was carried out to evaluate the knowledge of nursing students about paediatric dosage calculations. This retrospective study covers a population consisting of all 148 third-year students on the bachelor's degree programme in May 2015. Drug dose calculation questions in exam papers, comprising 3 open-ended questions on dosage calculation problems addressing 5 variables, were distributed to the students and their responses were evaluated by the researchers. In the evaluation of the data, figures and percentage distributions were calculated and Spearman correlation analysis was applied. The exam question on dosage calculation based on the child's age, which is the most common method in paediatrics and which ensures correct dosages and drug dilution, was answered correctly by 87.1% of the students, while 9.5% answered it wrongly and 3.4% left it blank. 69.6% of the students were successful in finding the safe dose range, and 79.1% in finding the right ratio/proportion. 65.5% of the answers with regard to ml/dzy calculation were correct. Moreover, the students' arithmetic skills were assessed, and 68.2% of the students were determined to have found the correct answer. When the relation among the questions on medication was examined, a significant correlation was determined between them. It is seen that in dosage calculations the students failed mostly in calculating ml/dzy (decimal values). This result means that, as dosage calculations are based on decimal values, calculations may be ten times erroneous when the decimal point is placed wrongly. Moreover, it
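The weight-based arithmetic being tested above is simple enough to sketch in a few lines. The drug values and safe range below are hypothetical, chosen only to illustrate how a misplaced decimal point produces a tenfold dosing error:

```python
def calculate_dose(weight_kg: float, dose_mg_per_kg: float) -> float:
    """Single dose in mg from the child's weight (weight-based dosing)."""
    return weight_kg * dose_mg_per_kg

def in_safe_range(dose_mg: float, weight_kg: float,
                  min_mg_per_kg: float, max_mg_per_kg: float) -> bool:
    """True if the prescribed dose falls inside the safe mg/kg range."""
    return min_mg_per_kg * weight_kg <= dose_mg <= max_mg_per_kg * weight_kg

weight = 12.0                           # kg (hypothetical patient)
dose = calculate_dose(weight, 15)       # hypothetical 15 mg/kg -> 180 mg
print(dose)                             # 180.0
print(in_safe_range(dose, weight, 10, 20))   # True
# A decimal-point slip (1.5 instead of 15 mg/kg) gives a tenfold underdose:
print(in_safe_range(calculate_dose(weight, 1.5), weight, 10, 20))  # False
```

The safe-range check is the programmatic counterpart of the "safe dose range" question the students were asked to answer by hand.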

Molecular descriptors are widely employed to represent molecular characteristics in cheminformatics. Various molecular-descriptor-calculation software programs have been developed. However, users of those programs must contend with several issues, including software bugs, insufficient update frequencies, and software licensing constraints. To address these issues, we propose Mordred, a new descriptor-calculation software application that can calculate more than 1800 two- and three-dimensional descriptors. It is freely available via GitHub. Mordred can be easily installed and used in the command line interface, as a web application, or as a high-flexibility Python package on all major platforms (Windows, Linux, and macOS). Performance benchmark results show that Mordred is at least twice as fast as the well-known PaDEL-Descriptor, and it can calculate descriptors for large molecules, which cannot be accomplished by other software. Owing to its performance, convenience, number of descriptors, and permissive licensing, Mordred is a promising choice of molecular descriptor calculation software that can be utilized for cheminformatics studies, such as those on quantitative structure-property relationships.
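For a sense of what such software automates, the sketch below computes a single trivial descriptor (molecular weight from a bracket-free formula string) in plain Python. Mordred and PaDEL-Descriptor compute thousands of far richer descriptors from full molecular graphs; the rounded atomic masses here are illustrative only:

```python
import re

# Rounded standard atomic masses for a handful of elements (illustrative only).
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999, "S": 32.06}

def molecular_weight(formula: str) -> float:
    """Sum atomic masses over a simple formula such as 'C6H6' (no brackets)."""
    total = 0.0
    for element, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        total += ATOMIC_MASS[element] * (int(count) if count else 1)
    return total

print(round(molecular_weight("C6H6"), 3))   # benzene: 78.114
print(round(molecular_weight("H2O"), 3))    # water: 18.015
```

A real descriptor package starts from a parsed molecular graph rather than a formula string, which is what allows topological and 3D descriptors beyond simple sums like this one.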

U.S. Environmental Protection Agency — The Unit Cost Compendium (UCC) Calculations raw data set was designed to provide for greater accuracy and consistency in the use of unit costs across the USEPA...

Jason Berner presents EPA’s National Stormwater Calculator developed to help support local, state and national stormwater management objectives and regulatory efforts to reduce runoff using green infrastructure practices as low impact development controls.

Provides an original, detailed, and practical description of current interruption transients, their origins, the circuits involved, and how they can be calculated. Current Interruption Transients Calculation is a comprehensive resource for the understanding, calculation, and analysis of the transient recovery voltages (TRVs) and related re-ignition or re-striking transients associated with fault current interruption and the switching of inductive and capacitive load currents in circuits.

We present a fast calculation of the electromagnetic field near the focus of an objective with a high numerical aperture (NA). Instead of direct integration, the vectorial Debye diffraction integral is evaluated with the fast Fourier transform for calculating the electromagnetic field in the entire focal region. We generalize this concept with the chirp z transform for obtaining a flexible sampling grid and an additional gain in computation speed. Under ...
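The flexible sampling grid mentioned above can be illustrated with a naive chirp z-transform. The direct sum below is a sketch only (production implementations factorize it with FFTs via Bluestein's algorithm, which is where the speed gain comes from); with unit a and w = exp(−2πi/N) it reduces to the ordinary DFT, while a finer w spacing "zooms" into a narrow spectral band:

```python
import cmath

def czt(x, m, w, a):
    """Naive chirp z-transform: X[k] = sum_n x[n] * a**(-n) * w**(n*k) for
    k = 0..m-1 (the convention used by common CZT implementations).  The
    direct O(N*m) sum shown here is for clarity; fast implementations
    factorize it with FFTs via Bluestein's algorithm."""
    return [sum(xn * a ** (-n) * w ** (n * k) for n, xn in enumerate(x))
            for k in range(m)]

def dft(x):
    """Reference DFT for comparison."""
    n_len = len(x)
    return [sum(xn * cmath.exp(-2j * cmath.pi * n * k / n_len)
                for n, xn in enumerate(x)) for k in range(n_len)]

x = [1.0, 2.0, 3.0, 4.0]
# With a = 1 and w = exp(-2*pi*i/N), the CZT reduces to the ordinary DFT:
full = czt(x, len(x), cmath.exp(-2j * cmath.pi / len(x)), 1.0)
assert all(abs(u - v) < 1e-9 for u, v in zip(full, dft(x)))
# A finer w spacing "zooms" into a narrow band -- the flexible sampling grid:
zoom = czt(x, 8, cmath.exp(-2j * cmath.pi / 32), 1.0)
print(len(zoom))   # 8 samples spanning a quarter of the unit circle
```

The freedom to choose m, w, and a independently of the input length is what lets a focal-field calculation sample the region of interest at whatever density is needed, without padding to a huge FFT.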

On-Site was developed to provide modelers and model reviewers with prepackaged tools ("calculators") for performing site assessment calculations. The philosophy behind OnSite is that the convenience of the prepackaged calculators helps provide consistency for simple calculations,...

The Physical Data Group of the Computational Physics Division of the Lawrence Livermore National Laboratory has as its principal responsibility the development and maintenance of those data that are related to nuclear reaction processes and are needed for Laboratory programs. Among these are the Magnetic Fusion Energy and the Inertial Confinement Fusion programs. To this end, we have developed and maintain a collection of data files or libraries. These include: files of experimental data of neutron induced reactions; an annotated bibliography of literature related to charged particle induced reactions with light nuclei; and four main libraries of evaluated data. We also maintain files of calculational constants developed from the evaluated libraries for use by Laboratory computer codes. The data used for fusion calculations are usually these calculational constants, but since they are derived by prescribed manipulation of evaluated data this discussion will describe the evaluated libraries

The paper presents formulas used to calculate critical temperatures of structural steels. Equations for calculating the temperatures Ac1, Ac3, Ms and Bs were derived from the chemical composition of the steel using the multiple regression method. Particular attention was paid to collecting the experimental data required to calculate the regression coefficients, including preparation of the data for calculation. The empirical data set included more than 500 chemical compositions of structural steels and was compiled from information available in the literature on the subject.
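Regression equations of this type are linear in the alloying-element contents. As an illustration of the form (not the paper's own coefficients), the widely quoted Andrews (1965) relation for the martensite-start temperature Ms can be written as:

```python
def ms_andrews(c: float, mn: float = 0.0, ni: float = 0.0,
               cr: float = 0.0, mo: float = 0.0) -> float:
    """Martensite-start temperature in deg C from composition in wt%
    (Andrews' linear relation, coefficients as commonly cited)."""
    return 539 - 423 * c - 30.4 * mn - 17.7 * ni - 12.1 * cr - 7.5 * mo

# A plain 0.4 %C steel with 0.7 %Mn:
print(round(ms_andrews(0.4, mn=0.7), 1))   # about 348.5 deg C
```

Fitting such coefficients to the >500 collected compositions is a standard multiple-regression problem, which is exactly the procedure the paper describes.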

To compensate for thermal expansion, the LHC ring has to accommodate about 2500 bellows which, together with beam position monitors, are the main contributors to the LHC broad-band impedance budget. In order to reduce this impedance to an acceptable value the bellows have to be shielded. In this paper we compare different designs proposed for the bellows and calculate their transverse and longitudinal wakefields and impedances. Owing to the 3D geometry of the bellows, the code MAFIA was used for the wakefield calculations; when possible the MAFIA results were compared to those obtained with ABCI. The results presented in this paper indicate that the latest bellows design, in which shielding is provided by sprung fingers which can slide along the beam screen, has impedances smaller than those previously estimated according to a rather conservative scaling of SSC calculations and LEP measurements. Several failure modes, such as missing fingers and imperfect RF contact, have also been studied.

programs can give different results. This can be due to restrictions in the program itself and/or be due to the skills of the persons setting up the models. This is crucial as daylight calculations are used to document that the demands and recommendations to daylight levels outlined by building authorities....... The aim of the project was to obtain a better understanding of what daylight calculations show and also to gain knowledge of how the different daylight simulation programs perform compared with each other. Furthermore the aim was to provide knowledge of how to build up the 3D models that were...

Accurate burnup prediction is a key item for design and operation of a power reactor. It should supply information on isotopic changes at each point in the reactor core and the consequences of these changes on the reactivity, power distribution, kinetic characters, control rod patterns, fuel cycles and operating strategy. A basic stage in the burnup prediction is the lattice cell burnup calculation. This series of lectures attempts to give a review of the general principles and calculational methods developed and applied in this area of burnup physics

and energy production patterns are simulated using data from countries with similar environmental conditions but do not use geothermal or hydropower to the same extent as Iceland. Because of the rapid shift towards renewable energy and exclusion of external energy provision, the country is considered...

ALICE is one of the four main particle detectors located around the LHC accelerator at CERN. It is particularly designed to study the physics of the quark-gluon plasma by means of nucleus-nucleus collisions at center-of-mass energies up to 5.5 TeV per nucleon pair. A Time-Projection Chamber (TPC) was chosen as its central sub-detector due to its low mass properties and its capability to provide robust and accurate particle identification even within ultra-high multiplicity environments (up to 8000 tracks per unit of eta). To achieve the required physics performance, the space point resolution of the TPC must be on the order of 0.2 mm. Due to its gigantic size of 5 m in diameter and 5 m in length, corrections for static as well as dynamic effects are indispensable in order to accomplish the design goal. The research presented covers all major issues relevant for the final calibration and therefore the enhancement of the TPC performance in terms of resolution. The main focus was to distinguish between t...

We describe a numerical method for calculating the magnetohydrodynamic (MHD) spectrum of one-dimensional equilibria with flow. Due to a general formulation, the spectrum for two different equilibrium geometries, viz. a plane slab and a cylinder, can be investigated. The linearised equations are

The differences between human and computing languages are recalled. It is argued that they are to some extent structured in antagonistic ways. Languages in structural calculation, in the past, present, and future, are considered. The contribution of artificial intelligence is stressed.

A Monte Carlo algorithm to efficiently calculate static alpha eigenvalues, N = n·e^(αt), for supercritical systems has been developed and tested. A direct Monte Carlo approach to calculating a static α is to simply follow the buildup in time of neutrons in a supercritical system and evaluate the logarithmic derivative of the neutron population with respect to time. This procedure is expensive, and the solution is very noisy and almost useless for a system near critical. The modified approach is to convert the time-dependent problem to a static α-eigenvalue problem and regress α on solutions of a k-eigenvalue problem. In practice, this procedure is much more efficient than the direct calculation, and produces much more accurate results. Because the Monte Carlo codes are intrinsically three-dimensional and use elaborate continuous-energy cross sections, this technique is now used as a standard for evaluating other calculational techniques in odd geometries or with group cross sections.
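The "direct" approach described above can be sketched with a toy point-model branching process. The constants are hypothetical, and the exact growth rate α = (pν − 1)/τ for this simplified model lets the noisy log-derivative estimate be checked:

```python
import math
import random

random.seed(2)
TAU, P_FIS, NU = 1.0, 0.6, 2           # hypothetical constants (k = p*nu = 1.2)
alpha_exact = (P_FIS * NU - 1) / TAU   # exact growth constant of the toy model

def population_at(t_end: float, n0: int = 200) -> int:
    """Number of neutrons alive at t_end in one realisation of the model:
    each neutron lives an exponential lifetime with mean TAU, then either
    causes fission (probability P_FIS, producing NU neutrons) or is absorbed."""
    stack = [0.0] * n0                 # birth times of neutrons to process
    alive = 0
    while stack:
        t = stack.pop() + random.expovariate(1 / TAU)   # next event time
        if t > t_end:
            alive += 1                 # survives past the census time
        elif random.random() < P_FIS:
            stack.extend([t] * NU)     # fission: NU offspring born at t
        # else: absorbed
    return alive

# "Direct" estimate: logarithmic derivative of the population between two times.
t1, t2 = 2.0, 6.0
alpha_est = (math.log(population_at(t2)) - math.log(population_at(t1))) / (t2 - t1)
print(round(alpha_exact, 3))           # 0.2
print(round(alpha_est, 2))             # noisy estimate near alpha_exact
```

The statistical noise in alpha_est, even for this strongly supercritical toy system, is the behavior the abstract criticizes; near critical the signal vanishes into the noise, which motivates the k-eigenvalue regression approach.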

Humid air is an unavoidable feature of the mining atmosphere and plays a significant role in defining climate conditions as well as the permissible circumstances for normal mining work. Saturated humid air prevents heat loss from the human body by means of evaporation. Consequently, it is of primary interest in mining practice to establish the relative air humidity, by either direct or indirect methods. The percentage of water in the surrounding air may be determined by various procedures, including tables, diagrams, or particular calculations, where each technique has its specific advantages and disadvantages. The classical calculation is done according to Sprung's formula, in which case the partial steam pressure must also be taken from the steam table. A new method without the use of diagrams or tables, based on the functional relation of pressure and temperature on the saturation line, is presented here for the first time (the paper is published in Croatian).
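A minimal indirect calculation of this kind can be sketched as follows, using the Magnus approximation in place of the steam-table lookup together with the Sprung psychrometer correction. The constants are the commonly quoted ones; this is an illustration, not the paper's own closed-form method:

```python
import math

def saturation_pressure_hpa(t_c: float) -> float:
    """Magnus approximation for saturation vapour pressure over water, hPa
    (commonly quoted constants), replacing a steam-table lookup."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def relative_humidity(t_dry: float, t_wet: float, p_hpa: float = 1013.25) -> float:
    """Relative humidity in % from dry- and wet-bulb temperatures using the
    Sprung psychrometer correction (coefficient 6.62e-4 per deg C)."""
    e = saturation_pressure_hpa(t_wet) - 6.62e-4 * p_hpa * (t_dry - t_wet)
    return 100.0 * e / saturation_pressure_hpa(t_dry)

print(relative_humidity(25.0, 25.0))          # no wet-bulb depression: 100.0
print(round(relative_humidity(25.0, 18.0)))   # a 7 deg C depression: ~50 %
```

Expressing the saturation line as a closed-form function of temperature is precisely what removes the need for tables or diagrams, which is the point of the method the abstract announces.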

The LANSCE target operates at a beam current of 30 microamps. We present here the results of the finite-element calculations for the temperatures and stresses in the present target operated at 100 microamps. The calculations were run using the ABAQUS finite-element code. All finite-element codes require as input both the boundary conditions for the material being heated, and such material properties as the thermal conductivity, specific heat, and the elastic modulus. For the LANSCE target, the boundary conditions involve knowing the power deposition from the beam, and the heat transfer coefficients between the tungsten-alloy cylinder and the cooling water. We believe that these numbers are quite well established. 5 refs., 6 figs

Interpolation error is a major source of uncertainty in the calibration of standard platinum resistance thermometers (SPRTs) in the subranges of the International Temperature Scale of 1990 (ITS-90). This interpolation error arises because the interpolation equations prescribed by the ITS-90 cannot perfectly accommodate all of the SPRTs' natural variations in resistance-temperature behavior, which generates different forms of non-uniqueness. This paper investigates the type 3 non-uniqueness for fourteen SPRTs from five different manufacturers calibrated over the water-zinc subrange and demonstrates the use of the method of divided differences for calculating the interpolation error. The calculated maximum standard deviation of 0.25 mK (near 100°C) is similar to that observed in previous studies.
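The method of divided differences itself is compact enough to sketch. With made-up data (exactly quadratic, so the highest-order difference vanishes), the Newton coefficients both interpolate the data and expose the order of the residual error term:

```python
def divided_differences(xs, ys):
    """Return Newton coefficients f[x0], f[x0,x1], ... for the data,
    computed in place by the standard divided-difference recursion."""
    coeffs = list(ys)
    for order in range(1, len(xs)):
        for i in range(len(xs) - 1, order - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - order])
    return coeffs

def newton_eval(xs, coeffs, x):
    """Evaluate the Newton-form interpolating polynomial at x (Horner-style)."""
    result = coeffs[-1]
    for i in range(len(coeffs) - 2, -1, -1):
        result = result * (x - xs[i]) + coeffs[i]
    return result

xs = [0.0, 1.0, 2.0, 3.0]
ys = [x ** 2 for x in xs]           # exactly quadratic data
c = divided_differences(xs, ys)
print(c)                            # [0.0, 1.0, 1.0, 0.0]: the cubic term vanishes
print(newton_eval(xs, c, 2.5))      # 6.25, reproduced exactly
```

In the paper's setting the data are resistance-ratio measurements rather than a polynomial, and a non-vanishing higher-order divided difference is exactly the signature of interpolation error the ITS-90 equations cannot absorb.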

CAMEA is a novel instrument concept, and its performance has therefore not previously been explored. Furthermore, it is a complex instrument using many analyser arrays over a wide angular range. The performance of the instrument has been studied using three approaches: McStas simulations, analytical calculations, and prototyping. Due to the complexity of the instrument, any one of these methods alone could have faults that mislead us during instrument development. We use Monte Carlo and analytical model...

A Chemical Nuclear Reconnaissance System (CNRS) has been developed by the British Ministry of Defence to make chemical and radiation measurements on contaminated terrain using appropriate sensors and recording equipment installed in a Land Rover. A research programme is under way to develop and validate a predictive capability to calculate the build-up of contamination on the vehicle, radiation detector performance, and dose rates to the occupants of the vehicle. This paper describes the geometric model of the vehicle and the methodology used for calculations of detector response. Calculated dose rates obtained using the MCBEND Monte Carlo radiation transport computer code in adjoint mode are presented. These address the transient response of the detectors as the vehicle passes through a contaminated area. Calculated dose rates were found to agree with the measured data to within the experimental uncertainties, thus giving confidence in the shielding model of the vehicle and its application to other scenarios. (authors)

... field testing, you may calculate the ratio of total mass to total work, where these individual values... negative work rate values in the integration to calculate total work from that work path. Some work paths may result in a negative total work. Include negative total work values from any work path in the...

This report documents the technical evaluation and review of NRC Safety Topic VI-10.A, associated with the electrical, instrumentation, and control portions of the testing of reactor trip systems and engineered safety features, including response time, for the Dresden II nuclear power plant, using current licensing criteria

An expression to calculate point values (the expected detector response of a particle emerging from a collision or the source) is derived and implemented in the MORSE-SGC/S Monte Carlo code. It is outlined how these point values can be smoothed as a function of energy and as a function of the optical thickness between the detector and the source. The smoothed point values are subsequently used to calculate the biasing parameters of the Monte Carlo runs to follow. The method is illustrated by an example, which shows that the obtained biasing parameters lead to a more efficient Monte Carlo calculation. (orig.)

The Nuclear Regulatory Commission has stated a hierarchy of safety goals with the qualitative safety goals as Level I of the hierarchy, backed up by the quantitative health objectives as Level II and the large release guideline as Level III. The large release guideline has been stated in qualitative terms as a magnitude of release of the core inventory whose frequency should not exceed 10 -6 per reactor year. However, the Commission did not provide a quantitative specification of a large release. This report describes various specifications of a large release and focuses, in particular, on an examination of releases which have a potential to lead to one prompt fatality in the mean. The basic information required to set up the calculations was derived from the simplified source terms which were obtained from approximations of the NUREG-1150 source terms. Since the calculation of consequences is affected by a large number of assumptions, a generic site with a (conservatively determined) population density and meteorology was specified. At this site, various emergency responses (including no response) were assumed based on information derived from earlier studies. For each of the emergency response assumptions, a set of calculations were performed with the simplified source terms; these included adjustments to the source terms, such as the timing of the release, the core inventory, and the release fractions of different radionuclides, to arrive at a result of one mean prompt fatality in each case. Each of the source terms, so defined, has the potential to be a candidate for a large release. The calculations show that there are many possible candidate source terms for a large release depending on the characteristics which are felt to be important

The theory of the point kernel applied to a source uniformly distributed in a cylindrical geometry was utilized to estimate the Cs-137 content of each package of radioactive waste collected. The Taylor equation was employed to calculate the build-up factor, and the Green function G was adjusted by means of a least squares method. The theory also takes into account factors such as additional shielding, heterogeneity and humidity of the medium, as well as the associated uncertainties of the parameters involved. (author)
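The point-kernel-with-build-up structure described above can be sketched as follows; the Taylor coefficients here are placeholders rather than fitted Cs-137 values:

```python
import math

# Point-kernel sketch: uncollided flux from an isotropic point source, times a
# Taylor-form build-up factor B(mu*r) = A*exp(-a1*mu*r) + (1-A)*exp(-a2*mu*r).
# The coefficients below are placeholders, not fitted Cs-137 values.
A, A1, A2 = 8.0, -0.06, 0.04

def buildup_taylor(mu_r: float) -> float:
    """Taylor two-exponential build-up factor in the optical depth mu*r."""
    return A * math.exp(-A1 * mu_r) + (1 - A) * math.exp(-A2 * mu_r)

def point_kernel_flux(source_s: float, mu: float, r_cm: float) -> float:
    """Photon flux at distance r from a point source behind an attenuating
    medium: S * B(mu*r) * exp(-mu*r) / (4*pi*r**2)."""
    mu_r = mu * r_cm
    return source_s * buildup_taylor(mu_r) * math.exp(-mu_r) / (4 * math.pi * r_cm ** 2)

# With no attenuator (mu = 0) the build-up factor is 1 and the familiar
# inverse-square law is recovered:
print(buildup_taylor(0.0))                 # 1.0
print(point_kernel_flux(1e6, 0.0, 100.0))  # 1e6 / (4*pi*1e4)
```

The cylindrical-source result in the paper follows from integrating this point kernel over the source volume; the least-squares adjustment of G then absorbs the remaining geometry and shielding factors.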

Calculations in Furnace Technology presents the theoretical and practical aspects of furnace technology. This book provides information pertinent to the development, application, and efficiency of furnace technology. Organized into eight chapters, this book begins with an overview of the exothermic reactions that occur when carbon, hydrogen, and sulfur are burned to release the energy available in the fuel. This text then evaluates the efficiencies to measure the quantity of fuel used, of flue gases leaving the plant, of air entering, and the heat lost to the surroundings. Other chapters consi

The Electronics Environmental Benefits Calculator (EEBC) was developed to assist organizations in estimating the environmental benefits of greening their purchase, use and disposal of electronics. The EEBC estimates the environmental and economic benefits of: Purchasing Electronic Product Environmental Assessment Tool (EPEAT)-registered products; Enabling power management features on computers and monitors above default percentages; Extending the life of equipment beyond baseline values; Reusing computers, monitors and cell phones; and Recycling computers, monitors, cell phones and loads of mixed electronic products. The EEBC may be downloaded as a Microsoft Excel spreadsheet. See https://www.federalelectronicschallenge.net/resources/bencalc.htm for more details.

Several Monte Carlo techniques are compared in the transport of neutrons of different source energies through two different deep-penetration problems, each with two parts. The first problem involves transmission through a 200-cm concrete slab. The second problem is a 90° bent pipe jacketed by concrete. In one case the pipe is void, and in the other it is filled with liquid sodium. Calculations are made with two different Los Alamos Monte Carlo codes: the continuous-energy code MCNP and the multigroup code MCMG

The correlated proton particle-neutron hole spectrum is calculated for N>Z nuclei using a Skyrme type interaction and the response function method. The basis of the calculation is a complete one particle-one hole space with the continuum included. As a result the distribution of the isovector monopole strength in the analog nucleus is obtained. This distribution has a narrow peak which corresponds to the isobaric analog resonance and at higher energies a broad peak which is the isovector monopole resonance. The coupling between these two states is inherent in the calculation

Digital Calculations of Engine Cycles is a collection of seven papers which were presented before technical meetings of the Society of Automotive Engineers during 1962 and 1963. The papers cover the spectrum of the subject of engine cycle events, ranging from an examination of composition and properties of the working fluid to simulation of the pressure-time events in the combustion chamber. The volume has been organized to present the material in a logical sequence. The first two chapters are concerned with the equilibrium states of the working fluid. These include the concentrations of var

Electronics Reliability-Calculation and Design provides an introduction to the fundamental concepts of reliability. The increasing complexity of electronic equipment has made problems in designing and manufacturing a reliable product more and more difficult. Specific techniques have been developed that enable designers to integrate reliability into their products, and reliability has become a science in its own right. The book begins with a discussion of basic mathematical and statistical concepts, including arithmetic mean, frequency distribution, median and mode, scatter or dispersion of mea

This chapter deals with the radiotherapy and cytotoxic chemotherapy of the malignant lymphomas. Included within this group are Hodgkin's disease, non-Hodgkin's lymphoma, mycosis fungoides, and chronic lymphatic leukaemia. A further section deals with the myeloproliferative disorders, including granulocytic leukaemia, polycythaemia vera, and primary thrombocythaemia. Excluded are myeloma, reticulum cell sarcoma of bone, and acute leukaemia. With regard to Hodgkin's disease, the past 25 years have seen general recognition of the curative potential of radiotherapy, at least in the local stages, and, more recently, awareness of the ability to achieve long-term survival after combination chemotherapy in generalised or in recurrent disease. At the same time the importance of staging has become appreciated, and the introduction of procedures such as lymphography, staging laparotomy, and computed tomography (CT) has enormously increased its reliability. Advances have not been so dramatic in the complex group of non-Hodgkin's lymphomas, but are still very real

The purpose of this design calculation is to revise and update the previous criticality calculation for the Aging Facility (documented in BSC 2004a). This design calculation will also demonstrate and ensure that the storage and aging operations to be performed in the Aging Facility meet the criticality safety design criteria in the ''Project Design Criteria Document'' (Doraswamy 2004, Section 4.9.2.2), and the functional nuclear criticality safety requirement described in the ''SNF Aging System Description Document'' (BSC [Bechtel SAIC Company] 2004f, p. 3-12). The scope of this design calculation covers the systems and processes for aging commercial spent nuclear fuel (SNF) and staging Department of Energy (DOE) SNF/High-Level Waste (HLW) prior to its placement in the final waste package (WP) (BSC 2004f, p. 1-1). Aging commercial SNF is a thermal management strategy, while staging DOE SNF/HLW will make loading of WPs more efficient (note that aging DOE SNF/HLW is not needed since these wastes are not expected to exceed the thermal limits for emplacement) (BSC 2004f, p. 1-2). The description of the changes in this revised document is as follows: (1) Include DOE SNF/HLW in addition to commercial SNF per the current ''SNF Aging System Description Document'' (BSC 2004f). (2) Update the evaluation of Category 1 and 2 event sequences for the Aging Facility as identified in the ''Categorization of Event Sequences for License Application'' (BSC 2004c, Section 7). (3) Further evaluate the design and criticality controls required for a storage/aging cask, referred to as MGR Site-specific Cask (MSC), to accommodate commercial fuel outside the content specification in the Certificate of Compliance for the existing NRC-certified storage casks. In addition, evaluate the design required for the MSC that will accommodate DOE SNF/HLW. This design calculation will achieve the objective of providing the criticality safety results to support the preliminary design of the Aging

Food consumption may account for upwards of 15% of U.S. per capita greenhouse gas (GHG) emissions. Online carbon calculators can help consumers prioritize among dietary behaviors to minimize personal 'carbon footprints' and to exert leverage on emissions-intensive industry practices. We reviewed the fitness of selected carbon calculators for measuring and communicating indirect GHG emissions from food consumption. Calculators were evaluated based on the scope of user behaviors accounted for, data sources, transparency of methods, consistency with prior data, and effectiveness of communication. We found food consumption was under-represented (25%) among general environmental impact calculators (n = 83). We identified eight carbon calculators that accounted for food consumption and included U.S. users among the target audience. Among these, meat and dairy consumption was appropriately highlighted as the primary diet-related contributor to emissions. Opportunities exist to improve upon these tools, including: expanding the scope of behaviors included in calculations; improving communication, in part by emphasizing the ecological and public health co-benefits of less emissions-intensive diets; and adopting more robust, transparent methodologies, particularly where calculators produce questionable emissions estimates. Further, all calculators could benefit from more comprehensive data on the U.S. food system. These advancements may better equip these tools for effectively guiding audiences toward ecologically responsible dietary choices. (author)

With the goal of harmonizing the calculation of maximum residue limits (MRLs) across member countries of the Organisation for Economic Co-operation and Development (OECD), the OECD has developed an MRL Calculator.

Smoke from raging fires produced in the aftermath of a major nuclear exchange has been predicted to cause large decreases in surface temperatures. However, the extent of the decrease, and even the sign of the temperature change, depend on how the smoke is distributed with altitude. We present a model capable of evaluating the initial distribution of lofted smoke above a massive fire. Calculations are shown for a two-dimensional slab version of the model and a full three-dimensional version. The model has been evaluated by simulating smoke heights for the Hamburg firestorm of 1943 and a smaller-scale oil fire which occurred in Long Beach in 1958. Our plume heights for these fires are compared to those predicted by the classical Morton-Taylor-Turner theory for weakly buoyant plumes. We consider the effect of the added buoyancy caused by condensation when water-laden ground-level air is carried to high altitude within the convection column, as well as the effects of background wind, on the calculated smoke plume heights for several fire intensities. We find that the rise height of the plume depends on the assumed background atmospheric conditions as well as on the fire intensity. Little smoke is injected into the stratosphere unless the fire is unusually intense or atmospheric conditions are more unstable than we have assumed. For intense fires, significant amounts of water vapor are condensed, raising the possibility of early scavenging of smoke particles by precipitation. 26 references, 11 figures

A new model for multigroup transport calculations based on a group-dependent spatial representation has been developed. The multilevel method takes advantage of the orthogonality of the energy and space operators, inherent to the structure of the linear transport equation, to decompose the energy domain into subdomains or levels, i.e., fast, epithermal and thermal, where suitable spatial approximations are used. The aim of the method is to allow for the use of larger mesh spacings at high neutron energies and, therefore, to cut down the computational cost while preserving the overall accuracy. The method can be easily implemented in today's standard transport codes by introducing small modifications in the computation of the multigroup external source. The multilevel model is of special interest for the calculation of media containing high thermal absorbers. A variant of this method, based on a nested, multilevel approximation, has been implemented in the APOLLO-II assembly transport code. Comparisons between the multilevel model and the usual multigroup approximation have been made for a PWR poisoned cell and for a thermal neutron barrier used to feed a molten FBR fuel sample. The results show that significant savings in computational times are obtained with the multilevel approximation. 10 refs

The greatest benefit of including leap years in the calculation is not increased precision, but showing students that a problem can be solved without such a simplifying assumption. A birthday problem is analyzed, showing that calculating a leap-year birthday probability is not a frivolous computation.
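The leap-year refinement can be made concrete. The sketch below is a generic illustration (not taken from the cited analysis): it compares the classic 365-day birthday-match probability with a version that models a four-year Julian cycle of 1461 days, so each ordinary date has probability 4/1461 and February 29 has probability 1/1461.

```python
from math import comb, factorial

def birthday_prob_365(n):
    """P(at least two of n people share a birthday), 365 equally likely days."""
    p_distinct = 1.0
    for k in range(n):
        p_distinct *= (365 - k) / 365
    return 1.0 - p_distinct

def birthday_prob_leap(n):
    """Same probability under a 4-year Julian cycle: ordinary dates have
    probability a = 4/1461, Feb 29 has probability b = 1/1461.
    P(all n distinct) = n! * e_n over the 366 day probabilities, which for
    two probability classes reduces to two binomial terms: at most one
    person can be born on Feb 29 if all birthdays are distinct."""
    a, b = 4 / 1461, 1 / 1461
    distinct = factorial(n) * (comb(365, n) * a**n
                               + comb(365, n - 1) * a**(n - 1) * b)
    return 1.0 - distinct
```

For 23 people the uniform model gives roughly 0.507; the leap-year model gives a slightly smaller value, shifting the answer by only a few parts in ten thousand, which supports the abstract's point that the exercise is about method rather than precision.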

The total penetrability through a three-humped fission barrier including vibrational damping is calculated by using an optical model for fission. Bondorf's stationary probability-current theory is used for transitions among class-1, class-2, and class-3 phases. A method to calculate the partial transmission coefficients is developed.

A code for calculating ultrasonic fields was developed by revising the thermal-hydraulics code STEALTH. This code may be used in a wide variety of situations in which a detailed knowledge of a propagating wave field is required. Among the potential uses are interpretation of pulse-echo or pitch-catch ultrasonic signals in complicated geometries; ultrasonic transducer modeling and characterization; optimization and evaluation of transducer design; optimization and reliability of inspection procedures; investigation of the response of different types of reflectors; flaw modeling; and general theoretical acoustics. The code is described, and its limitations and potential are discussed. A discussion of the required input and of the general procedures for running the code is presented. Three sample problems illustrate the input and the use of the code.

The present invention relates to a probe for determining an electrical property of an area of a surface of a test sample; the probe is intended to be in a specific orientation relative to the test sample. The probe may comprise a supporting body defining a first surface. A plurality of cantilever arms (12) may extend from the supporting body in co-planar relationship with the first surface. The plurality of cantilever arms (12) may extend substantially parallel to each other, and each of the plurality of cantilever arms (12) may include an electrically conductive tip for contacting the area..., with the plurality of cantilever arms (12) contacting the surface of the test sample when performing the movement...

Electromagnetic design calculation of the step-by-step magnetic-jacking control rod drive mechanism includes calculation of the magnetic force and design calculation of the magnetomotive force for the three electromagnets and their coils. The basic principle and method of the electromagnetic design calculation are expounded, taking the lift magnet and lift coil as an example.

The objective of this calculation is to determine the structural response of the standard high-level waste (HLW) canister and the canister containing the cans of immobilized plutonium (Pu) (''can-in-canister'' [CIC] throughout this document) subjected to drop DBEs (design basis events) during the handling operation. The evaluated DBE in the former case is a 7-m (23-ft) vertical (flat-bottom) drop. In the latter case, two 2-ft (0.61-m) corner (oblique) drops are evaluated in addition to the 7-m vertical drop. These Pu CIC calculations are performed at three different temperatures: room temperature (RT) (20 °C), T = 200 °F = 93.3 °C, and T = 400 °F = 204 °C; in addition, the calculation characterized by the highest maximum stress intensity is performed at T = 750 °F = 399 °C as well. The scope of the HLW canister calculation is limited to reporting the calculation results in terms of: stress intensity and effective plastic strain in the canister, directional residual strains at the canister outer surface, and change of canister dimensions. The scope of the Pu CIC calculation is limited to reporting the calculation results in terms of stress intensity and effective plastic strain in the canister. The information provided by the sketches from Reference 26 (Attachments 5.3, 5.5, 5.8, and 5.9) is that of the potential CIC design considered in this calculation, and all obtained results are valid for this design only. This calculation is associated with the Plutonium Immobilization Project and is performed by the Waste Package Design Section in accordance with Reference 24. It should be noted that the 9-m vertical drop DBE, included in Reference 24, is not included in the objective of this calculation since it did not become a waste acceptance requirement. AP-3.124, ''Calculations'', is used to perform the calculation and develop the document.

The linear-response conductance is calculated from the Green's function, which is represented in terms of a system-independent basis set containing wavelets with compact support. This allows us to rigorously separate the central region from the contacts and to test for convergence in a systematic way...

..., and is hampered by gaps in environmental exposure data, especially from industrializing countries. For these reasons, a recently calculated environmental BoD of 5.18% of the total DALYs is likely underestimated. We combined and extended cost calculations for exposures to environmental chemicals, including...... neurotoxicants, air pollution, and endocrine-disrupting chemicals, where sufficient data were available to determine dose-dependent adverse effects. Environmental exposure information allowed cost estimates for the U.S. and the EU and for OECD countries, though less comprehensively for industrializing countries...... is that they are available for few environmental chemicals and are primarily based on mortality and on the impact and duration of clinical morbidity, while less serious conditions are mostly disregarded. Our economic estimates based on available exposure information and dose-response data on environmental risk factors need to be seen...

The methodology presented in this document was developed to provide a means of calculating the RH ratios to use in developing useful graphic illustrations. The RH equation, as presented in this methodology, is primarily a collection of key factors relevant to understanding the hazards and risks associated with projected risk management activities. The RH equation has the potential for much broader application than generating risk profiles. For example, it can be used to compare one risk management activity with another, instead of just comparing it to a fixed baseline as was done for the risk profiles. If the appropriate source term data are available, it could be used in its non-ratio form to estimate absolute values of the associated hazards. These estimated values of hazard could then be examined to help understand which risk management activities are addressing the higher hazard conditions at a site. Graphics could be generated from these absolute hazard values to compare high-hazard conditions. If the RH equation is used in this manner, care must be taken to specifically define and qualify the estimated absolute hazard values (e.g., identify which factors were considered and which ones tended to drive the hazard estimation)

The detailed study of few-body systems provides one of the most precise tools for studying the dynamics of nuclei. Our research program consists of a careful theoretical study of the nuclear few-body systems. During the past year we have completed several aspects of this program. We have continued our program of using the trinucleon system to investigate the validity of various realistic nucleon-nucleon potentials. Also, the effects of meson-exchange currents in nuclear systems have been studied. Initial calculations using the configuration-space Faddeev equations for nucleon-deuteron scattering have been completed. With modifications to treat relativistic systems, few-body methods can be applied to phenomena that are sensitive to the structure of the individual hadrons. We have completed a review of Relativistic Hamiltonian Dynamics in Nuclear and Particle Physics for Advances in Nuclear Physics. Although it is called a review, it is a large document that contains a significant amount of new research

The 3-dimensional (3-D) calculation of the atmospheric neutrino flux by means of the FLUKA Monte Carlo model is here described in all details, starting from the latest data on primary cosmic ray spectra. The importance of a 3-D calculation and of its consequences has already been debated in a previous paper. Here instead the focus is on the absolute flux. We stress the relevant aspects of the hadronic interaction model of FLUKA in the atmospheric neutrino flux calculation. This model is constructed and maintained so as to provide a high degree of accuracy in the description of particle production. The accuracy achieved in the comparison with data from accelerators, cross-checked with data on particle production in the atmosphere, certifies the reliability of shower calculation in the atmosphere. The results presented here can already be used for analysis by current experiments on atmospheric neutrinos. However, they represent an intermediate step towards a final release, since this calculation does not yet include the...

The lung is a very complex immunologic organ and responds in a variety of ways to inhaled antigens, organic or inorganic materials, infectious or saprophytic agents, fumes, and irritants. There might be airways obstruction, restriction, neither, or both accompanied by inflammatory destruction of the pulmonary interstitium, alveoli, or bronchioles. This review focuses on diseases organized by their predominant immunologic responses, either innate or acquired. Pulmonary innate immune conditions include transfusion-related acute lung injury, World Trade Center cough, and acute respiratory distress syndrome. Adaptive immunity responses involve the systemic and mucosal immune systems, activated lymphocytes, cytokines, and antibodies that produce CD4(+) T(H)1 phenotypes, such as for tuberculosis or acute forms of hypersensitivity pneumonitis, and CD4(+) T(H)2 phenotypes, such as for asthma, Churg-Strauss syndrome, and allergic bronchopulmonary aspergillosis. Copyright 2010 American Academy of Allergy, Asthma & Immunology. Published by Mosby, Inc. All rights reserved.

Background Rating scales form an important means of gathering evaluation data. Since important decisions are often based on these evaluations, determining the reliability of rating data can be critical. Most commonly used methods of estimating reliability require a complete set of ratings, i.e. every subject being rated must be rated by each judge. Over fifty years ago Ebel described an algorithm for estimating the reliability of ratings based on incomplete data. While his article has been widely cited over the years, software based on the algorithm is not readily available. This paper describes an easy-to-use Web-based utility for estimating the reliability of ratings based on incomplete data using Ebel's algorithm. Methods The program is available for public use on our server and the source code is freely available under the GNU General Public License. The utility is written in PHP, a common open-source embedded scripting language. The rating data can be entered in a convenient format on the user's personal computer, and the program will upload them to the server for calculation of the reliability and other statistics describing the ratings. Results When the program is run it displays the reliability, the number of subjects rated, the harmonic mean number of judges rating each subject, and the mean and standard deviation of the averaged ratings per subject. The program also displays the mean, standard deviation, and number of ratings for each subject rated. Additionally, the program will estimate the reliability of an average of a number of ratings for each subject via the Spearman-Brown prophecy formula. Conclusion This simple Web-based program provides a convenient means of estimating the reliability of rating data without the need to conduct special studies in order to provide complete rating data. I would welcome other researchers revising and enhancing the program.
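The Spearman-Brown prophecy step mentioned above is a one-line formula; the sketch below is a generic illustration (the function name and example values are ours, not taken from the described utility):

```python
def spearman_brown(r_single, k):
    """Predicted reliability of the average of k ratings, given the
    reliability r_single of a single rating (Spearman-Brown prophecy):
        r_k = k * r_single / (1 + (k - 1) * r_single)
    """
    return k * r_single / (1 + (k - 1) * r_single)
```

For example, if a single judge's ratings have reliability 0.50, averaging two judges is predicted to yield 2 * 0.5 / (1 + 0.5) ≈ 0.67, and the predicted reliability rises monotonically with k.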

Adjoint Monte Carlo is the most efficient method for accurate analysis of space systems exposed to natural and artificially enhanced electron environments. Recent adjoint calculations for isotropic electron environments include: comparative data for experimental measurements on electronics boxes; benchmark problem solutions for comparing total dose prediction methodologies; preliminary assessment of sectoring methods used during space system design; and total dose predictions on an electronics package. Adjoint Monte Carlo, forward Monte Carlo, and experiment are in excellent agreement for electron sources that simulate space environments. For electron space environments, adjoint Monte Carlo is clearly superior to forward Monte Carlo, requiring one to two orders of magnitude less computer time for relatively simple geometries. The solid-angle sectoring approximations used for routine design calculations can err by more than a factor of 2 on dose in simple shield geometries. For critical space systems exposed to severe electron environments, these potential sectoring errors demand the establishment of large design margins and/or verification of shield design by adjoint Monte Carlo/experiment

The phonon dispersion relations for trigonal selenium have been calculated on the basis of a short-range potential field model. Electrostatic long-range forces have not been included. The force field is defined in terms of symmetrized coordinates which partly reflect the symmetry of the space group....... With such coordinates, a potential energy calculated with only a diagonal force matrix is equivalent to one calculated with both off-diagonal and diagonal elements when conventional coordinates are used. Another advantage is that often some force constants may be determined directly from frequencies at points of high...

We provide a tutorial introduction to the modern theoretical and computational schemes available to calculate the lattice thermal conductivity in a crystalline dielectric material. While some important topics in thermal transport will not be covered (including thermal boundary resistance, electronic thermal conduction, and thermal rectification), we aim at: (i) framing the calculation of thermal conductivity within the general non-equilibrium thermodynamics theory of transport coefficients, (ii) presenting the microscopic theory of thermal conduction based on the phonon picture and the Boltzmann transport equation, and (iii) outlining the molecular dynamics schemes to calculate heat transport. A comparative and critical addressing of the merits and drawbacks of each approach will be discussed as well.
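As a concrete anchor for point (ii), the single-mode relaxation-time closure of the phonon Boltzmann transport equation reduces the lattice thermal conductivity to a sum over phonon modes, kappa = (1/V) * sum_l C_l * v_l^2 * tau_l. The helper below is our own minimal sketch of that closed form; the one-mode numbers in the sanity check are purely illustrative, not material data.

```python
def kappa_rta(mode_heat_capacities, group_velocities, lifetimes, volume):
    """Relaxation-time-approximation lattice thermal conductivity,
        kappa = (1/V) * sum_over_modes C_l * v_l**2 * tau_l,
    with C_l the modal heat capacity (J/K), v_l the group-velocity
    component along the transport direction (m/s), tau_l the phonon
    lifetime (s), and V the crystal volume (m^3); kappa is in W/(m*K)."""
    return sum(c * v * v * t for c, v, t in
               zip(mode_heat_capacities, group_velocities, lifetimes)) / volume
```

One-mode sanity check: C = 2 J/K, v = 3 m/s, tau = 4 s, V = 6 m^3 gives kappa = 2 * 9 * 4 / 6 = 12 W/(m·K); in practice the sum runs over all phonon branches and wavevectors in the Brillouin zone.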

Consultants who calculate payback provide expertise and a second opinion to back up energy managers' proposals. They can lower the costs of an energy-management investment by making complex comparisons of systems and recommending the best system for a specific application. Examples of payback calculations include simple payback for a school system, a university, and a Disneyland hotel, as well as internal rate of return for a corporate office building and a chain of clothing stores. (DCK)
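The two figures of merit named above are easy to sketch. The code below is a generic illustration, not any consultant's actual model: simple payback ignores discounting, while the internal rate of return (IRR) is the discount rate at which the project's net present value vanishes, found here by bisection.

```python
def simple_payback(cost, annual_savings):
    """Years to recover an up-front cost, ignoring discounting."""
    return cost / annual_savings

def internal_rate_of_return(cash_flows, lo=0.0, hi=1.0, tol=1e-9):
    """IRR by bisection: the rate at which NPV(cash_flows) = 0.
    cash_flows[0] is the up-front cost (negative); later entries are
    yearly returns. Assumes npv(lo) > 0 > npv(hi), i.e. the IRR lies
    inside [lo, hi]."""
    def npv(rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For instance, a $50,000 retrofit saving $12,500 per year has a 4-year simple payback, while a $1,000 outlay returning $600 in each of two years has an IRR of about 13%, the kind of comparison the consultants above would run across candidate systems.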

Broyden's method, widely used in quantum chemistry electronic-structure calculations for the numerical solution of nonlinear equations in many variables, is applied in the context of the nuclear many-body problem. Examples include the unitary gas problem, the nuclear density functional theory with Skyrme functionals, and the nuclear coupled-cluster theory. The stability of the method, its ease of use, and its rapid convergence rates make Broyden's method a tool of choice for large-scale nuclear structure calculations
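For readers unfamiliar with the method, a minimal dense-matrix version of Broyden's "good" update is sketched below. This is a didactic toy under our own choices (identity initial Jacobian, no line search), far from the limited-memory variants used in production electronic-structure or coupled-cluster codes.

```python
import numpy as np

def broyden(f, x0, tol=1e-10, max_iter=100):
    """Solve f(x) = 0 by Broyden's 'good' method: take quasi-Newton steps
    with an approximate Jacobian B, updated after each step by the
    rank-one secant formula so that B_new @ dx = df."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(len(x))                     # crude initial Jacobian guess
    fx = f(x)
    for _ in range(max_iter):
        if np.linalg.norm(fx) < tol:
            break
        dx = np.linalg.solve(B, -fx)       # quasi-Newton step
        x_new = x + dx
        fx_new = f(x_new)
        B += np.outer(fx_new - fx - B @ dx, dx) / (dx @ dx)
        x, fx = x_new, fx_new
    return x

# Toy example: intersect the circle x^2 + y^2 = 4 with the line y = x.
root = broyden(lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[1] - v[0]]),
               [1.5, 1.5])
```

From this starting point the iteration converges to (sqrt(2), sqrt(2)) without ever evaluating an analytic Jacobian, which is the property that makes the method attractive when residual evaluations are expensive.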

We present here the methods we used to analyse the characteristic parameters of drift chambers. The algorithms to calculate the electric potential at any point for any drift chamber geometry are presented. We include a description of the programs used to calculate the electric field, the drift paths, the drift velocity, and the drift time. The results and the errors are discussed. (Author) 7 refs

The explanatory correlational research study examined the degree of the relationships between the three elements and teachers' implementation of calculators in the mathematics classroom. The three elements include teachers' attitude toward technology, teachers' instructional use of calculators, and teachers' attitude toward…

Medical apps are widely available, increasingly used by patients and clinicians, and are being actively promoted for use in routine care. However, there is little systematic evidence exploring possible risks associated with apps intended for patient use. Because self-medication errors are a recognized source of avoidable harm, apps that affect medication use, such as dose calculators, deserve particular scrutiny. We explored the accuracy and clinical suitability of apps for calculating medication doses, focusing on insulin calculators for patients with diabetes as a representative use for a prevalent long-term condition. We performed a systematic assessment of all English-language rapid/short-acting insulin dose calculators available for iOS and Android. Searches identified 46 calculators that performed simple mathematical operations using planned carbohydrate intake and measured blood glucose. While 59% (n = 27/46) of apps included a clinical disclaimer, only 30% (n = 14/46) documented the calculation formula. 91% (n = 42/46) lacked numeric input validation, 59% (n = 27/46) allowed calculation when one or more values were missing, 48% (n = 22/46) used ambiguous terminology, 9% (n = 4/46) did not use adequate numeric precision and 4% (n = 2/46) did not store parameters faithfully. 67% (n = 31/46) of apps carried a risk of inappropriate output dose recommendation that either violated basic clinical assumptions (48%, n = 22/46) or did not match a stated formula (14%, n = 3/21) or correctly update in response to changing user inputs (37%, n = 17/46). Only one app, for iOS, was issue-free according to our criteria. No significant differences were observed in issue prevalence by payment model or platform. The majority of insulin dose calculator apps provide no protection against, and may actively contribute to, incorrect or inappropriate dose recommendations that put current users at risk of both catastrophic overdose and more
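To make the tallied failure modes concrete, here is a deliberately defensive sketch of the simple bolus arithmetic these apps perform. The formula is the common textbook form (meal dose plus correction dose); all parameter names and example values are hypothetical, and this is an illustration of input validation, not clinical advice.

```python
def insulin_bolus(carbs_g, glucose_mgdl, carb_ratio, sensitivity, target=100.0):
    """Illustrative rapid-acting bolus estimate (units):
         dose = carbs_g / carb_ratio + (glucose_mgdl - target) / sensitivity
    carb_ratio in g/unit, sensitivity in (mg/dL)/unit. Inputs are
    validated and the result is floored at zero, addressing two failure
    modes the review found: missing numeric checks and outputs that
    violate basic clinical assumptions."""
    for name, value in [("carbs_g", carbs_g), ("glucose_mgdl", glucose_mgdl),
                        ("carb_ratio", carb_ratio), ("sensitivity", sensitivity)]:
        if not isinstance(value, (int, float)) or value != value:  # rejects NaN
            raise ValueError(f"{name} must be a number, got {value!r}")
    if carbs_g < 0 or glucose_mgdl <= 0 or carb_ratio <= 0 or sensitivity <= 0:
        raise ValueError("inputs out of plausible range")
    meal = carbs_g / carb_ratio
    correction = (glucose_mgdl - target) / sensitivity
    return max(0.0, meal + correction)
```

Note how small the defensive surface is: a handful of checks cover the numeric-validation and missing-value issues that 91% and 59% of the surveyed apps, respectively, failed to handle.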

The sensitivity of nonlinear responses associated with physical quantities governed by nonlinear differential systems can be studied using perturbation theory. The equivalence and formal differences between the differential and GPT formalisms are shown, and both are used for sensitivity calculations of transient problems in a typical PWR coolant channel. The results obtained are encouraging with respect to the potential of the method for thermal-hydraulics calculations normally performed for reactor design and safety analysis. (Author)

.... The vacancy migration energy for tungsten was calculated. The calculated value of 1.73 electron volts, together with experimental data, suggests that vacancies migrate in stage III recovery in tungsten...

This poster will demonstrate how EPA's National Stormwater Calculator works. The National Stormwater Calculator (SWC) estimates the amount of stormwater runoff generated from a site under different development and control scenarios over a long period of historical rainfall. The a...

In this work the neutronic calculation line used to design the CAREM reactor is described. A description of the codes used and the interfaces between the different programs is presented. Both the normal calculation line and the alternative or verification calculation line are included. The calculation line used to obtain the kinetics parameters (effective delayed-neutron fraction and prompt-neutron lifetime) is also included.

The reliability of OMEGA criticality calculations is shown by a comparison with calculations by the validated and widely used Monte Carlo code MCNP. The criticality of 16 assemblies with uranium as the fissionable material is calculated with the codes MCNP (Version 4A, ENDF/B-V cross sections), MCNP (Version 4B, ENDF/B-VI cross sections), and OMEGA. Identical calculation models are used for the three codes. The results are compared mutually and with the experimental criticality of the assemblies. (orig.)

For anisotropic magnetic material, the nonlinear magnetic characteristics of the material are described by magnetization curves for different magnetization directions. The paper presents a transient finite element calculation of the magnetic field in anisotropic magnetic material based on the measured magnetization curves for different magnetization directions. For verification of the calculation method, some results of the calculation are compared with measurements.

The Bushland Reference Evapotranspiration (ET) Calculator was developed at the USDA-ARS Conservation and Production Research Laboratory, Bushland, Texas, for calculating grass and alfalfa reference ET. It uses the ASCE Standardized Reference ET Equation for calculating reference ET at hourly and dai...

Models of stratospheric chemistry have been primarily directed toward an understanding of the behavior of stratospheric ozone. Initially this interest reflected the diagnostic role of ozone in the understanding of atmospheric transport processes. More recently, interest in stratospheric ozone has arisen from concern that human activities might affect the amount of stratospheric ozone, thereby affecting the ultraviolet radiation reaching the earth's surface and perhaps also affecting the climate with various potentially severe consequences for human welfare. This concern has inspired a substantial effort to develop both diagnostic and prognostic models of stratospheric ozone. During the past decade, several chemical agents have been determined to have potentially significant impacts on stratospheric ozone if they are released to the atmosphere in large quantities. These include oxides of nitrogen, oxides of hydrogen, chlorofluorocarbons, bromine compounds, fluorine compounds and carbon dioxide. In order to assess the potential impact of the perturbations caused by these chemicals, mathematical models have been developed to handle the complex coupling between chemical, radiative, and dynamical processes. Basic concepts in stratospheric modeling are reviewed

The purpose of this calculation is to establish appropriate and defensible waste-package radiation source terms for use in repository subsurface shielding design. This calculation supports the shielding design for the waste emplacement and retrieval system, and subsurface facility system. The objective is to identify the limiting waste package and specify its associated source terms including source strengths and energy spectra. Consistent with the Technical Work Plan for Subsurface Design Section FY 01 Work Activities (CRWMS M and O 2001, p. 15), the scope of work includes the following: (1) Review source terms generated by the Waste Package Department (WPD) for various waste forms and waste package types, and compile them for shielding-specific applications. (2) Determine acceptable waste package specific source terms for use in subsurface shielding design, using a reasonable and defensible methodology that is not unduly conservative. This calculation is associated with the engineering and design activity for the waste emplacement and retrieval system, and subsurface facility system. The technical work plan for this calculation is provided in CRWMS M and O 2001. Development and performance of this calculation conforms to the procedure, AP-3.12Q, Calculations

Bonner Spheres Spectrometry in its high-energy extended version is an established method to quantify neutrons at a wide energy range from several meV up to more than 1 GeV. In order to allow for quantitative measurements, the responses of the various spheres used in a Bonner Sphere Spectrometer (BSS) are usually simulated by Monte Carlo (MC) codes over the neutron energy range of interest. Because above 20 MeV experimental cross section data are scarce, intra-nuclear cascade (INC) and evaporation models are applied in these MC codes. It was suspected that this lack of data above 20 MeV may translate to differences in simulated BSS response functions depending on the MC code and nuclear models used, which in turn may add to the uncertainty involved in Bonner Sphere Spectrometry, in particular for neutron energies above 20 MeV. In order to investigate this issue in a systematic way, EURADOS (European Radiation Dosimetry Group) initiated an exercise where six groups having experience in neutron transport calcula...

Several methods of bone marrow dose calculation for photon irradiation were analyzed. After a critical analysis, the author proposes that the Instituto de Radioprotecao e Dosimetria/CNEN adopt Rosenstein's method for dose calculations in radiodiagnostic examinations and Kramer's method in the case of occupational irradiation. Eckerman and Simpson have verified that, for monoenergetic gamma emitters uniformly distributed within the bone mineral of the skeleton, the dose at the bone surface can be several times higher than the dose in the skeleton. Accordingly, it is also proposed to calculate tissue-air ratios for bone surfaces for several irradiation geometries and photon energies, to be included in Rosenstein's method for organ dose calculation in radiodiagnostic examinations. (Author)

This thesis provides the first approach to a systematic inclusion of leading-order gauge corrections in the ansatz of thermal leptogenesis. We have derived a complete expression for the integrated lepton number matrix including all resummations needed. For this purpose, a new class of diagrams has been introduced, the cylindrical diagram, which allows diverse investigations into the topic of leptogenesis, such as the case of resonant leptogenesis. After a brief introduction to the baryon asymmetry of the universe and a discussion of its most promising explanations, with their advantages and disadvantages, we present our framework of thermal leptogenesis. An effective model is described, together with the associated Feynman rules. The basis for using nonequilibrium quantum field theory is laid in chapter 3: first the main definitions of equilibrium thermal field theory are presented; afterwards we discuss the Kadanoff-Baym equations for systems out of equilibrium, using the example of the Majorana neutrino. These equations are solved in the context of leptogenesis in chapter 4. Since gauge corrections play a crucial role throughout this thesis, we also revisit the naive ansatz in which the free equilibrium propagator is replaced by propagators including thermal damping rates due to the Standard Model damping widths for the lepton and Higgs fields. It is shown that this leads to results comparable to the solutions of the Boltzmann equations for thermal leptogenesis. It thus becomes obvious that Standard Model corrections are not negligible for thermal leptogenesis and need to be included systematically from first principles. To achieve this, we discuss the calculation of ladder rung diagrams for Majorana neutrinos using the HTL and CTL approaches in chapter 5. All gauge corrections are included in this framework, and it thus becomes the basis for the following considerations

A detectable extraterrestrial civilization can be modeled as a series of successive regimes over time, each of which is detectable for a certain proportion of its lifecycle. This methodology can be utilized to produce an estimate for L. Potential components of L include quantity of fossil fuel reserves, solar energy potential, quantity of regimes over time, lifecycle patterns of regimes, the proportion of its lifecycle a regime is actually detectable, and downtime between regimes. Relationships between these components provide a means of calculating the lifetime of communicative species in a detectable state, L. An example of how these factors interact is provided, utilizing values that are reasonable given known astronomical data for components such as solar energy potential, while existing knowledge of the terrestrial case is used as a baseline for other components, including fossil fuel reserves, quantity of regimes over time, lifecycle patterns of regimes, the proportion of its lifecycle a regime is actually detectable, and gaps of time between regimes due to recovery from catastrophic war or resource exhaustion. A range of values is calculated for L when parameters are established for each component, so as to determine the lowest and highest values of L.

...roadmap for SETI research at the SETI Institute for the next few decades. Three different approaches were identified: 1) Continue the radio search: build an affordable array incorporating consumer-market technologies, expand the search frequency, and increase the target list to 100,000 stars. This array will also serve as a technology demonstration and enable the international radio astronomy community to realize an array a hundred times larger and capable (among other things) of searching a million stars. 2) Begin searches for very fast optical pulses from a million stars. 3) As Moore's Law delivers increased computational capacity, build an omnidirectional sky survey array capable of detecting strong, transient...
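As a toy illustration of how the regime-based components combine into an estimate of L, the total detectable time: every number and the simple bookkeeping below are our assumptions, not values from the article.

```python
# Hypothetical component values for a civilization modeled as successive regimes.
regimes = 5                      # quantity of regimes over time
mean_regime_lifetime = 400.0     # years per regime
detectable_fraction = 0.5        # proportion of a regime's lifecycle that is detectable
downtime_between = 100.0         # years of gap (recovery) between successive regimes

# Detectable time accumulates only while a regime is up and detectable;
# downtime between regimes contributes nothing to L.
L = regimes * mean_regime_lifetime * detectable_fraction
total_span = regimes * mean_regime_lifetime + (regimes - 1) * downtime_between
print(L, total_span)  # 1000.0 2400.0
```

Sweeping each component over its plausible range, as the article describes, would yield the lowest and highest values of L.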

An increased use of variable generation technologies such as wind power and photovoltaic generation can have important effects on system frequency performance during normal operation as well as during contingencies. The main reasons are the operational principles and inherent characteristics of these power plants, such as operation at the maximum power point and the lack of inertial response during power system imbalances. This has led to new challenges for Transmission System Operators in ensuring system security during contingencies. In this context, this paper proposes a Robust Unit Commitment including a set of additional frequency stability constraints. To do this, a simplified dynamic model of the initial system frequency response is used in combination with historical frequency nadir data from contingencies. The proposed approach is especially suitable for power systems with cost-based economic dispatch, like those in most Latin American countries. The study considers the Northern Interconnected System of Chile, a 50-Hz medium-size isolated power system. The results obtained were validated by means of dynamic simulations of different system contingencies.
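A hedged sketch of the kind of simplified initial frequency response model such constraints are built from: a swing-equation estimate of the rate of change of frequency and the nadir after a generation loss, with primary reserve ramping linearly. The model form and every parameter value below are illustrative assumptions, not the paper's.

```python
# Simplified swing-equation sketch of initial frequency response.
f0 = 50.0    # nominal frequency [Hz]
H = 4.0      # aggregated system inertia constant [s]
dP = 0.10    # lost generation, per unit of system base
c = 0.02     # primary-reserve ramp rate [pu/s]

# Inertial response: df/dt = -dP * f0 / (2H) until reserve catches up.
rocof = -dP * f0 / (2.0 * H)
# Reserve ramping at rate c covers the loss at t* = dP / c; integrating the
# swing equation up to t* gives the frequency nadir:
t_nadir = dP / c
f_nadir = f0 - f0 * dP ** 2 / (4.0 * H * c)
print(rocof, t_nadir, f_nadir)  # ≈ -0.625 Hz/s, 5 s, 48.44 Hz
```

A unit-commitment constraint would then require f_nadir to stay above an under-frequency load-shedding threshold for the credible contingencies.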

Australia has recently joined the CLIC collaboration: the enlargement will bring new expertise and resources to the project, and is especially welcome in the wake of CERN budget redistributions following the recent adoption of the Medium Term Plan. With the signing of a Memorandum of Understanding on 26 August 2010, the ACAS network (Australian Collaboration for Accelerator Science) became the 40th member of the multilateral CLIC collaboration, making Australia the 22nd country to join. "The new MoU was signed by the ACAS network, which includes the Australian Synchrotron and the University of Melbourne," explains Jean-Pierre Delahaye, CLIC Study Leader. "Thanks to their expertise, the Australian institutes will contribute greatly to the CLIC damping rings and the two-beam test modules." Institutes from any country wishing to join the CLIC collaboration are invited to assume responsibility o...

Ultrasound (US) has great potential as an outcome in rheumatoid arthritis trials for detecting bone erosions, synovitis, tendon disease, and enthesopathy. It has a number of distinct advantages over magnetic resonance imaging, including good patient tolerability and ability to scan multiple joints... in a short period of time. However, there are scarce data regarding its validity, reproducibility, and responsiveness to change, making interpretation and comparison of studies difficult. In particular, there are limited data describing standardized scanning methodology and standardized definitions of US... pathologies. This article presents the first report from the OMERACT ultrasound special interest group, which has compared US against the criteria of the OMERACT filter. Also proposed for the first time are consensus US definitions for common pathological lesions seen in patients with inflammatory arthritis...

The calculation of costs plays an increasingly large role in the decision-making processes of public sector human service organizations. This has brought scholars of management accounting to investigate the relationship between caring professions and demands to make economic entities of the service... on the idea that professions are hybrids, by introducing the notion of qualculation as an entry point to investigate decision-making in child protection work as an extreme case of calculating on the basis of elements other than quantitative numbers. The analysis reveals that it takes both calculation... arrangements that afford calculations of both qualitative measures of the individual case and distant accounting numbers...

...) Cash working capital. The average amount of investor-supplied capital needed to provide funds for a carrier's day-to-day interstate operations. Class A carriers may calculate a cash working capital... study or using the formula in paragraph (e) of this section, may calculate the cash working capital...

This supporting document has been prepared to make the Master Calculation List readily retrievable. The list gives the status of the calculation (as-built, not used, applied, etc.), the calculation title, its originator, comments, and report number under which it was issued. Tank 241-C-106 has been included on the High Heat Load Watch List

Species are experiencing a suite of novel stressors from anthropogenic activities that have impacts at multiple scales. Vulnerability assessment is one tool to evaluate the likely impacts that these stressors pose to species so that high-vulnerability cases can be identified and prioritized for monitoring, protection, or mitigation. Commonly used semi-quantitative methods lack a framework to explicitly account for differences in exposure to stressors and organism responses across life stages. Here we propose a modification to commonly used spatial vulnerability assessment methods that includes such an approach, using ocean acidification in the California Current as an illustrative case study. Life stage considerations were included by assessing vulnerability of each life stage to ocean acidification and were used to estimate population vulnerability in two ways. We set population vulnerability equal to: (1) the maximum stage vulnerability and (2) a weighted mean across all stages, with weights calculated using Lefkovitch matrix models. Vulnerability was found to vary across life stages for the six species explored in this case study: two krill (Euphausia pacifica and Thysanoessa spinifera), a pteropod (Limacina helicina), pink shrimp (Pandalus jordani), Dungeness crab (Metacarcinus magister), and Pacific hake (Merluccius productus). The maximum vulnerability estimates ranged from larval to subadult and adult stages, with no single stage consistently having maximum vulnerability across species. Similarly, integrated vulnerability metrics varied greatly across species. A comparison showed that some species had vulnerabilities that were similar between the two metrics, while other species' vulnerabilities varied substantially between the two metrics. These differences primarily resulted from cases where the most vulnerable stage had a low relative weight. We compare these methods and explore circumstances where each method may be appropriate.
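The two population-level metrics can be sketched in a few lines. The stage vulnerabilities and weights below are invented for illustration (in the study the weights come from Lefkovitch stage-structured matrix models); note how the two metrics diverge when the most vulnerable stage carries a low weight, as the abstract describes:

```python
import numpy as np

# Hypothetical per-stage vulnerabilities and stage weights.
stage_vuln = np.array([0.8, 0.4, 0.3, 0.5])   # larva, juvenile, subadult, adult
weights = np.array([0.1, 0.2, 0.3, 0.4])      # stage weights summing to 1

v_max = float(stage_vuln.max())           # metric 1: maximum stage vulnerability
v_weighted = float(stage_vuln @ weights)  # metric 2: weighted mean across stages
print(v_max, round(v_weighted, 2))  # 0.8 0.45
```

Here the larval stage drives metric 1 but contributes little to metric 2 because its weight is small.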

Self-consistent relativistic calculations of the electronic properties for seven actinides (Ac-Am) have been performed using the linear muffin-tin orbitals method within the atomic-sphere approximation. Exchange and correlation were included in the local spin-density scheme. The theory explains...

The computer code GENMOD was created to calculate the retention and excretion, and the integrated retention for selected radionuclides under a variety of exposure conditions. Since the creation of GENMOD new models have been developed and interfaced to GENMOD. This report describes the models now included in GENMOD, the dosimetry factors database, and gives a brief description of the GENMOD program

We show that the first two fixed-J moments of the Hamiltonian operator can easily be calculated over the whole fixed-particle-number shell model space as well as over configurations. The method may be extended to higher moments of H and to include the isotopic spin T

Importance sampling is essential to the timely solution of Monte Carlo nuclear-logging computer simulations. Achieving minimum variance (maximum precision) of a response in minimum computation time is one criterion for the choice of an importance function. Various methods for calculating importance functions are presented, new methods are investigated, and comparisons with porosity and density tools are shown. 5 refs., 1 tab
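As a stand-alone illustration of the importance-sampling idea itself (not the tool-specific importance functions above): estimate I = ∫₀¹ eˣ dx = e − 1 by sampling from a density q(x) = (1+x)/1.5 that roughly follows the shape of the integrand, weighting each sample by f(x)/q(x):

```python
import math
import random

random.seed(42)

def importance_estimate(n):
    """Estimate the integral of exp(x) on [0, 1] by importance sampling
    from q(x) = (1 + x) / 1.5, which concentrates samples where the
    integrand is large."""
    total = 0.0
    for _ in range(n):
        u = random.random()
        x = -1.0 + math.sqrt(1.0 + 3.0 * u)      # inverse CDF of q on [0, 1]
        total += math.exp(x) * 1.5 / (1.0 + x)   # f(x) / q(x)
    return total / n

print(importance_estimate(100_000))  # close to e - 1 ≈ 1.7183
```

Because the weight f/q varies much less than f itself, the estimator's variance is far below that of uniform sampling; choosing a good q is exactly the importance-function problem the abstract refers to.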

The CORD-2 package is designed to provide a modern, independent calculational tool for reactor core calculations. It provides options that are essential for modeling the advanced features of fuel assemblies. Its development is part of a wider effort to establish the country's own expertise in nuclear design and safety analysis. The package provides not only the calculational modules but also the data management support facilities. It has been implemented on VAX/VMS and on PC/DOS, but extension to other systems is quite straightforward. The main components and the calculational methods are briefly described, and the results of the validation programme are presented. They include the comparison of calculated results with measured values for ten cycles of the Krsko nuclear power plant and for the IAEA test case Almaraz, with special emphasis on the first cores at hot zero power conditions. The results of the validation programme show that CORD-2 is applicable to design-level PWR core calculations. (authors). 9 refs., 4 figs., 3 tabs

One of the most important factors in predicting the service life of different products is the choice of an adequate method. With the development of production technologies and of measuring devices of ever increasing precision, appropriate data can be obtained for use in analytic calculations. Several theoretical wear calculation methods can be found in the literature, but there is still no exact wear calculation model applicable to all wear processes, because of the variety of parameters involved when two or more surfaces wear against each other. Analysing the wear prediction theories, which can be classified into distinct groups, one finds that each has shortcomings that may affect the results, undermining the value of the theoretical calculations. The offered wear calculation method is based on theories from different branches of science. It includes a description of 3D surface micro-topography using standardized roughness parameters, explains the regularities of particle separation from the material during wear using fatigue theory, and takes into account the material's physical and mechanical characteristics and the specific conditions of the product's working life. The proposed wear calculation model could be of value for predicting the service life of sliding friction pairs, allowing the best technologies to be chosen for many mechanical components.

Background and objectives: The calculated panel reactive antibodies (cPRAs) necessary for kidney donor-pair exchange and highly sensitized programs are estimated using different panel reactive antibody (PRA) calculators, based on sufficiently large samples, on the Eurotransplant (EUTR), United Network for Organ Sharing (UNOS), and Canadian Transplant Registry (CTR) websites. However, those calculators can vary depending on the ethnicity to which they are applied. Here, we develop the PRA calculator used in the Spanish Program of Transplant Access for Highly Sensitized patients (PATHI) and validate it against the EUTR, UNOS, and CTR calculators. Methods: The anti-human leukocyte antigen (HLA) antibody profile of 42 sensitized patients on the waiting list was defined, and cPRA was calculated with the different PRA calculators. Results: Despite the different allelic frequencies derived from population differences in the donor panel behind each calculator, no differences in cPRA between the four calculators were observed. The PATHI calculator includes anti-DQA1 antibody profiles in the cPRA calculation; however, no improvement in the total cPRA of highly sensitized patients was demonstrated. Interpretation and conclusion: The PATHI calculator provides cPRA results comparable with those from the EUTR, UNOS, and CTR calculators and serves as a tool for developing valid calculators in geographical and ethnic areas outside Europe, the USA, and Canada.
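At its core, a cPRA calculator reports the fraction of a representative donor panel carrying at least one of the patient's unacceptable HLA antigens. A minimal sketch with an invented four-donor toy panel (real calculators use population-based haplotype frequencies, which is exactly why they can differ across ethnicities):

```python
# Toy donor panel: each donor is the set of HLA antigens they carry.
donor_panel = [
    {"A1", "A2", "B8", "DR3"},
    {"A2", "A3", "B7", "DR15"},
    {"A1", "A24", "B8", "DR4"},
    {"A3", "A11", "B35", "DR1"},
]

# Antigens against which the patient has antibodies (unacceptable antigens).
unacceptable = {"A2", "B35"}

# cPRA = percentage of donors with at least one unacceptable antigen.
incompatible = sum(1 for donor in donor_panel if donor & unacceptable)
cpra = 100.0 * incompatible / len(donor_panel)
print(cpra)  # 75.0
```

Three of the four toy donors carry A2 or B35, so this patient's cPRA against this panel is 75%.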

Neutron displacement damage-energy cross sections have been calculated for 41 isotopes in the energy range from 10⁻¹⁰ to 20 MeV. Calculations were performed on a 100-point energy grid using nuclear cross sections from ENDF/B-V and the DISCS computer code. Elastic scattering is treated exactly, including angular distributions from ENDF/B-V. Inelastic scattering calculations consider both discrete and continuous nuclear level distributions. Multiple (n,xn) reactions use a Monte Carlo technique to derive the recoil distributions. The (n,d) and (n,t) reactions are treated as (n,p), and (n,³He) as (n,⁴He). The (n,γ) reaction and subsequent β-decay are also included, using a new treatment of γ-γ coincidences, angular correlations, β-neutrino correlations, and the incident neutron energy. The Lindhard model was used to compute the energy available for nuclear displacement at each recoil energy. The SPECTER computer code has been developed to simplify damage calculations. The user need only specify a neutron energy spectrum; SPECTER will then calculate spectral-averaged displacements, recoil spectra, gas production, and total damage energy (kerma). The SPECTER computer code package is readily accessible to the fusion community via the National Magnetic Fusion Energy Computer Center (NMFECC) at Lawrence Livermore National Laboratory
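The spectral-averaging step SPECTER performs can be sketched directly: a spectrum-averaged cross section is ⟨σ⟩ = ∫σ(E)φ(E)dE / ∫φ(E)dE. The group structure and all values below are invented for illustration, not ENDF/B-V data:

```python
import numpy as np

E = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0])                   # energy grid [MeV]
sigma = np.array([200.0, 400.0, 600.0, 900.0, 1200.0, 1400.0])  # damage cross section (arb.)
phi = np.array([1.0, 2.0, 3.0, 2.0, 1.0, 0.5])                  # flux spectrum (arb.)

def trapz(y, x):
    """Trapezoidal integral of y(x) on an uneven grid."""
    return float(np.sum((y[:-1] + y[1:]) * np.diff(x)) / 2.0)

# Spectrum-averaged cross section: integral of sigma*phi over integral of phi.
avg_sigma = trapz(sigma * phi, E) / trapz(phi, E)
print(round(avg_sigma, 2))  # 944.44
```

The real code does this on its 100-point grid for each damage quantity (displacements, gas production, kerma) once the user supplies φ(E).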

... it takes about 2 hours for the adult body to completely break down a single drink. Do not drive after drinking. For comparison, regular beer is 5% alcohol by volume (alc/vol), table wine is about 12% alc/vol, and straight 80-proof distilled spirits is 40% alc/vol. The percent ...
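The arithmetic behind that comparison can be made explicit. Using the common U.S. convention that one "standard drink" contains about 0.6 fl oz (14 g) of pure alcohol (the convention, serving sizes, and function name are stated here as assumptions, not taken from the record):

```python
def standard_drinks(volume_floz, alc_vol_percent):
    """Number of standard drinks in a serving: pure alcohol content
    (volume * alc/vol) divided by 0.6 fl oz per standard drink."""
    return volume_floz * alc_vol_percent / 100.0 / 0.6

print(standard_drinks(12.0, 5.0))   # 12 oz regular beer      -> 1.0
print(standard_drinks(5.0, 12.0))   # 5 oz table wine         -> 1.0
print(standard_drinks(1.5, 40.0))   # 1.5 oz 80-proof spirits -> 1.0
```

Each quoted serving works out to the same 0.6 oz of pure alcohol, which is why they count as one drink apiece.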

The SRAC code system is utilized for core burn-up calculation of JRR-3. The SRAC code system includes calculation modules such as PIJ, PIJBURN, ANISN and CITATION for making effective cross sections, and calculation modules such as COREBN and HIST for core burn-up calculation. As for the calculation method for JRR-3, PIJBURN (the cell burn-up calculation module) is used for making effective cross sections of the fuel region at each burn-up step. PIJ, ANISN and CITATION are used for making effective cross sections of non-fuel regions. COREBN and HIST are used for core burn-up calculation and fuel management. This paper presents details of the JRR-3 core burn-up calculation. FNCA participating countries are expected to carry out core burn-up calculations of domestic research reactors with the SRAC code system by utilizing the information in this paper. (author)

This brief article describes risk calculators that are based on populations of Pakistani ethnicity, and can be used for risk stratification in Pakistani and other South Asian clinics. Covering the QRISK, QKidney, QThrombosis, QFracture and QCancer risk calculators, it uses examples to explain how these can be utilized for risk stratification.

Although methods for using ordinary least squares regression computer programs to calculate a ridge regression are available, the calculation of a stepwise ridge regression requires a special-purpose algorithm and computer program. The correct stepwise ridge regression procedure is given, and a parallel FORTRAN computer program is described.
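One classic way an OLS program can be made to compute a ridge solution (plausibly the kind of method the abstract alludes to, though it does not name one) is the data-augmentation trick: append √k·I rows to X and zeros to y, since minimizing ‖y − Xb‖² + k‖b‖² is exactly OLS on the augmented system. A sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=50)
k = 0.7  # ridge penalty

# Augmented-OLS route: any least-squares solver now returns the ridge solution.
X_aug = np.vstack([X, np.sqrt(k) * np.eye(3)])
y_aug = np.concatenate([y, np.zeros(3)])
b_ols_trick = np.linalg.lstsq(X_aug, y_aug, rcond=None)[0]

# Direct ridge solution for comparison: (X'X + kI)^-1 X'y
b_direct = np.linalg.solve(X.T @ X + k * np.eye(3), X.T @ y)
print(np.allclose(b_ols_trick, b_direct))  # True
```

The stepwise version is harder precisely because the augmentation must be managed consistently as variables enter and leave the model, which is why a special-purpose program is needed.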

A criticality scoping calculation was performed for a dissolver designed to dissolve HTGR fuels. The calculation shows the dissolver to go critical at an H/x (hydrogen-to-fuel ratio) of about 34 and peak with a k-effective of 1.18 at an H/x of about 180
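Locating the critical H/x from a tabulated k-effective curve is a simple interpolation. The curve below is shaped to match the quoted numbers (k = 1 near H/x ≈ 34, peak k ≈ 1.18 near H/x ≈ 180) but is otherwise invented:

```python
import numpy as np

h_over_x = np.array([10.0, 34.0, 100.0, 180.0, 400.0])  # hydrogen-to-fuel ratio
k_eff = np.array([0.80, 1.00, 1.12, 1.18, 1.05])        # illustrative k-effective

# First grid point where k_eff reaches 1.0 on the rising side, then
# linear interpolation back to the exact crossing:
i = int(np.argmax(k_eff >= 1.0))
x0, x1 = h_over_x[i - 1], h_over_x[i]
k0, k1 = k_eff[i - 1], k_eff[i]
critical_hx = x0 + (1.0 - k0) * (x1 - x0) / (k1 - k0)
print(critical_hx)  # ≈ 34.0
```

A scoping calculation like the one described would tabulate k-effective at a handful of H/x values and bracket the crossing this way before any finer analysis.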

Described are the procedures and techniques employed by B and W in core design analyses of power peaking, control rod worths, and reactivity coefficients. Major emphasis has been placed on current calculational tools and the most frequently performed calculations over the operating power range

Since late 1997, researchers at the Hanford Site have been engaged in the Groundwater Protection Project (formerly, the Groundwater/Vadose Zone Project), developing a suite of integrated physical and environmental models and supporting data to trace the complex path of Hanford legacy contaminants through the environment for the next thousand years, and to estimate corresponding environmental, human health, economic, and cultural risks. The linked set of models and data is called the System Assessment Capability (SAC). The risk mechanism for economics consists of "impact triggers" (sequences of physical and human behavior changes in response to, or resulting from, human health or ecological risks), and processes by which particular trigger mechanisms induce impacts. Economic impacts stimulated by the trigger mechanisms may take a variety of forms, including changes in either costs or revenues for economic sectors associated with the affected resource or activity. An existing local economic impact model was adapted to calculate the resulting impacts on output, employment, and labor income in the local economy (the Tri-Cities Economic Risk Model, or TCERM). The SAC researchers ran a test suite of 25 realization scenarios for future contamination of the Columbia River after site closure for a small subset of the radionuclides and hazardous chemicals known to be present in the environment at the Hanford Site. These scenarios of potential future river contamination were analyzed in TCERM. Although the TCERM model is sensitive to river contamination under a reasonable set of assumptions concerning reactions of the authorities and the public, the scenarios show low enough future contamination that the impacts on the local economy are small.

The evaluation and the analysis of high-altitude electromagnetic pulse response of shielded enclosures require the availability of software tools able to acquire data and calculate shielding effectiveness...

We present the Convergent Close-Coupling (CCC) theory for the calculation of electron-helium scattering. We demonstrate its applicability over a range of projectile energies from 1.5 to 500 eV for scattering from the ground state to n ≤ 3 states. Excellent agreement with experiment is obtained for the available differential, integrated, ionization, and total cross sections, as well as for the electron-impact coherence parameters up to and including the 3³D state excitation. Comparison with other theories demonstrates that the CCC theory is the only generally reliable method for the calculation of electron-helium scattering. (authors). 66 refs., 2 tabs., 24 figs

Transmission Pipeline Calculations and Simulations Manual is a valuable time- and money-saving tool to quickly pinpoint the essential formulae, equations, and calculations needed for transmission pipeline routing and construction decisions. The manual's three-part treatment starts with gas and petroleum data tables, followed by self-contained chapters concerning applications. Case studies at the end of each chapter provide practical experience for problem solving. Topics in this book include pressure and temperature profile of natural gas pipelines, how to size pipelines for specified f

Ab Initio Valence Calculations in Chemistry describes the theory and practice of ab initio valence calculations in chemistry and applies the ideas to a specific example, linear BeH2. Topics covered include the Schrödinger equation and the orbital approximation to atomic orbitals; molecular orbital and valence bond methods; practical molecular wave functions; and molecular integrals. Open shell systems, molecular symmetry, and localized descriptions of electronic structure are also discussed. This book is comprised of 13 chapters and begins by introducing the reader to the use of the Schrödinge

Previously, an analytical dose calculation algorithm for MLC-based radiotherapy was developed and commissioned, which includes a detailed model of various MLC effects as a unique feature [1]. The algorithm was originally developed as an independent verification of the treatment planning system's dose calculation, and it explicitly models spatial and depth-dependent MLC effects such as interleaf transmission, the tongue-and-groove effect, rounded leaf ends, MLC scatter, beam hardening, and the gradual fall-off of MLC transmission with increasing off-axis distance. Originally the algorithm was implemented in Mathematica (Wolfram). To speed up the calculation and to be able to calculate high-resolution 2D dose distributions within a reasonable time frame (<2 s), the algorithm needed to be optimized and embedded in a user-friendly environment. To achieve this goal, the dose calculation model was implemented in Visual Basic 6.0, which decreases the calculation time moderately. More importantly, the numerical algorithm was changed at two levels: the dose contributions are split into their x- and y-contributions, and the calculation is aperture-based rather than, as originally, point-based. With these three major changes, the calculation time is reduced considerably without losing accuracy. The time for a typical IMRT field with about 2500 calculation points decreased from 2387 seconds to 0.624 seconds (a factor of about 3800). The mean agreement of the optimized and unoptimized calculation algorithms at the isocenter, for a fairly complex IMRT plan with 23 fields, is better than 1% relative to the local dose at the measuring point. (orig.)
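The x/y splitting can be illustrated generically (this is not the authors' code): when a 2D contribution kernel is separable, K(y, x) = ky(y)·kx(x), a full 2D convolution collapses into two cheap 1D passes, which is where much of the speedup in such schemes comes from.

```python
import numpy as np

rng = np.random.default_rng(1)
fluence = rng.random((64, 64))        # stand-in for a 2D fluence map
kx = np.array([0.25, 0.5, 0.25])      # illustrative 1D kernel in x
ky = np.array([0.25, 0.5, 0.25])      # illustrative 1D kernel in y

# Separable version: convolve every row with kx, then every column with ky.
rows = np.apply_along_axis(lambda r: np.convolve(r, kx, mode="same"), 1, fluence)
separable = np.apply_along_axis(lambda c: np.convolve(c, ky, mode="same"), 0, rows)

# Direct 2D version for comparison: the full outer-product kernel at every point.
K = np.outer(ky, kx)
padded = np.pad(fluence, 1)
direct = np.zeros_like(fluence)
for i in range(fluence.shape[0]):
    for j in range(fluence.shape[1]):
        direct[i, j] = np.sum(padded[i:i + 3, j:j + 3] * K)

print(np.allclose(separable, direct))  # True
```

For an N×N grid and kernel width w, the direct form costs O(N²w²) while the two 1D passes cost O(N²w), a large saving for the kernel sizes dose models use.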

This paper describes a practical approach using a general purpose lumped-parameter computer program, GFSSP (Generalized Fluid System Simulation Program) for calculating flow distribution in a network of micro-channels including electro-viscous effects due to the existence of electrical double layer (EDL). In this study, an empirical formulation for calculating an effective viscosity of ionic solutions based on dimensional analysis is described to account for surface charge and bulk fluid conductivity, which give rise to electro-viscous effect in microfluidics network. Two dimensional slit micro flow data was used to determine the model coefficients. Geometry effect is then included through a Poiseuille number correlation in GFSSP. The bi-power model was used to calculate flow distribution of isotropically etched straight channel and T-junction microflows involving ionic solutions. Performance of the proposed model is assessed against experimental test data.

The development of vector computers requires that algorithms be modified into a form suitable for vector calculation. Among many algorithms, the particle code is a typical example whose performance has suffered on supercomputers owing to possible recurrent data access when collecting cell-wise quantities from particle quantities. In this article, we report a new method that frees the particle code from recurrent calculations. It should be noted, however, that the method may depend on the architecture of the supercomputer; it works well on FACOM VP-100 and VP-200, where indirect data accessing can be vectorized and is fast. (Mori, K.)
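The recurrence problem can be shown in a modern setting (this generic sketch is ours, not the FACOM-era method): depositing particle quantities onto cells naively updates the same cell repeatedly, which defeats vectorization, while a histogram-style formulation removes the recurrent access.

```python
import numpy as np

rng = np.random.default_rng(3)
n_cells = 8
cell_index = rng.integers(0, n_cells, size=1000)  # which cell each particle sits in
charge = rng.random(1000)                         # per-particle quantity to deposit

# Scalar (recurrent) version: the same cell may be updated on successive
# iterations, creating the dependency that blocks vectorization.
rho_scalar = np.zeros(n_cells)
for i, q in zip(cell_index, charge):
    rho_scalar[i] += q

# Vectorized version: a weighted histogram has no loop-carried dependency.
rho_vec = np.bincount(cell_index, weights=charge, minlength=n_cells)
print(np.allclose(rho_scalar, rho_vec))  # True
```

The same gather/histogram restructuring is still how particle-in-cell deposition loops are vectorized on SIMD and GPU hardware today.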

Practical Astronomy with your Calculator, first published in 1979, has enjoyed immense success. The author's clear and easy to follow routines enable you to solve a variety of practical and recreational problems in astronomy using a scientific calculator. Mathematical complexity is kept firmly in the background, leaving just the elements necessary for swiftly making calculations. The major topics are: time, coordinate systems, the Sun, the planetary system, binary stars, the Moon, and eclipses. In the third edition there are entirely new sections on generalised coordinate transformations, nutr
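A typical calculator routine from this genre is converting a Gregorian calendar date to a Julian day number, the starting point for most time and coordinate calculations the book covers. The sketch below uses the well-known Fliegel-Van Flandern integer algorithm (our choice of method; the book's own routine may differ):

```python
def julian_day_number(year, month, day):
    """Julian day number for a Gregorian calendar date (valid for noon UT
    on the given day), via the Fliegel-Van Flandern integer algorithm."""
    a = (14 - month) // 12
    y = year + 4800 - a
    m = month + 12 * a - 3
    return (day + (153 * m + 2) // 5 + 365 * y
            + y // 4 - y // 100 + y // 400 - 32045)

print(julian_day_number(2000, 1, 1))  # 2451545 (noon UT on J2000.0)
print(julian_day_number(1970, 1, 1))  # 2440588 (the Unix epoch)
```

From the Julian day number one derives sidereal time, and from that the coordinate transformations for the Sun, planets, and Moon.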

The relationship of sets of nuclear parameters and the macroscopic reactor quantities that can be calculated from them is examined. The framework of the study is similar to that of Usachev and Bobkov. The analysis is generalised and some properties required by common sense are demonstrated. The form of calculation permits revision of the parameter set. It is argued that any discrepancy between a calculation and measurement of a macroscopic quantity is more useful when applied directly to prediction of other macroscopic quantities than to revision of the parameter set. The mathematical technique outlined is seen to describe common engineering practice. (Author)

Newnes Circuit Calculations Pocket Book: With Computer Programs presents equations, examples, and problems in circuit calculations. The text includes 300 computer programs that help solve the problems presented. The book is comprised of 20 chapters that tackle different aspects of circuit calculation. The coverage of the text includes dc voltage, dc circuits, and network theorems. The book also covers oscillators, phasors, and transformers. The text will be useful to electrical engineers and other professionals whose work involves electronic circuitry.
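In the spirit of the book's worked problems, here is a small example in the network-theorems vein: reduce a source with a two-resistor divider to its Thevenin equivalent and compute the voltage across a load (component values are ours, chosen for round numbers):

```python
V = 12.0            # source voltage [V]
R1, R2 = 4.0, 8.0   # divider resistors [ohm]
RL = 8.0            # load resistor [ohm]

V_th = V * R2 / (R1 + R2)    # open-circuit (Thevenin) voltage across R2
R_th = R1 * R2 / (R1 + R2)   # source resistance seen by the load: R1 || R2
V_load = V_th * RL / (R_th + RL)
print(V_th, round(R_th, 3), round(V_load, 2))  # 8.0 2.667 6.0
```

The Thevenin reduction turns any further load-variation question into a single voltage-divider calculation, which is exactly the labor-saving point of the theorem.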

The Sandia Strehl Calculator is designed to calculate the Gibson and Lanni point spread function (PSF), Strehl ratio, and ensquared energy, allowing non-design immersion, coverslip, and sample layers. It also uses Abbe number calculations to determine the refractive index at specific wavelengths when given the refractive index at a different wavelength and the dispersion. The primary application of Sandia Strehl Calculator is to determine the theoretical impacts of using an optical microscope beyond its normal design parameters. Examples of non-design microscope usage include: a) using coverslips of non-design material b) coverslips of different thicknesses c) imaging deep into an aqueous sample with an immersion objective d) imaging a sample at 37 degrees. All of these changes can affect the imaging quality, sometimes profoundly, but are at the same time non-design conditions employed not infrequently. Rather than having to experimentally determine whether the changes will result in unacceptable image quality, Sandia Strehl Calculator uses existing optical theory to determine the approximate effect of the change, saving the need to perform experiments.
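The Abbe-number step can be sketched as follows: given the index n_d at the helium d line and the Abbe number V_d = (n_d − 1)/(n_F − n_C), fit a two-term Cauchy model n(λ) = A + B/λ² and evaluate it at other wavelengths. The Cauchy form is our illustrative choice; the actual code's dispersion model may differ.

```python
# Fraunhofer line wavelengths [nm]: d (He), F (H-beta), C (H-alpha).
LAMBDA_D, LAMBDA_F, LAMBDA_C = 587.6, 486.1, 656.3

def cauchy_from_abbe(n_d, V_d):
    """Return n(lambda) = A + B/lambda^2 fitted so that n(587.6 nm) = n_d
    and n_F - n_C = (n_d - 1)/V_d, i.e. consistent with the Abbe number."""
    B = (n_d - 1.0) / V_d / (1.0 / LAMBDA_F**2 - 1.0 / LAMBDA_C**2)
    A = n_d - B / LAMBDA_D**2
    return lambda lam: A + B / lam**2

n = cauchy_from_abbe(1.5168, 64.17)   # BK7-like glass
print(round(n(587.6), 4))  # 1.5168 (recovers n_d by construction)
```

With such a function for each layer (immersion medium, coverslip, sample), the refractive indices entering the Gibson-Lanni PSF can be evaluated at the actual imaging wavelength.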

Background: The ability to perform drug calculations accurately is imperative to patient safety. Research into paramedics' drug calculation abilities was first published in 2000, and for nurses' abilities the research dates back to the late 1930s. Yet there have been no studies investigating undergraduate paramedic students' ability to perform drug or basic mathematical calculations. The objective of this study was to review the literature and determine the ability of undergraduate and qualified paramedics to perform drug calculations. Methods: A search of the prehospital-related electronic databases was undertaken using the Ovid and EMBASE systems available through the Monash University Library. Databases searched included the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, CINAHL, JSTOR, EMBASE and Google Scholar, from their beginning until the end of August 2009. We reviewed references from the articles retrieved. Results: The electronic database search located 1,154 articles for review. Six additional articles were identified from the reference lists of retrieved articles. Of these, 59 were considered relevant. After reviewing the 59 articles, only three met the inclusion criteria. All articles noted some level of mathematical deficiency amongst their subjects. Conclusions: This study identified only three articles. Results from these limited studies indicate a significant lack of mathematical proficiency amongst the paramedics sampled. A need exists to identify whether undergraduate paramedic students are capable of performing the required drug calculations in a non-clinical setting. [WestJEM. 2009;10:240-243.]
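The drug calculations at issue are typically simple proportional arithmetic. As a hedged illustration (the formula below is the standard "desired dose over stock strength" rule, not one taken from the reviewed studies), the volume to administer can be computed as:

```python
def dose_volume_ml(required_mg, stock_mg, stock_volume_ml):
    """Volume to draw up: (required dose / stock strength) * stock volume."""
    if stock_mg <= 0 or stock_volume_ml <= 0:
        raise ValueError("stock strength and volume must be positive")
    return (required_mg / stock_mg) * stock_volume_ml

# Example: 50 mg required, from an ampoule of 100 mg in 2 mL -> 1.0 mL
volume = dose_volume_ml(50, 100, 2)
```

Errors in exactly this kind of ratio arithmetic are what the reviewed studies tested for.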


A method recently developed by the authors allows efficient calculation of the periodic forced response to be performed for bladed discs with arbitrary nonlinearities, including friction contacts and gaps...

This paper describes a computer program that calculates National Fire Danger Rating indexes. Fuel moisture, buildup index, and drying factor are also available. The program is written in FORTRAN and is usable on even the smallest compiler.

A comparison is made between the theories of Bell and Leinaas and of Derbenev and Kondratenko for the spin polarization in electron storage rings. A calculation of polarization in HERA using the program SMILE of Mane is presented.

This mixed-method study used a pre-test/post-test design to evaluate the efficacy of a teaching strategy in improving beginning nursing students' learning outcomes. During a 4-week student teaching period, a convenience sample of 54 sophomore-level nursing students were required to complete calculation assignments, were taught one calculation method, and were mandated to attend medication calculation classes. These students completed pre- and post-math tests and a major medication mathematics exam. Scores from the intervention student group were compared to those achieved by the previous sophomore class. Results demonstrated a statistically significant improvement from pre- to post-test, and the students who received the intervention had statistically significantly higher scores on the major medication calculation exam than did the students in the control group. The evaluation completed by the intervention group showed that the students were satisfied with the method and outcome.

A simple projectional technique combined with an equally simple parametric representation of the transient part of the neutron total flux is proposed for an elementary straightforward calculation of the extrapolation distance in diffusing media. (author)

Numerical studies of ELM stability and non-axisymmetric field penetration in diverted DIII-D and NSTX equilibria are presented, with resistive and finite Larmor radius effects included. These results are obtained with the nonlinear two-fluid code M3D-C1, which has recently been extended to allow linear non-axisymmetric calculations. Benchmarks of M3D-C1 with ideal codes ELITE and GATO show good agreement for the linear stability of peeling-ballooning modes in the ideal limit. New calculations of the resistive stability of ideally stable DIII-D equilibria are presented. M3D-C1 has also been used to calculate the linear response to non-axisymmetric external fields; these calculations are benchmarked with Surfmn and MARS-F. New numerical methods implemented in M3D-C1 are presented, including the treatment of boundary conditions with C^1 elements in a non-rectangular mesh.

Ab initio Hartree-Fock (HF) and Density Functional Theory (DFT) methods were used to calculate the optical rotation of 26 chiral compounds. The effects of the level of theory and basis sets used for the calculation, and the influence of solvents on the geometry and on the calculated optical rotation values, are all discussed. Including the polarizable continuum model in the calculation did not improve the accuracy effectively, but it was superior to γs. The optical rotation of five- or six-membered cyclic compounds was calculated, and the 17 pyrrolidine or piperidine derivatives calculated by the HF and DFT methods gave acceptable predictions. The nitrogen atom affects the calculation results dramatically, and its presence in the molecular structure is necessary to obtain an accurate computational result: when the nitrogen atom in the ring was replaced by an oxygen atom, the calculation results deteriorated.

Motivated by earlier work on a Richardson's extrapolation spreadsheet calculator (up to level 4) for approximating definite differentiation, we have developed a Romberg integration spreadsheet calculator to approximate definite integrals. The main feature of this version of the spreadsheet calculator is a friendly graphical user interface developed to capture the information needed to solve the integral by the Romberg method. Users simply enter the variable in the integral, the function to be integrated, and the lower and upper limits of the integral, select the desired accuracy of computation, select the exact function if it exists, and lastly click the Compute button, which is associated with VBA programming written to compute the Romberg integral table. The full solution of the Romberg integral table up to any level can be obtained quickly and easily using this method. The attached spreadsheet calculator together with this paper helps educators prepare their marking schemes easily and assists students in checking their answers instead of reconstructing them from scratch. A summative evaluation of this Romberg Spreadsheet Calculator has been conducted with a sample of 36 students. The data were collected using a questionnaire. The findings showed that the majority of the students agreed that the Romberg Spreadsheet Calculator provides a structured learning environment that allows learners to be guided through a step-by-step solution.
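The Romberg table the spreadsheet builds can be sketched in a few lines. This is a generic textbook implementation (the calculator's actual VBA code is not reproduced here): each row halves the trapezoid step, and Richardson extrapolation fills in the rest of the row.

```python
import math

def romberg(f, a, b, levels=5):
    """Build the Romberg table R[i][j]: row i refines the trapezoid rule
    with step (b-a)/2**i, column j applies Richardson extrapolation."""
    R = [[0.0] * (levels + 1) for _ in range(levels + 1)]
    h = b - a
    R[0][0] = 0.5 * h * (f(a) + f(b))          # level 0: one trapezoid
    for i in range(1, levels + 1):
        h /= 2.0
        # composite trapezoid, reusing R[i-1][0] and adding new midpoints only
        new_points = sum(f(a + (2 * k - 1) * h) for k in range(1, 2 ** (i - 1) + 1))
        R[i][0] = 0.5 * R[i - 1][0] + h * new_points
        # Richardson extrapolation across the row
        for j in range(1, i + 1):
            R[i][j] = R[i][j - 1] + (R[i][j - 1] - R[i - 1][j - 1]) / (4 ** j - 1)
    return R

# Example: integral of sin(x) over [0, pi] is exactly 2
approx = romberg(math.sin, 0.0, math.pi, levels=5)[-1][-1]
```

The bottom-right table entry is the most accurate estimate, which is the value such a spreadsheet would report at the selected level.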

Abacus experts have demonstrated an extraordinary capacity for mental calculation by using an imaginary abacus, but the neural correlates of abacus mental calculation and of the imaginary abacus remain unclear. Here we report analyses of fMRI images of abacus experts and non-experts performing simple and complex serial calculation under visual stimuli, as well as images of the abacus experts performing the same tasks under auditory stimuli. We found that the activated areas were quite different between the two groups. In experts, enhanced activations were mainly observed in a fronto-temporal circuit (lateral premotor cortex (LPMC) and posterior temporal areas) in the simple addition task, but in a fronto-parietal circuit (LPMC and posterior superior parietal lobe (PSPL)) in the complex one. By contrast, in controls, the activated areas were almost the same in both simple and complex tasks, including the bilateral inferior parietal lobule and the prefrontal and premotor cortices. Furthermore, visual and auditory stimuli generated almost identical activations in experts. These observations reveal that (1) abacus mental calculation induces special patterns of brain response, and simple and complex tasks are sustained by dissociated brain circuits in the temporal and parietal cortices, respectively; (2) abacus mental calculation may rely on neural resources of visuospatial representations with a super-modal form of abacus beads; and (3) the posterior temporal areas and the PSPL may be recruited for the imaginary abacus.

In the Standard Solar Model, a central role in the nucleosynthesis is played by reactions of the kind ${}^{A_1}_{Z_1}X_1 + {}^{A_2}_{Z_2}X_2 \to {}^{A_1+A_2}_{Z_1+Z_2}Y + \gamma$, which enter the proton-proton chains. These reactions can also be studied through the inverse photodisintegration reaction. One option is to use the Lorentz Integral Transform approach, which transforms the continuum problem into a bound-state-like one. A way to check the reliability of such methods is a direct calculation, for example using the Kohn Variational Principle to obtain the scattering wave function and then directly calculate the response function of the reaction.

The integrity of a nuclear power plant during a postulated seismic event is required to protect the public against radiation. Therefore, a detailed set of seismic analyses of various structures and equipment is performed while designing a nuclear power plant. This report describes the structural response analysis method, including the structural model, soil-structure interaction as it relates to structural models, methods for seismic structural analysis, numerical integration methods, methods for non-seismic response analysis approaches for various response combinations, structural damping values, nonlinear response, uncertainties in structural properties, and structural response analysis using random properties. The report describes the state-of-the-art in these areas for nuclear power plants. It also details the past studies made at Sargent and Lundy to evaluate different alternatives and the conclusions reached for the specific purposes that those studies were intended. These results were incorporated here because they fall into the general scope of this report. The scope of the present task does not include performing new calculations

A new formulation is presented in this paper for the calculation of reactivity, which is simpler than the formulation that uses the Laplace and Z transforms. A treatment is also made to reduce the intensity of the noise found in the nuclear power signal used in the calculation of reactivity; two different classes of filters are used for that. This treatment is based on the fact that the reactivity can be written, using the composite Simpson's rule, as a sum of two convolution terms with the impulse response characteristic of a linear system. The linear part is calculated using a finite impulse response (FIR) filter. The non-linear part is calculated using a filter exponentially adjusted by the least squares method, which does not cause attenuation in the reactivity calculation.
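The composite Simpson's rule underlying that formulation is standard quadrature. A minimal sketch (illustrative only; the paper's actual split of the reactivity integral into FIR and exponential convolution parts is not reproduced):

```python
def composite_simpson(f, a, b, n):
    """Composite Simpson's rule on [a, b] with n (even) subintervals:
    weights 1, 4, 2, 4, ..., 2, 4, 1 times h/3."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))  # odd nodes
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))            # even nodes
    return s * h / 3

# Simpson's rule is exact for polynomials up to degree 3,
# e.g. the integral of x**3 over [0, 1] is exactly 0.25
value = composite_simpson(lambda x: x ** 3, 0.0, 1.0, 4)
```

Applied to the power-history integrand, each Simpson sum over past samples becomes one of the discrete convolution terms the abstract refers to.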


In applying atomic physics to problems of plasma diagnostics, it is necessary to determine some atomic characteristics, including energies and transition probabilities, for very many atoms and ions. Development of general codes for the calculation of many types of atomic characteristics has been based on general but comparatively simple approximate methods. The program ATOM represents an attempt at effective use of such a general code. This report gives a brief description of the methods used, and the possibilities of and limitations to the code are discussed. Characteristics of the following processes can be calculated by ATOM: radiative transitions between discrete levels, radiative ionization and recombination, collisional excitation and ionization by electron impact, collisional excitation and ionization by point heavy particles (Born approximation only), dielectronic recombination, and autoionization. ATOM employs the Born (for z=1) or Coulomb-Born (for z>1) approximation; in both cases exchange and normalization can be included. (N.K.)

The flow prediction involves the use of the three-dimensional Navier-Stokes solver N3S-NATUR. This compressible, turbulent finite volume / finite element code is able to perform multi-domain steady, and now unsteady, calculations through the external coupling module CALCIUM, which is based on PVM (Parallel Virtual Machine). The originality of the method is the use of this external coupling module in such a way that each domain is computed with its own N3S code. Of course, a turbomachinery stage flow is always unsteady because of the rotor. For steady computations, the principal assumption is that the flow is steady relative to each domain individually and that the domains can communicate via mixing planes. These planes introduce circumferential averaging of the flow properties but preserve quite general radial variations. For unsteady calculations, the same method is used but without any circumferential averaging. Here, two fixed and three rotating blades are taken into account, which involves the use of five different N3S-NATUR codes (one code for each blade). Of course, in order to perform such a calculation without any hypothesis, all the blades have to be modelled. Such a calculation has in fact been done for a turbine stage of 23 fixed and 37 rotating blades (the VEGA 2 turbine). In order to perform it in a realistic time, 60 processors of a parallel-architecture computer are used. (authors)

This document serves as a user manual for the Packaged rooftop air conditioners and heat pump units comparison calculator (RTUCC) and is an aggregation of the calculator’s website documentation. Content ranges from new-user guide material like the “Quick Start” to the more technical/algorithmic descriptions of the “Methods Pages.” There is also a section listing all the context-help topics that support the features on the “Controls” page. The appendix has a discussion of the EnergyPlus runs that supported the development of the building-response models.

A suite of Monte Carlo codes is being developed for use on a routine basis in commercial reactor shield design. The methods adopted for this purpose include the modular construction of codes, simplified geometries, automatic variance reduction techniques, continuous energy treatment of cross section data, and albedo methods for streaming. Descriptions are given of the implementation of these methods and of their use in practical calculations. 26 references. (U.S.)

The Time Dependent Neutronics and Temperatures (TINTE) code system deals with the nuclear and the thermal transient behaviour of the primary circuit of the High-temperature Gas-cooled Reactor (HTGR), taking into consideration the mutual feedback effects in twodimensional axisymmetric geometry. This document contains a complete description of the theoretical basis of the TINTE nuclear calculation, including the equations solved, solution methods and the nuclear data used in the solution. (orig.)

The article below describes the history of thick plate calculation in Romania and its impact and recognition by the Department of Defense ("DoD"), an executive department of the Government of the United States of America. The DoD has three subordinate departments: Army, Navy and Air Force. In addition, there are many Defense Agencies, such as the Defense Advanced Research Projects Agency, and schools, including the National Defense University [1].

The nuclear performance of the ELMO Bumpy Torus Reactor reference design has been calculated using the one-dimensional discrete ordinates code ANISN and the latest available ENDF/B-IV transport cross-section data and nuclear response functions. The calculated results include estimates of the spatial and integral heating rate, with emphasis on the recovery of fusion neutron energy in the blanket assembly and minimization of the energy deposition rates in the cryogenic magnet coil assemblies. The tritium breeding ratio in the natural lithium-laden blanket was calculated to be 1.29 tritium nuclei per incident neutron. The radiation damage in the reactor structural material and in the magnet assembly is also given.

The radiation transport methodology is evaluated by comparing calculated reaction rates and dose rates for neutrons and gamma rays with experimental measurements obtained on an iron shield irradiated in the YAYOI reactor. The ENDF/B-IV and VITAMIN-C libraries and the AMPX-II modular system were used to generate cross sections, which were collapsed by the ANISN code. The transport calculations were made using the DOT 3.5 code, adjusting the boundary source spectrum of the iron shield to the reaction and dose rates measured at the front of the shield. The neutron and gamma-ray distributions calculated in the iron shield show reasonable agreement with the experimental measurements. An experimental arrangement using the IEA-R1 reactor to establish a shielding benchmark is proposed. (Author)

Background/Aim: Prediction tools are increasingly used to inform patients about future dental health outcomes. Advanced statistical methods are required to arrive at unbiased predictions based on follow-up studies. Material and Methods: The Internet risk calculator at the Dental Trauma Guide … in the period between 1972 and 1991. Subgroup analyses and estimates of event probabilities were based on the Kaplan-Meier and the Aalen-Johansen methods. Results: The Internet risk calculator shows individualized prognoses for the short- and long-term healing outcome of traumatized teeth with the following … were based on the tooth's root development stage and other risk factors at the time of the injury. Conclusions: This article explains the data base, the functionality and the statistical approach of the Internet risk calculator.

Preliminary calculations suggest that heterogeneous reactions are important in calculating the impact on ozone from emissions of trace gases from aircraft fleets. In this study, three heterogeneous chemical processes that occur on background sulfuric acid aerosols are included, and their effects on O3, NOx, Clx, HCl, N2O5, and ClONO2 are calculated.


Calculations have been done for the spherical nuclei 40Ca and 208Pb, for the hypothetical superheavy nucleus with Z=114, A=298, and for the deformed nucleus 168Yb. The temperature T was varied from zero up to 5 MeV. For T>3 MeV, some numerical problems arise in connection with the optimization of the basis when calculating deformed nuclei. However, at these high temperatures the occupation numbers in the continuum are sufficiently large that the nucleus starts evaporating particles and no equilibrium state can be described. Results are obtained for excitation energies and entropies. (Auth.)
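The temperature dependence of the occupation numbers referred to above follows the Fermi-Dirac form. A minimal sketch (the chemical potential and energy values below are illustrative assumptions, not numbers from the paper) shows how raising T from 1 to 5 MeV pushes occupation into high-lying states:

```python
import math

def fermi_occupation(energy_mev, mu_mev, temperature_mev):
    """Fermi-Dirac occupation f = 1 / (1 + exp((e - mu) / T)),
    with energies and temperature in MeV."""
    return 1.0 / (1.0 + math.exp((energy_mev - mu_mev) / temperature_mev))

# A state well above an assumed chemical potential of -8 MeV:
occ_low = fermi_occupation(5.0, -8.0, 1.0)   # T = 1 MeV: essentially empty
occ_high = fermi_occupation(5.0, -8.0, 5.0)  # T = 5 MeV: appreciably occupied
```

At T = 5 MeV the continuum states acquire non-negligible occupation, which is exactly why the abstract notes that the nucleus starts evaporating particles.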

Friction and Wear: Calculation Methods provides an introduction to the main theories of a new branch of mechanics known as "contact interaction of solids in relative motion." This branch is closely bound up with other sciences, especially physics and chemistry. The book analyzes the nature of friction and wear, and some theoretical relationships that link the characteristics of the processes and the properties of the contacting bodies essential for practical application of the theories in calculating friction forces and wear values. The effect of the environment on friction and wear is a

The lattice formulation of quantum gauge theories is discussed as a viable technique for quantitative studies of nonperturbative effects in QCD. Evidence is presented to ascertain that whole classes of lattice actions produce a universal continuum limit. Discrepancies between numerical results from Monte Carlo simulations for the pure gauge system and for the system with gauge and quark fields are discussed. Numerical calculations for QCD require very substantial computational resources. The use of powerful vector processors or special-purpose machines to extend the scope and magnitude of the calculations is considered, and one may reasonably expect that in the near future good quantitative predictions will be obtained for QCD.

Restricted Hartree-Fock and multi-configurational self-consistent-field calculations together with second-order perturbation theory have been used to study the geometry, the electron density, and the electronic spectrum of (VO2SO4)-. A bidentate sulphate attachment to vanadium was found to be stable … with an O-V-O angle of 72.5 degrees. The calculated spectrum shows bands in reasonable agreement with an experimental spectrum which has been attributed to (VO2SO4)-. The geometry and the electron density for two binuclear vanadium complexes proposed as intermediates in the vanadium-catalyzed SO2 …

Ab initio calculations of Raman differential intensities are presented at the self-consistent field (SCF) level of theory. The electric dipole-electric dipole, electric dipole-magnetic dipole and electric dipole-electric quadrupole polarizability tensors are calculated at the frequency of the incident light, using SCF linear response theory. London atomic orbitals are employed, imposing gauge origin invariance on the calculations. Calculations have been carried out in the harmonic approximation for CFHDT and methyloxirane.

Calculations in Fundamental Physics, Volume I: Mechanics and Heat focuses on the mechanisms of heat. The manuscript first discusses motion, including parabolic, angular, and rectilinear motions, relative velocity, acceleration of gravity, and non-uniform acceleration. The book then discusses combinations of forces, such as polygons and resolution, friction, center of gravity, shearing force, and bending moment. The text looks at force and acceleration, energy and power, and machines. Considerations include momentum, horizontal or vertical motion, work and energy, pulley systems, gears and chains.

First-principles calculations can be a powerful predictive tool for studying, modeling and understanding the fundamental scattering mechanisms impacting carrier transport in materials. In the past, calculations have provided important qualitative insights, but numerical accuracy has been limited due to computational challenges. In this talk, we will discuss some of the challenges involved in calculating electron-phonon scattering and carrier mobility, and outline approaches to overcome them. Topics will include the limitations of models for electron-phonon interaction, the importance of grid sampling, and the use of Gaussian smearing to replace energy-conserving delta functions. Using prototypical examples of oxides that are of technological importance (SrTiO3, BaSnO3, Ga2O3, and WO3), we will demonstrate computational approaches to overcome these challenges and improve the accuracy. One approach that leads to a distinct improvement in the accuracy is the use of analytic functions for the band dispersion, which allows for an exact solution of the energy-conserving delta function. For select cases, we also discuss direct quantitative comparisons with experimental results. The computational approaches and methodologies discussed in the talk are general and applicable to other materials, and greatly improve the numerical accuracy of the calculated transport properties, such as carrier mobility, conductivity and Seebeck coefficient. This work was performed in collaboration with B. Himmetoglu, Y. Kang, W. Wang, A. Janotti and C. G. Van de Walle, and supported by the LEAST Center, the ONR EXEDE MURI, and NSF.
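The Gaussian-smearing replacement for the energy-conserving delta function mentioned in the talk can be written down directly. A minimal sketch (the smearing width sigma is an arbitrary illustrative choice, not a value from the talk):

```python
import math

def gaussian_delta(x, sigma):
    """Normalized Gaussian stand-in for delta(x); its area is 1 and it
    approaches a true delta function as sigma -> 0."""
    return math.exp(-(x / sigma) ** 2) / (sigma * math.sqrt(math.pi))

# Crude normalization check: integrate on a fine grid covering +/- 10 sigma
sigma, step = 0.05, 0.001
area = sum(gaussian_delta(-0.5 + i * step, sigma) for i in range(1001)) * step
```

In a scattering-rate sum, each delta(E_k - E_k' ± ħω) is replaced by such a Gaussian; the talk's point is that the result is sensitive to sigma and the grid, motivating the analytic-band alternative that evaluates the delta function exactly.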

The most radiation-sensitive segment of our population is the developing fetus. Until recently, methods available for calculating the dose to the fetus were inadequate because a model for the pregnant woman was not available. Instead, the Snyder and Fisher model of Reference Man, which includes a uterus, was frequently used to calculate absorbed fractions when the source was in various organs of the body and the nongravid uterus was the target. These values would be representative of the dose to the embryo during the early stages of pregnancy. Unfortunately, Reference Man is considerably larger than Reference Woman. The authors recently reported on the design of a Reference Woman phantom that has dimensions quite similar to the ICRP Reference Woman. This phantom was suitable for calculating the dose to the embryo during early stages of pregnancy (0 to 3 mo.), but was not suitable for the later stages of pregnancy because of the changing shape of the mother and the displacement of several abdominal organs brought about by the growth of the uterus and fetus. The models of Reference Woman that were subsequently developed for each month of pregnancy are described. The models take into account the growth of the uterus and fetus and the repositioning of the various abdominal organs. These models have been used to calculate absorbed fractions for the fetus as a target and the gastrointestinal tract as a source of radiation for twelve photon energies ranging from 10 keV to 4 MeV.

A new coil protection system (CPS) is being developed to replace the existing TFTR magnetic coil fault detector. The existing fault detector sacrifices TFTR operating capability for simplicity. The new CPS, when installed in October of 1988, will permit operation up to the actual coil stress limits by evaluating the limiting parameters in real time. The computation will be done in a microprocessor-based Coil Protection Calculator (CPC) currently under construction at PPL. The new CPC will allow TFTR to operate with higher plasma currents and will permit the optimization of pulse repetition rates. The CPC will provide real-time estimates of critical coil and bus temperatures and stresses based on real-time redundant measurements of coil currents, coil cooling water inlet temperature, and plasma current. The critical parameter calculations are compared to prespecified limits. If these limits are reached or exceeded, protective action will be initiated via a hard-wired control system (HCS), which will shut down the power supplies. The CPC consists of a redundant VME-based microprocessor system which will sample all input data and compute all stress quantities every ten milliseconds. Thermal calculations will be approximated every 10 ms, with an exact solution occurring every second. The CPC features continuous cross-checking of redundant input signals, automatic detection of internal failure modes, monitoring and recording of calculated results, and a quick, functional verification of performance via an internal test system. (author)

Physical principles of heat transfer between fluid under turbulent flow conditions and a wall of a duct are described. The methods of calculations of heat transfer coefficient and the theory of recuperative heat exchangers are presented. Numerical examples are given to illustrate the theory. (author)
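For the turbulent-flow heat transfer coefficient such a text computes, one widely used engineering correlation is Dittus-Boelter. The sketch below is a generic textbook formula (not necessarily the specific method or constants this author presents): Nu = 0.023 Re^0.8 Pr^n, with h = Nu·k/D.

```python
def dittus_boelter_h(re, pr, k_w_mk, d_m, heating=True):
    """Heat transfer coefficient h = Nu * k / D from the Dittus-Boelter
    correlation Nu = 0.023 * Re**0.8 * Pr**n, with n = 0.4 when the fluid
    is being heated and 0.3 when cooled (valid roughly for Re > 1e4)."""
    n = 0.4 if heating else 0.3
    nu = 0.023 * re ** 0.8 * pr ** n          # Nusselt number (dimensionless)
    return nu * k_w_mk / d_m                  # h in W/(m^2 K)

# Example: water-like flow with Re = 5e4, Pr = 3,
# k = 0.6 W/(m K), in a 20 mm diameter duct
h = dittus_boelter_h(5.0e4, 3.0, 0.6, 0.02)
```

For these illustrative numbers h comes out on the order of a few thousand W/(m^2 K), the typical magnitude for turbulent water flow in small ducts.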

The Environmental footprint is a very powerful tool that helps an individual to understand how their everyday activities are impacting environmental surroundings. Data shows that global climate change, which is a growing concern for nations all over the world, is already affecting humankind, plants and animals through rising ocean levels, droughts and desertification, and changing weather patterns. In addition to a wide range of policy measures implemented by national and state governments, it is necessary for individuals to understand the impact that their lifestyle may have on their personal environmental footprint, and thus on global climate change. "My Footprint Calculator" (myfootprintcalculator.com) has been designed to be one of the simplest, yet comprehensive, web tools to help individuals calculate and understand their personal environmental impact. "My Footprint Calculator" is a website that queries users about their everyday habits and activities and calculates their personal impact on the environment. This website was re-designed to help users determine their environmental impact in various aspects of their lives, ranging from transportation and recycling habits to water and energy usage, with the addition of new features that will allow users to share their experiences and their best practices with other users interested in reducing their personal environmental footprint. The collected data is stored in a database, and a future goal of this work is to analyze the collected data from all users (anonymously) to develop relevant trends and statistics.

No author has gone as far as Doerfler in covering methods of mental calculation beyond simple arithmetic. Even if you have no interest in competing with computers you'll learn a great deal about number theory and the art of efficient computer programming. -Martin Gardner

In this paper we present the first application of the ZORA (Zeroth-Order Regular Approximation to the Dirac-Fock equation) formalism in ab initio electronic structure calculations. The ZORA method, which has been tested previously in the context of Density Functional Theory, has been implemented in

The ITER Vacuum Vessel (VV) is equipped with 54 access ports. Each of these ports has an opening in the bioshield that communicates with a dedicated port cell. During Tokamak operation, the bioshield opening must be closed with a concrete plug to shield the radiation coming from the plasma. This port plug separates the port cell into a Port Interspace (between VV closure lid and Port Plug) on the inner side and the Port Cell on the outer side. This paper presents calculations of pressures and temperatures in the ITER (Ref. 1) Port Interspace after a double-ended guillotine break (DEGB) of a pipe of the Tokamak Cooling Water System (TCWS) carrying high-temperature water. It is assumed that this DEGB occurs during the worst possible conditions, which are during water baking operation, with water at a temperature of 523 K (250 °C) and at a pressure of 4.4 MPa. These conditions are more severe than during normal Tokamak operation, with the water at 398 K (125 °C) and 2 MPa. Two computer codes are employed in these calculations: RELAP5-3D Version 4.2.1 (Ref. 2) to calculate the blowdown releases from the pipe break, and MELCOR, Version 1.8.6 (Ref. 3) to calculate the pressures and temperatures in the Port Interspace. A sensitivity study has been performed to optimize some flow areas.

This "Professional Growth & Support Spending Calculator" helps school systems quantify all current spending aimed at improving teaching effectiveness. Part I provides worksheets to analyze total investment. Part II provides a system for evaluating investments based on purpose, target group, and delivery. In this Spending Calculator…

This article reports on a qualitative study of six high school calculus students designed to build an understanding about the affect associated with graphing calculator use in independent situations. DeBellis and Goldin's (2006) framework for affect as a representational system was used as a lens through which to understand the ways in which…

Nuclear many-body calculations have the complication of strong spin- and isospin-dependent potentials. In these lectures the author discusses the variational and Green's function Monte Carlo techniques that have been developed to address this complication, and presents a few results.

Restricted Hartree-Fock and multi-configurational self-consistent-field calculations together with second-order perturbation theory have been used to study the geometry, the electron density, and the electronic spectrum of (VO2SO4)-. A bidentate sulphate attachment to vanadium was found to be stable...

Three-part mineral resource assessment is a methodology for predicting, in a specified geographic region, both the number of undiscovered mineral deposits and the amount of mineral resources in those deposits. These predictions are based on probability calculations that are performed with newly implemented computer software. Compared to the previous implementation, the new implementation includes new features both for the probability calculations themselves and for checks of those calculations. The development of the new implementation led to a new understanding of the probability calculations, namely of the assumptions inherent in them. Several assumptions strongly affect the mineral resource predictions, so it is crucial that they are checked during an assessment. The evaluation of the new implementation leads to new findings about the probability calculations, namely findings regarding the precision of the computations, the computation time, and the sensitivity of the calculation results to the input.

The new version of the software package for neutron calculations of WWER cores, KASKAD 2007, consists of several calculating and service modules integrated in a common framework. The package is based on the old version, extended with new functions and new calculating modules: the BIPR-2007 code, a new code that performs 2-group neutron diffusion calculations of the power distribution in three-dimensional geometry (it is based on the BIPR-8KN model, provides all capabilities of the BIPR-7A code, and uses the same input data); the PERMAK-2007 code, a pin-by-pin, few-group, multilayer 3-D code for neutron diffusion calculations; and a graphical user interface for input data preparation for the TVS-M code. The report also includes some calculation results obtained with the modified version of the KASKAD 2007 package. (Authors)

This work implemented an anthropomorphic voxel phantom within the Monte Carlo GEANT4 framework, for use by professionals in radiation protection, external dosimetry and medical physics. This phantom allows displacement of the source, which can be an isotropic point source, a plane beam, a line source, or a radioactive gas, in order to obtain diverse irradiation geometries. In these geometries, exposure to the radioactive sources is simulated with a view to determining the effective dose or the dose in each organ of the human body. The Zubal head and body trunk phantom was used, and the organs and tissues are differentiated by chemical constitution into soft tissue, lung tissue, bone tissue, water and air. The calculation method was validated through comparison with another well-established method, the Visual Monte Carlo (VMC). In addition, a comparison was made with the international recommendation for the evaluation of dose from exposure to point sources, described in the document TECDOC-1162, Generic Procedures for Assessment and Response During a Radiological Emergency, where analytical expressions for this calculation are given. Considerations are made on the limits of validity of these expressions for various irradiation geometries, including linear sources, immersion in clouds, and contaminated soils

for lysozyme activity and a colorimetric one for protein concentration. Familiarity with the assays is reinforced by an independently designed project to modify a variable in one of these assays. The assay for lysozyme activity is that of Shugar (6), based on hydrolysis of a cell-wall suspension from the bacterium Micrococcus lysodeikticus, a substrate that is particularly sensitive to lysozyme. As the cell walls are broken down by the enzyme, the turbidity of the sample decreases. This decrease can be conveniently measured by following the decrease in absorbance at a wavelength of 450 nm, using a spectrophotometer or other device for measuring light scattering. The Bradford method (7), a standard assay, is used to determine protein concentration. Using the data from both lysozyme activity assays and protein concentration assays, students can calculate the specific activity for commercial lysozyme and an egg-white solution. These calculations clearly demonstrate the increase in specific activity with increasing purity, since the purified (commercial) preparation has a specific activity approximately 20-fold higher than that of the crude egg-white solution. Lysozyme Purification by Ion-Exchange Chromatography (5 weeks) As suggested by Strang (8), students can design a rational purification of lysozyme using ion-exchange chromatography when presented with information on the isoelectric point of the enzyme and the properties of ion-exchange resins. One week is spent discussing protein purification and the relative advantages and disadvantages of different resins. Each group has a choice of anion-exchange (DEAE) or cation-exchange (CM) resins. Because lysozyme is positively charged below a pH of 11, it will not be adsorbed to an anion-exchange resin, but will be adsorbed to the cation-exchange resin. Therefore, for the cation-exchange protocols, there are further options for methods of collecting and eluting the desired protein. A purification table, including
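The specific-activity arithmetic described above can be sketched as follows. The unit definition (one unit per 0.001 decrease in A450 per minute) is the common Shugar convention, and all numerical values are illustrative, not classroom data:

```python
# Sketch: specific activity = activity units / mg of protein.
# Unit definition and all numbers are illustrative assumptions.

def lysozyme_activity_units(delta_a450_per_min):
    # One unit is commonly defined as a 0.001 decrease in A450 per minute.
    return delta_a450_per_min / 0.001

def specific_activity(delta_a450_per_min, protein_mg_per_ml, sample_ml):
    units = lysozyme_activity_units(delta_a450_per_min)
    protein_mg = protein_mg_per_ml * sample_ml
    return units / protein_mg          # units per mg of protein

# Hypothetical data: purified (commercial) prep vs. crude egg-white solution
commercial = specific_activity(0.040, protein_mg_per_ml=0.010, sample_ml=0.1)
crude      = specific_activity(0.020, protein_mg_per_ml=0.100, sample_ml=0.1)
print(commercial / crude)  # purified prep shows ~20-fold higher specific activity
```

With these assumed numbers the ratio works out to exactly 20, mirroring the roughly 20-fold increase in purity the text reports.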

Free energy calculations on three model processes with theoretically known free energy changes have been performed using short simulation times. A comparison between equilibrium (thermodynamic integration) and non-equilibrium (fast growth) methods has been made in order to assess the accuracy and precision of these methods. The three processes were chosen to represent processes often observed in biomolecular free energy calculations: a redistribution of charges, the creation and annihilation of neutral particles, and conformational changes. At very short overall simulation times, the thermodynamic integration approach using discrete steps is the most accurate. More importantly, reasonable accuracy can be obtained using this method, and this accuracy appears independent of the overall simulation time. In cases where slow conformational changes play a role, fast growth simulations may have an advantage over discrete thermodynamic integration, where sufficient sampling needs to be obtained at every λ-point, but only if the initial conformations properly represent an equilibrium ensemble. From these three test cases practical lessons can be learned that will be applicable to biomolecular free energy calculations
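A minimal sketch of discrete thermodynamic integration, applied to a toy process with an analytically known answer (switching a harmonic force constant from k0 to k1, for which ΔG = ½kT·ln(k1/k0)), illustrates the λ-point sampling the record refers to. This is an illustration of the general method, not the authors' simulation protocol:

```python
# Sketch of discrete thermodynamic integration (TI):
#   dG = integral over lambda of <dH/dlambda>_lambda
# Toy system: H(lambda) = 0.5*k(lambda)*x^2 with k(lambda) = (1-l)*k0 + l*k1,
# so dH/dlambda = 0.5*(k1-k0)*x^2, sampled from the exact Gaussian ensemble.
import math
import random

def ti_free_energy(k0, k1, kT=1.0, n_lambda=21, n_samples=5000, seed=1):
    rng = random.Random(seed)
    lambdas = [i / (n_lambda - 1) for i in range(n_lambda)]
    means = []
    for lam in lambdas:
        k = (1 - lam) * k0 + lam * k1
        sigma = math.sqrt(kT / k)              # equilibrium width at this lambda
        acc = sum(0.5 * (k1 - k0) * rng.gauss(0.0, sigma) ** 2
                  for _ in range(n_samples)) / n_samples
        means.append(acc)                      # <dH/dlambda> at this lambda-point
    # trapezoidal integration over the discrete lambda grid
    return sum(0.5 * (means[i] + means[i + 1]) * (lambdas[i + 1] - lambdas[i])
               for i in range(n_lambda - 1))

exact = 0.5 * math.log(4.0)                    # k0=1 -> k1=4 at kT=1
print(ti_free_energy(1.0, 4.0), exact)
```

The toy case also shows why equilibrium sampling at every λ-point matters: each ⟨dH/dλ⟩ estimate is only meaningful if drawn from the equilibrium ensemble of that λ.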

Calculations of the mean stress in a plastically deformed matrix containing randomly distributed elastic inclusions are considered. The mean stress for an elastically homogeneous material is calculated on the basis of an energy consideration which completely accounts for elastic interactions....... The result is shown to be identical to that obtained from a stress calculation. The possibility of including elastic interactions in the case of elastic inhomogeneity is discussed....

and the computationally efficient implicit Floquet analysis in anisotropic conditions. The tool is validated against system identifications with the partial Floquet method on the nonlinear BHawC model of a 2.3 MW wind turbine. System identification results show that nonlinear effects on the 2.3 MW turbine in most cases....... These harmonics appear in calculated frequency responses of the turbine. Extreme wind shear changes the modal damping when the flow is separated due to an interaction between the periodic mode shape and the local aerodynamic damping influenced by a periodic variation in angle of attack....

The remarkable accuracy of Monte Carlo (MC) dose calculation algorithms has led to the widely accepted view that these methods should and will play a central role in the radiotherapy treatment verification and planning of the future. The advantages of using MC clinically are particularly evident for radiation fields passing through inhomogeneities, such as lung and air cavities, and for small fields, including those used in today's advanced intensity modulated radiotherapy techniques. Many investigators have reported significant dosimetric differences between MC and conventional dose calculations in such complex situations, and have demonstrated experimentally the unmatched ability of MC calculations in modeling charged particle disequilibrium. The advantages of using MC dose calculations do come at a cost. The nature of MC dose calculations requires a highly detailed, in-depth representation of the physical system (accelerator head geometry/composition, anatomical patient geometry/composition and particle interaction physics) to allow accurate modeling of external beam radiation therapy treatments. To perform such simulations is computationally demanding and has only recently become feasible within mainstream radiotherapy practices. In addition, the output of the accelerator head simulation can be highly sensitive to inaccuracies within a model that may not be known with sufficient detail. The goal of this dissertation is to both improve and advance the implementation of MC dose calculations in modern external beam radiotherapy. To begin, a novel method is proposed to fine-tune the output of an accelerator model to better represent the measured output. In this method an intensity distribution of the electron beam incident on the model is inferred by employing a simulated annealing algorithm. The method allows an investigation of arbitrary electron beam intensity distributions and is not restricted to the commonly assumed Gaussian intensity. In a second component of

This paper proposes the use of the 'LTE-diffusion approximation' for predicting the properties of electric arcs. Under this approximation, local thermodynamic equilibrium (LTE) is assumed, with a particular mesh size near the electrodes chosen to be equal to the 'diffusion length', based on D_e/W, where D_e is the electron diffusion coefficient and W is the electron drift velocity. This approximation overcomes the problem that the equilibrium electrical conductivity in the arc near the electrodes is almost zero, which makes accurate calculations using LTE impossible in the limit of small mesh size, as then voltages would tend towards infinity. Use of the LTE-diffusion approximation for a 200 A arc with a thermionic cathode gives predictions of total arc voltage, electrode temperatures, arc temperatures and radial profiles of heat flux density and current density at the anode that are in approximate agreement with more accurate calculations which include an account of the diffusion of electric charges to the electrodes, and also with experimental results. Calculations, which include diffusion of charges, agree with experimental results of current and heat flux density as a function of radius if the Milne boundary condition is used at the anode surface rather than imposing zero charge density at the anode
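The mesh-size prescription is simply the ratio D_e/W. With assumed order-of-magnitude values for an atmospheric-pressure arc (not figures from the paper), the near-electrode cell size comes out in the micrometre range:

```python
# Sketch: the 'diffusion length' mesh size near the electrodes is D_e / W.
# Both values below are illustrative order-of-magnitude assumptions.
D_e = 0.05    # electron diffusion coefficient, m^2/s (assumed)
W   = 5.0e3   # electron drift velocity, m/s (assumed)

mesh_size = D_e / W   # metres
print(mesh_size)      # -> ~1e-05 m, i.e. a ~10 micrometre cell at the electrode
```

Fixing the near-electrode cell to this length scale is what prevents the computed voltage from diverging as the mesh is refined.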

The scope of this document is to develop the size, operational envelopes, and major requirements of the equipment to be used in the vestibule, cask preparation area, and the crane maintenance area of the Fuel Handling Facility. This calculation is intended to support the License Application (LA) submittal of December 2004, in accordance with the directive given by DOE correspondence received on the 27th of January 2004 entitled: ''Authorization for Bechtel SAIC Company L.L.C. to Include a Bare Fuel Handling Facility and Increased Aging Capacity in the License Application, Contract Number DE-AC28-01RW12101'' (Ref. 167124). This correspondence was appended by further correspondence received on the 19th of February 2004 entitled: ''Technical Direction to Bechtel SAIC Company L.L.C. for Surface Facility Improvements, Contract Number DE-AC28-01RW12101; TDL No. 04-024'' (Ref. 168751). These documents give the authorization for a Fuel Handling Facility to be included in the baseline. The limitations of this preliminary calculation lie within the assumptions of section 5, as this calculation is part of an evolutionary design process.

A general method (MCQ) has been developed by introducing a microscopic burnup scheme that uses the Monte Carlo calculated fluxes and microscopic reaction rates of a complex system and a depletion code for burnup calculations as a basis for solving nuclide material balance equations for each spatial region into which the system is divided. Continuous energy-dependent cross-section libraries and full 3D geometry of the system can be input for the calculations. The resulting predictions for the system at successive burnup time steps are thus based on a calculation route where both geometry and cross sections are accurately represented, without geometry simplifications and with continuous energy data, providing an independent approach for benchmarking other methods and nuclear data of actinides, fission products, and other burnable absorbers. The main advantage of this method over the classical deterministic methods currently used is that the MCQ system is a direct 3D method without the limitations and errors introduced by the homogenization of geometry and condensation of energy in deterministic methods. The Monte Carlo and burnup codes adopted until now are the widely used MCNP and ORIGEN codes, but other codes can be used as well. To use this method, a well-characterized set of nuclear data is needed for the isotopes involved in the burnup chains, including burnable poisons, fission products, and actinides. To fix the data to be included in this set, a study of the present status of nuclear data was performed as part of the development of the MCQ method. This study begins with a review of the available cross-section data for isotopes involved in burnup chains for power and research nuclear reactors. The main data needs for burnup calculations are neutron cross sections, decay constants, branching ratios, fission energy, and yields. The present work includes results of selected experimental benchmarks and conclusions about the sensitivity of different sets of cross
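The nuclide material-balance step that a depletion code solves between Monte Carlo flux updates can be sketched, for a single absorber with one-group data, as simple exponential depletion. The cross section, flux, and number density below are illustrative assumptions, not MCQ output:

```python
# Sketch: single-nuclide material balance dN/dt = -sigma*phi*N over one burnup
# step, using the one-group reaction rate supplied by the transport step.
# All numerical inputs are illustrative assumptions.
import math

def deplete(n0, sigma_barns, flux, dt_seconds):
    """Number density remaining after one burnup step (pure absorption)."""
    sigma = sigma_barns * 1e-24            # barns -> cm^2
    return n0 * math.exp(-sigma * flux * dt_seconds)

n0   = 1.0e21          # atoms/cm^3 (assumed)
phi  = 3.0e14          # n/(cm^2 s) (assumed)
step = 30 * 24 * 3600.0                    # one 30-day burnup step

n = deplete(n0, sigma_barns=100.0, flux=phi, dt_seconds=step)
print(n / n0)          # fraction of the absorber remaining after the step
```

A real burnup code solves the full coupled Bateman equations (production from decay and capture chains as well as removal) for every region, then passes the updated compositions back to the Monte Carlo transport step.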

We study the solution of the nonlinear Balitsky-Kovchegov evolution equation with the recently calculated running coupling corrections [I. I. Balitsky, Phys. Rev. D 75, 014001 (2007) and Y. Kovchegov and H. Weigert, Nucl. Phys. A784, 188 (2007)]. Performing a numerical solution we confirm the earlier result of Albacete et al. [Phys. Rev. D 71, 014003 (2005)] (obtained by exploring several possible scales for the running coupling) that the high energy evolution with the running coupling leads to a universal scaling behavior for the dipole-nucleus scattering amplitude, which is independent of the initial conditions. It is important to stress that the running coupling corrections calculated recently significantly change the shape of the scaling function as compared to the fixed coupling case, in particular, leading to a considerable increase in the anomalous dimension and to a slow-down of the evolution with rapidity. We then concentrate on elucidating the differences between the two recent calculations of the running coupling corrections. We explain that the difference is due to an extra contribution to the evolution kernel, referred to as the subtraction term, which arises when running coupling corrections are included. These subtraction terms were neglected in both recent calculations. We evaluate numerically the subtraction terms for both calculations, and demonstrate that when the subtraction terms are added back to the evolution kernels obtained in the two works the resulting dipole amplitudes agree with each other. We then use the complete running coupling kernel including the subtraction term to find the numerical solution of the resulting full nonlinear evolution equation with the running coupling corrections. Again the scaling regime is recovered at very large rapidity with the scaling function unaltered by the subtraction term

total cardiovascular risk score. During development of joint guidelines released in 2013 by the American College of Cardiology (ACC) and American Heart Association (AHA), the decision was taken to develop a new risk score. This resulted in the ACC/AHA Pooled Cohort Equations Risk Calculator. This risk...... strengths are its inclusion of stroke as an end point and race as a characteristic, which allows better risk prediction especially in African-American individuals, plus provision of lifetime ASCVD risk estimates for adults aged 20-59 years. Notable omissions from the risk factors include chronic kidney...... the intended 7.5% 10-year ASCVD risk threshold for treatment in the joint ACC/AHA cholesterol guidelines. In this review we discuss the development of the new risk calculator, its strengths and weaknesses, and potential implications for its routine use....

This chapter calculates vapor-liquid equilibrium (VLE) values for a number of binary systems of cryogenic interest, including hydrogen- and helium-containing mixtures, by means of several selected cubic equations of state using different sets of mixing rules. The aim is to test the capabilities of these equations for representing VLE values for the selected mixtures, and to identify and recommend the most suitable equation of state together with its compatible mixing rules for the desired data representation. It is determined that the conventional mixing rules together with the modified van der Waals equation, or the four-parameter equation, are suitable for calculating VLE values for the selected systems at cryogenic conditions. The Peng-Robinson and four-parameter equations may yield slightly better results for helium-containing systems

This report presents the results of wind tunnel tests of a wing in combination with each of three sizes of Fowler flap. The purpose of the investigation was to determine the aerodynamic characteristics as affected by flap chord and position, the air loads on the flaps, and the effect of flaps on the downwash.

This paper discusses methodological issues relevant to the calculation of the historical responsibility of countries for climate change ('The Brazilian Proposal'). Using a simple representation of the climate system, the paper compares contributions to climate change using different indicators: current radiative forcing, current GWP-weighted emissions, radiative forcing from increased concentrations, cumulative GWP-weighted emissions, global-average surface-air temperature increase, and two new indicators: weighted concentrations (analogous to GWP-weighted emissions) and integrated temperature increase. Only the last two indicators are at the same time 'backward looking' (take into account historical emissions), 'backward discounting' (early emissions weigh less, depending on the decay in the atmosphere) and 'forward looking' (future effects of the emissions are considered), and are comparable for all gases. Cumulative GWP-weighted emissions are simple to calculate but are not 'backward discounting'. 'Radiative forcing' and 'temperature increase' are not 'forward looking'. 'Temperature increase' discounts the emissions of the last decade due to the slow response of the climate system. It therefore gives low weight to regions that have recently significantly increased emissions. Results of the five different indicators are quite similar for large groups (but possibly not for individual countries): industrialized countries contributed around 60% to today's climate change, developing countries around 40% (using the available data for fossil, industrial and forestry CO2, CH4 and N2O). The paper further argues that the choice between including non-linearities of the climate system and using a simplified linear system is a political one. The paper also notes that results of contributions to climate change need to be interpreted with care: countries that developed early benefited economically but have high historical emissions, while countries developing at a later period can profit from developments
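The difference between a 'backward looking' indicator without discounting (cumulative GWP-weighted emissions) and one with 'backward discounting' (remaining concentration after atmospheric decay) can be sketched with a toy two-region emission history. The single-exponential decay and every number below are assumptions for illustration, not the paper's model or data:

```python
# Sketch: two indicators of historical responsibility on a toy emission history.
# Region "A" is a hypothetical early industrializer; "B" is a hypothetical
# late, fast-growing emitter. Decade-resolution emissions, arbitrary units.
import math

years = list(range(1900, 2001, 10))              # 1900, 1910, ..., 2000
emissions = {
    "A": [1] * 6 + [2] * 5,                      # steady early emitter (assumed)
    "B": [0] * 6 + [4] * 5,                      # recent rapid growth (assumed)
}

def cumulative(series):
    # backward looking, no discounting: just the sum of all past emissions
    return sum(series)

def remaining_concentration(series, tau=120.0, now=2000):
    # backward discounting: early emissions weigh less because part of each
    # pulse has decayed by 'now' (single-exponential lifetime tau, assumed)
    return sum(e * math.exp(-(now - y) / tau) for e, y in zip(series, years))

for region in ("A", "B"):
    print(region, cumulative(emissions[region]),
          round(remaining_concentration(emissions[region]), 2))
```

On this toy history the two indicators rank the regions differently in relative terms: discounting shrinks A's early emissions much more than B's recent ones, which is exactly the 'backward discounting' property the paper contrasts across indicators.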

Dynamic languages include a number of features that are challenging to model properly in static analysis tools. In PHP, one of these features is the include expression, where an arbitrary expression provides the path of the file to include at runtime. In this paper we present two

An enhanced environmental barrier coating for a silicon containing substrate. The enhanced barrier coating may include a bond coat doped with at least one of an alkali metal oxide and an alkali earth metal oxide. The enhanced barrier coating may include a composite mullite bond coat including BSAS and another distinct second phase oxide applied over said surface.

Rare thoracic cancers include those of the trachea, thymus and mesothelioma (including peritoneum mesothelioma). The aim of this study was to describe the incidence, prevalence and survival of rare thoracic tumours using a large database, which includes cancer patients diagnosed from 1978 to 2002,

A new coil protection calculator (CPC) is presented in this paper. Now being developed for TFTR's magnetic field coils, it will replace the existing coil fault detector. The existing fault detector sacrifices TFTR operating capability for simplicity. The new CPC will permit operation up to the actual coil limits by accurately and continuously computing coil parameters in real time. The improvement will allow TFTR to operate with higher plasma currents and will permit the optimization of pulse repetition rates

The cost and complexity of pipe support design has been a continuing challenge to the construction and modification of commercial nuclear facilities. Typically, pipe support design or qualification projects have required large numbers of engineers centrally located with access to mainframe computer facilities. Much engineering time has been spent repetitively performing a sequence of tasks to address complex design criteria and consolidating the results of calculations into documentation packages in accordance with strict quality requirements. The continuing challenges of cost and quality, the need for support engineering services at operating plant sites, and the substantial recent advances in microcomputer systems suggested that a stand-alone microcomputer pipe support calculation generator was feasible and had become a necessity for providing cost-effective and high quality pipe support engineering services to the industry. This paper outlines the preparation for, and the development of, an integrated pipe support design/evaluation software system which maintains all computer programs in the same environment, minimizes manual performance of standard or repetitive tasks, and generates a high quality calculation which is consistent and easily followed

Pre-waste-emplacement groundwater travel time is one indicator of the isolation capability of the geologic system surrounding a repository. Two distinct modeling approaches exist for prediction of groundwater flow paths and travel times from the repository location to the designated accessible environment boundary. These two approaches are: (1) the deterministic approach which calculates a single value prediction of groundwater travel time based on average values for input parameters and (2) the stochastic approach which yields a distribution of possible groundwater travel times as a function of the nature and magnitude of uncertainties in the model inputs. The purposes of this report are to (1) document the theoretical (i.e., mathematical) basis used to calculate groundwater pathlines and travel times in a basalt system, (2) outline limitations and ranges of applicability of the deterministic modeling approach, and (3) explain the motivation for the use of the stochastic modeling approach currently being used to predict groundwater pathlines and travel times for the Hanford Site. Example calculations of groundwater travel times are presented to highlight and compare the differences between the deterministic and stochastic modeling approaches. 28 refs
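The contrast between the two modeling approaches can be sketched for a one-dimensional flow path, where the deterministic travel time follows from average Darcy parameters and the stochastic approach propagates an uncertain conductivity into a distribution of travel times. All parameter values and the log-uniform uncertainty model are assumptions for illustration, not Hanford Site data:

```python
# Sketch: deterministic vs. stochastic groundwater travel time along a 1-D path.
# Average linear velocity v = K*i/n; travel time t = L/v = L*n/(K*i).
# Every numerical value below is an illustrative assumption.
import random
import statistics

SECONDS_PER_YEAR = 3.15e7

def travel_time(length_m, conductivity, gradient, porosity):
    """Travel time in seconds for a path of given length."""
    return length_m * porosity / (conductivity * gradient)

# Deterministic approach: one value from average input parameters
t_det = travel_time(5000.0, conductivity=1e-6, gradient=1e-3, porosity=0.01)

# Stochastic approach: a distribution from an uncertain conductivity
# (log-uniform over two orders of magnitude, an assumed uncertainty model)
rng = random.Random(0)
samples = [travel_time(5000.0, 10 ** rng.uniform(-7.0, -5.0), 1e-3, 0.01)
           for _ in range(1000)]

print(t_det / SECONDS_PER_YEAR, "years (deterministic single value)")
print(statistics.median(samples) / SECONDS_PER_YEAR, "years (stochastic median)")
```

The deterministic run yields a single number, while the stochastic run yields a whole distribution whose spread reflects the input uncertainty, which is precisely the distinction the report draws between the two approaches.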

The study of plasma turbulence and transport is a complex problem of critical importance for fusion-relevant plasmas. To this day, the fluid treatment of plasma dynamics is the best approach to realistic physics at the high resolution required for certain experimentally relevant calculations. Core and edge turbulence in a magnetic fusion device have been modeled using state-of-the-art, nonlinear, three-dimensional, initial-value fluid and gyrofluid codes. Parallel implementation of these models on diverse platforms--vector parallel (National Energy Research Supercomputer Center's CRAY Y-MP C90), massively parallel (Intel Paragon XP/S 35), and serial parallel (clusters of high-performance workstations using the Parallel Virtual Machine protocol)--offers a variety of paths to high resolution and significant improvements in real-time efficiency, each with its own advantages. The largest and most efficient calculations have been performed at the 200 Mword memory limit on the C90 in dedicated mode, where an overlap of 12 to 13 out of a maximum of 16 processors has been achieved with a gyrofluid model of core fluctuations. The richness of the physics captured by these calculations is commensurate with the increased resolution and efficiency and is limited only by the ingenuity brought to the analysis of the massive amounts of data generated

This note discusses various ways the parasitic mode losses from a bunched beam to a vacuum chamber can be measured, calculated or estimated. A listing of the parameter, k, for the various PEP ring components is included. A number of formulas for calculating multiple and single pass losses are discussed and evaluated for several cases. 25 refs., 1 fig., 1 tab
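For reference, the single-pass parasitic energy loss implied by the loss parameter k is ΔE = k·q² for a bunch of charge q. The values below are illustrative assumptions, not PEP component numbers:

```python
# Sketch: single-pass parasitic mode loss delta_E = k * q**2.
# Both values are illustrative assumptions.
k = 0.5e12            # loss parameter, V/C (assumed)
q = 1.6e-8            # bunch charge, C (~1e11 electrons, assumed)

delta_E = k * q**2    # energy lost by the bunch per passage, joules
print(delta_E)
```

Summing k over all ring components, as in the note's parameter listing, gives the total per-turn parasitic loss in the same way.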

Furthermore, the GIAO/DFT (Gauge-Including Atomic Orbitals/Density Functional Theory) approach is extensively used for the calculation of chemical shifts for various types of compounds [14-20]. During the last decade an important breakthrough in the calculation of NMR spin-spin coupling constants took place when the ...

The project's aim has been to present updated environmental-economic calculation prices which make it possible to differentiate between traffic sources and stationary sources of air pollution. Furthermore, for the first time calculation prices for emissions to the aquatic environment and for the heavy metal lead are included. (ln)

strange quark spin from the anomalous Ward identity. Recently, we have started to include multiple lattices with different lattice spacings and different volumes including large lattices at the physical pion mass point. We are getting quite close to being able to calculate the hadron structure at the physical point and to do the continuum and large volume extrapolations, which is our ultimate aim. We have now finished several projects which have included these systematic corrections. They include the leptonic decay width of the ρ, the πN sigma and strange sigma terms, and the strange quark magnetic moment. Over the years, we have also studied hadron spectroscopy with lattice calculations and in phenomenology. These include Roper resonance, pentaquark state, charmonium spectrum, glueballs, scalar mesons a0(1450) and σ(600) and other scalar mesons, and the 1-+ meson. In addition, we have employed the canonical approach to explore the first-order phase transition and the critical point at finite density and finite temperature. We have also discovered a new parton degree of freedom -- the connected sea partons, from the path-integral formulation of the hadronic tensor, which explains the experimentally observed Gottfried sum rule violation. Combining experimental result on the strange parton distribution, the CT10 global fitting results of the total u and d anti-partons and the lattice result of the ratio of the momentum fraction of the strange vs that of u or d in the disconnected insertion, we have shown that the connected sea partons can be isolated. In this final technical report, we shall present a few representative highlights that have been achieved in the project.

We present an iterative method to calculate the nonlinear optical response of armchair graphene nanoribbons (aGNRs) and zigzag graphene nanoribbons (zGNRs) while including the effects of dissipation. In contrast to methods that calculate the nonlinear response in the ballistic (dissipation-free) regime, here we obtain the nonlinear response of an electronic system to an external electromagnetic field while interacting with a dissipative environment (to second order). We use a self-consistent-field approach within a Markovian master-equation formalism (SCF-MMEF) coupled with full-wave electromagnetic equations, and we solve the master equation iteratively to obtain the higher-order response functions. We employ the SCF-MMEF to calculate the nonlinear conductance and susceptibility, as well as to calculate the dependence of the plasmon dispersion and plasmon propagation length on the intensity of the electromagnetic field in GNRs. The electron scattering mechanisms included in this work are scattering with intrinsic phonons, ionized impurities, surface optical phonons, and line-edge roughness. Unlike in wide GNRs, where ionized-impurity scattering dominates dissipation, in ultra-narrow nanoribbons on polar substrates optical-phonon scattering and ionized-impurity scattering are equally prominent. Work supported by the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Award DE-SC0008712.
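As a much-simplified illustration of the order-by-order strategy (not the SCF-MMEF itself), the sketch below solves a Markovian master equation for a driven, damped two-level system perturbatively in the field, obtains the second-order (nonlinear) population response, and checks it against the exact steady state. All operators, rates, and the field strength are invented for the example.

```python
# Toy order-by-order solution of a Markovian master equation for a driven,
# damped two-level system. This is NOT the paper's SCF-MMEF -- only a minimal
# sketch of iterating the master equation to get a second-order response.
import numpy as np

I2 = np.eye(2, dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma_-  (|g><e|)
sx = np.array([[0, 1], [1, 0]], dtype=complex)   # drive operator
H0 = np.diag([0.0, 1.0]).astype(complex)         # level splitting Delta = 1
gamma = 0.2                                      # relaxation rate (assumed)

lmul = lambda A: np.kron(A, I2)    # superoperator of A @ rho (row-major vec)
rmul = lambda B: np.kron(I2, B.T)  # superoperator of rho @ B

n_op = sm.conj().T @ sm            # |e><e|
L0 = (-1j * (lmul(H0) - rmul(H0))
      + gamma * (np.kron(sm, sm.conj()) - 0.5 * (lmul(n_op) + rmul(n_op))))
Lint = -1j * (lmul(sx) - rmul(sx))  # -i[sx, .] per unit field

rho0 = np.array([1, 0, 0, 0], dtype=complex)  # vec(|g><g|): steady state of L0
P = np.linalg.pinv(L0)

def next_order(rho_prev):
    """Solve L0 rho_n = -Lint rho_{n-1}; remove the rho0 component so tr = 0."""
    rho_n = -P @ (Lint @ rho_prev)
    return rho_n - np.trace(rho_n.reshape(2, 2)) * rho0

E = 0.01                            # field strength (illustrative)
rho1 = E * next_order(rho0)         # linear response
rho2 = E * next_order(rho1)         # second-order response
p2 = rho2.reshape(2, 2)[1, 1].real  # excited-state population, O(E^2)

# Cross-check against the exact steady state of L0 + E*Lint (trace pinned to 1)
A = np.vstack([L0 + E * Lint, I2.reshape(1, -1)])
b = np.zeros(5, dtype=complex); b[-1] = 1.0
rho_ss = np.linalg.lstsq(A, b, rcond=None)[0]
p_exact = rho_ss.reshape(2, 2)[1, 1].real
print(p2, p_exact)  # agree up to higher orders in E
```

The perturbative population E²/(Δ² + γ²/4) and the exact steady-state value differ only at O(E⁴), which is the sense in which the iteration reproduces the nonlinear response order by order.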

The purpose of this paper is to explain some aspects of including a marginal line-loss approximation in the DCOPF. The DCOPF optimizes electric generator dispatch using simplified power flow physics. Since the standard assumptions in the DCOPF include a lossless network, a number of modifications have to be added to the model. Calculating marginal losses allows the DCOPF to optimize the location of power generation, so that generators closer to demand centers become relatively cheaper than remote generation. The problem formulations discussed in this paper simplify many aspects of the practical electric dispatch implementations in use today, but include sufficient detail to demonstrate several points regarding the handling of losses.
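As a toy illustration of that effect (not the paper's formulation), the sketch below compares a lossless dispatch with one that adds a linearized marginal-loss term to the power balance; the two-generator system, costs, and loss factor are invented for the example.

```python
# Toy 2-bus DCOPF with and without a linearized (marginal) loss term.
# All numbers are hypothetical; this is a sketch, not the paper's model.
import numpy as np
from scipy.optimize import linprog

c = np.array([29.0, 30.0])     # $/MWh: gen 1 is remote but slightly cheaper
demand = 100.0                 # MW at the load bus
bounds = [(0, 150), (0, 150)]

# Lossless DCOPF: P1 + P2 = demand
lossless = linprog(c, A_eq=[[1.0, 1.0]], b_eq=[demand], bounds=bounds)

# Losses on the remote line linearized around an operating point P1_0:
#   L(P1) ~ L0 + lam*(P1 - P1_0)
# Power balance  P1 + P2 = demand + L(P1)  becomes linear:
#   (1 - lam)*P1 + P2 = demand + L0 - lam*P1_0
lam, L0, P1_0 = 0.05, 2.0, 50.0
lossy = linprog(c, A_eq=[[1.0 - lam, 1.0]],
                b_eq=[demand + L0 - lam * P1_0], bounds=bounds)

print("lossless dispatch:", lossless.x)  # remote gen 1 carries the load
print("with losses:     ", lossy.x)     # local gen 2 is cheaper at the margin
```

With these numbers the remote generator's effective delivered-energy cost rises from 29 to 29/0.95 ≈ 30.5 $/MWh, so the marginal-loss term flips the dispatch toward the local generator, which is exactly the locational effect the text describes.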

In core design calculations, nuclear data enters as multi-group cross-section libraries during the assembly calculation, which is the first stage of a core design calculation. This report summarizes the multi-group cross-section libraries used in assembly calculations and also presents the methods adopted for resonance and assembly calculations. (author)
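The basic operation behind such a library is flux-weighted condensation of fine-group data into a few coarse groups, σ_G = Σ_g σ_g φ_g / Σ_g φ_g. A minimal sketch, with purely illustrative numbers:

```python
# Flux-weighted collapse of a fine-group cross-section set into 2 coarse
# groups. Values are illustrative, not from any evaluated data file.
import numpy as np

fine_sigma = np.array([1.2, 1.5, 2.0, 4.0, 8.0, 20.0])  # barns, fine groups
fine_flux  = np.array([3.0, 2.5, 2.0, 1.0, 0.6, 0.2])   # weighting spectrum
coarse_of  = np.array([0, 0, 0, 1, 1, 1])  # fine group -> coarse group map

def collapse(sigma, flux, groups, n_coarse):
    """sigma_G = sum_g(sigma_g * phi_g) / sum_g(phi_g), g running over G."""
    num = np.bincount(groups, weights=sigma * flux, minlength=n_coarse)
    den = np.bincount(groups, weights=flux, minlength=n_coarse)
    return num / den

print(collapse(fine_sigma, fine_flux, coarse_of, 2))
```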

A pressure vessel includes a ported fitting having an annular flange formed on an end thereof and a tank that envelopes the annular flange. A crack arresting barrier is bonded to and forming a lining of the tank within the outer surface thereof. The crack arresting barrier includes a cured resin having a post-curing ductility rating of at least approximately 60% through the cured resin, and further includes randomly-oriented fibers positioned in and throughout the cured resin.

Embedding buildings in soil changes their seismic response behaviour as compared to surface buildings, i.e. higher stiffness and increased radiation damping are attained. Finite element models are best suited for determining the effects of embedment and of layered subsoil. The code used was the LUSH2 programme, which is applicable to 2-dimensional problems and provides an approximate treatment of non-linear dynamic soil behaviour. For embedded buildings there is good agreement between 2- and 3-dimensional models of the response at points below the soil surface. It is therefore permissible to use the less costly 2-dimensional programmes. To simulate earthquakes, three different acceleration-time histories with differing response spectra, derived from actual measurements and from artificial synthesis, were fed in. The soil characteristics assumed are applicable to a representative site in Germany. Three different types of models were examined, using analytical models with only a few elements for parametric studies and with up to 716 elements for more precise calculations. A comparison was made between semi-embedment, total embedment, and installation of the reactor building above ground. (orig.)

.... In order to represent the organizational impact on the work process, five organizational cultural parameters were identified and included in an algorithm for modeling and simulation of cultural...

At its 108th session on the 20 June 1997, the Council approved the Report of the Finance Committee Working Group on the Review of CERN Purchasing Policy and Procedures. Among other topics, the report recommended the inclusion of utility supplies in the calculation of the return statistics as soon as the relevant markets were deregulated, without reaching a consensus on the exact method of calculation. At its 296th meeting on the 18 June 2003, the Finance Committee approved a proposal to award a contract for the supply of electrical energy (CERN/FC/4693). The purpose of the proposal in this document is to clarify the way electrical energy will be included in future calculations of the return statistics. The Finance Committee is invited: 1. to agree that the full cost to CERN of electrical energy (excluding the cost of transport) be included in the Industrial Service return statistics; 2. to recommend that the Council approves the corresponding amendment to the Financial Rules set out in section 2 of this docum...

Current work and trends in the application of neutron diffusion theory to reactor design and analysis are reviewed. Specific topics covered include finite-difference methods, synthesis methods, nodal calculations, finite elements, and perturbation theory.
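As a minimal illustration of the first of those topics, a finite-difference solution of the one-group diffusion equation in a bare slab, with k_eff obtained by power iteration, might look like this (the cross sections are illustrative, not from any real reactor):

```python
# 1-D, one-group, finite-difference diffusion eigenvalue problem:
# bare slab, zero-flux boundaries, k_eff by power iteration.
import numpy as np

n, a = 100, 100.0                         # interior mesh points, slab width (cm)
h = a / (n + 1)
D, Sig_a, nuSig_f = 1.0, 0.02, 0.025      # cm, 1/cm, 1/cm (illustrative)

# Loss operator A = -D d2/dx2 + Sig_a (tridiagonal, zero-flux boundaries)
A = (np.diag(np.full(n, 2 * D / h**2 + Sig_a))
     + np.diag(np.full(n - 1, -D / h**2), 1)
     + np.diag(np.full(n - 1, -D / h**2), -1))

phi, k = np.ones(n), 1.0
for _ in range(200):                      # power iteration on A phi = (1/k) F phi
    phi_new = np.linalg.solve(A, nuSig_f * phi / k)
    k *= phi_new.sum() / phi.sum()
    phi = phi_new / phi_new.max()

print("k_eff =", round(k, 4))
```

For this slab the analytic one-group estimate νΣf/(Σa + D(π/a)²) ≈ 1.191 is reproduced by the converged iteration, a useful sanity check on the discretization.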

.... Using our approach, the bulk modulus, effective elastic stiffnesses C11, C12, and C44 of the strained silicon, including also the effective Young's modulus and Poisson's ratio, are all calculated...
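For a cubic crystal, the quantities named in this record follow from C11 and C12 through standard relations; a quick sketch, using commonly quoted literature values for unstrained silicon purely as illustration:

```python
# Standard cubic-crystal relations for bulk modulus, [100] Young's modulus
# and Poisson's ratio from C11, C12. The values below (GPa) are commonly
# quoted literature numbers for unstrained silicon, used only as an example.
C11, C12 = 165.7, 63.9

B  = (C11 + 2 * C12) / 3.0                        # bulk modulus
E  = (C11 - C12) * (C11 + 2 * C12) / (C11 + C12)  # Young's modulus along [100]
nu = C12 / (C11 + C12)                            # Poisson's ratio along [100]

print(B, E, nu)
```

These reduce to the familiar figures E[100] ≈ 130 GPa and ν ≈ 0.28 for silicon; under strain, the record's effective stiffnesses would replace the unstrained constants in the same relations.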

Air quality models, calculators, guidance and strategies are offered for estimating and projecting vehicle air pollution, including ozone or smog-forming pollutants, particulate matter and other emissions that pose public health and air quality concerns.

This report documents an effort to quantify the uncertainty of the calculated temperature data for the first Advanced Gas Reactor (AGR-1) fuel irradiation experiment conducted in the INL’s Advanced Test Reactor (ATR) in support of the Next Generation Nuclear Plant (NGNP) R&D program. Recognizing the uncertainties inherent in the physics and thermal simulations of the AGR-1 test, the results of the numerical simulations can be used in combination with statistical analysis methods to improve qualification of the measured data. Additionally, the temperature simulation data for AGR tests can be used for validation of the fuel transport and fuel performance simulation models. The crucial role of the calculated fuel temperatures in ensuring achievement of the AGR experimental program objectives requires accurate determination of the model temperature uncertainties. The report is organized into three chapters. Chapter 1 introduces the AGR Fuel Development and Qualification program and provides overviews of the AGR-1 measured data, the AGR-1 test configuration and test procedure, and the thermal simulation. Chapter 2 describes the uncertainty quantification procedure for the temperature simulation data of the AGR-1 experiment, namely: (i) identify and quantify uncertainty sources; (ii) perform sensitivity analysis for several thermal test conditions; and (iii) use uncertainty propagation to quantify the overall response temperature uncertainty. A set of issues associated with modeling uncertainties resulting from the expert assessments is identified. This also includes the experimental design to estimate the main effects and interactions of the important thermal model parameters. Chapter 3 presents the overall uncertainty results for the six AGR-1 capsules. This includes uncertainties for the daily volume-average and peak fuel temperatures, daily average temperatures at TC locations, and time-average volume-average and time-average peak fuel temperatures.
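Step (iii) of such a procedure is often carried out by sampling. As a hedged sketch (the surrogate model and all input distributions below are invented stand-ins, not the AGR-1 thermal model), Monte Carlo uncertainty propagation looks like:

```python
# Monte Carlo propagation of input uncertainties through a deliberately
# simple surrogate thermal model (conduction across a gas gap).
# Model and distributions are hypothetical, not the AGR-1 simulation.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

q   = rng.normal(1.0e5, 0.05 * 1.0e5, N)    # heat flux, W/m^2 (5% 1-sigma)
gap = rng.normal(5.0e-4, 0.10 * 5.0e-4, N)  # gas gap, m (10%)
k   = rng.normal(0.30, 0.04 * 0.30, N)      # gas conductivity, W/m/K (4%)
T_w = rng.normal(1000.0, 10.0, N)           # boundary temperature, K

T_fuel = T_w + q * gap / k                  # surrogate response
T_mean, T_std = T_fuel.mean(), T_fuel.std()

print("mean fuel temperature: %.1f K" % T_mean)
print("1-sigma uncertainty:   %.1f K" % T_std)
```

The sampled 1-sigma spread is close to the quadrature sum of the individual contributions, which is the kind of overall response-temperature uncertainty the report tabulates per capsule.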

1000 Ω and 1290.64 Ω coaxial resistors with calculable frequency dependence have been realized at PTB to be used in quantum Hall effect-based impedance measurements. In contradistinction to common designs of coaxial resistors, the design described in this paper makes it possible to remove the resistive element from the shield and to handle it without cutting the outer cylindrical shield of the resistor. Emphasis has been given to manufacturing technology and to suppressing unwanted sources of frequency dependence. The adjustment accuracy is better than 10 µΩ/Ω.
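A first-order lumped model often used for such resistors is Z(ω) ≈ R(1 + jωτ) with τ = L/R − RC; the sketch below evaluates it for hypothetical residual L and C (in a calculable design, τ is instead predictable from the geometry):

```python
# First-order lumped model of a coaxial resistor: Z(w) ~ R*(1 + j*w*tau),
# tau = L/R - R*C. The L and C values are hypothetical residuals; a
# calculable design determines tau from geometry rather than measurement.
import math

R = 1000.0      # ohm
L = 1.0e-6      # series inductance, H (assumed)
C = 20.0e-12    # parallel capacitance, F (assumed)

tau = L / R - R * C              # effective time constant, s
f = 1233.0                       # a typical bridge frequency, Hz
phase = 2 * math.pi * f * tau    # small-angle phase of Z, rad

print("tau = %.3e s" % tau)
print("phase at %.0f Hz = %.1f microrad" % (f, phase * 1e6))
```

With these assumed residuals the capacitive term dominates (τ ≈ −19 ns), giving a phase of order 100 µrad at kHz frequencies; the real part changes only at second order in ωτ, which is why such resistors can hold a calculable AC value at the µΩ/Ω level.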

The strong interaction between individual Rydberg atoms provides a powerful tool exploited in an ever-growing range of applications in quantum information science, quantum simulation and ultracold chemistry. One hallmark of the Rydberg interaction is that both its strength and angular dependence...... can be fine-tuned with great flexibility by choosing appropriate Rydberg states and applying external electric and magnetic fields. More and more experiments are probing this interaction at short atomic distances or with such high precision that perturbative calculations as well as restrictions...

Three problems of a reactor-calculational model are discussed with the help of symmetry considerations. 1/ A coarse mesh method applicable to any geometry is derived. It is shown that the coarse mesh solution can be constructed from a few standard boundary value problems. 2/ A second stage homogenization method is given based on the Bloch theorem. This ensures the continuity of the current and the flux at the boundary. 3/ The validity of the micro-macro separation is shown for heterogeneous lattices. A formula for the neutron density is derived for cell homogenization. (author)

We present perturbative calculations with the Wilson loop (WL). The dimensional regularization method is used, with special attention to the problem of divergences in the WL expansion at second and fourth order, in three and four dimensions. We show that the residue at the pole, in 4d, of the sum of the fourth-order graph contributions is important for charge renormalization. We compute, up to second order, the exact expression of the WL in three-dimensional gauge theories with topological mass, as well as its asymptotic behaviour at small and large distances. (author)
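For reference, the quantity being expanded can be written, in standard notation (not necessarily the authors' conventions), as

```latex
W(C) \;=\; \frac{1}{N}\Big\langle \operatorname{tr}\,\mathcal{P}\exp\Big( ig \oint_C A_\mu\, dx^\mu \Big)\Big\rangle
\;=\; 1 \;-\; \frac{g^2}{2}\, C_F \oint_C\!\oint_C dx^\mu\, dy^\nu\, D_{\mu\nu}(x-y) \;+\; \mathcal{O}(g^4),
```

where $D_{\mu\nu}$ is the free gluon propagator and $C_F$ the quadratic Casimir; the second- and fourth-order divergences discussed above arise in these contour integrals and are controlled by dimensional regularization.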

Five different methods which can be used to analytically calculate entropies that are nonconcave as functions of the energy in the thermodynamic limit are discussed and compared. The five methods are based on the following ideas and techniques: (i) microcanonical contraction, (ii) metastable branches of the free energy, (iii) generalized canonical ensembles with specific illustrations involving the so-called Gaussian and Betrag ensembles, (iv) the restricted canonical ensemble, and (v) the inverse Laplace transform. A simple long-range spin model having a nonconcave entropy is used to illustrate each method
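The common thread of these methods is that the ordinary canonical ensemble recovers only the concave envelope of the entropy. This can be seen numerically: for a toy nonconcave s(e) (an artificial example, not the spin model of the record), the double Legendre-Fenchel transform s**(e) departs from s(e) exactly in the nonconcave region.

```python
# Toy demonstration: the double Legendre-Fenchel transform s**(e) gives the
# concave envelope of s(e), so a nonconcave entropy cannot be recovered from
# the canonical free energy alone. The entropy below is artificial.
import numpy as np

e = np.linspace(-1.0, 1.0, 2001)
s = -e**4 + 0.5 * e**2            # nonconcave for |e| < 1/sqrt(12)

beta = np.linspace(-6.0, 6.0, 2001)
# Free-energy-like function: phi(beta) = inf_e [beta*e - s(e)]
phi = np.min(beta[:, None] * e[None, :] - s[None, :], axis=1)
# Double transform: s**(e) = inf_beta [beta*e - phi(beta)]
s_env = np.min(beta[:, None] * e[None, :] - phi[:, None], axis=0)

gap = s_env - s                   # zero where s is concave, positive otherwise
print("gap at e=0:  ", gap[1000])  # ~1/16: the envelope is flat here
print("gap at e=0.9:", gap[1900])  # ~0: s is locally concave here
```

The positive gap around e = 0 is the nonconcave region invisible to the canonical ensemble, which is precisely what motivates the generalized and restricted ensembles listed as methods (iii) and (iv).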