The lack of an evaluation standard for the safety coefficient based on the finite element method (FEM) limits the wide application of FEM to roller compacted concrete dams (RCCDs). In this paper, the strength reserve factor (SRF) method is adopted to simulate gradual failure and possible unstable modes of an RCCD system. Entropy theory and catastrophe theory are used to obtain the ultimate bearing resistance and failure criterion of the RCCD. The most dangerous sliding plane for RCCD failure is found using Latin hypercube sampling (LHS) and the auxiliary analysis of partial least squares regression (PLSR). Finally, a method for determining the evaluation standard of the RCCD safety coefficient based on FEM is put forward using least squares support vector machines (LSSVM) and particle swarm optimization (PSO). The proposed method is applied to the safety coefficient analysis of the Longtan RCCD in China. The calculation shows that RCCD failure is closely related to the interface strength, and that the Longtan RCCD is safe under the design condition. Considering the failure characteristics of RCCDs and combining the advantages of several well-established algorithms, the proposed method determines the evaluation standard for the safety coefficient of RCCDs based on FEM for the first time and can be generalized to any RCCD.

The paper presents a framework for the construction of a Monte Carlo finite volume element method (MCFVEM) for the convection-diffusion equation with a random diffusion coefficient, which is described as a random field. We first approximate the continuous stochastic field by a finite number of random variables via the Karhunen-Loève expansion and transform the initial stochastic problem into a deterministic one with a parameter in high dimensions. Then we generate independent identically distributed approximations of the solution by sampling the coefficient of the equation and employing the finite volume element variational formulation. Finally, the Monte Carlo (MC) method is used to compute the corresponding sample averages. The statistical error is estimated analytically and experimentally. A quasi-Monte Carlo (QMC) technique with Sobol sequences is also used to accelerate convergence, and experiments indicate that it can improve the efficiency of the Monte Carlo method.
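As a rough illustration of the pipeline described above (truncated Karhunen-Loève expansion of the random coefficient, per-sample finite volume solves, and Monte Carlo averaging), here is a minimal one-dimensional sketch; the eigenpairs, mesh, sample count and right-hand side are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D domain and mesh
n = 50                      # number of cells
x = (np.arange(n) + 0.5) / n

# Truncated Karhunen-Loeve expansion of log a(x) with (assumed, for
# illustration) eigenpairs: lambda_k ~ 1/k^2, phi_k(x) = sqrt(2) sin(k pi x).
M = 10                      # number of retained KL terms
lam = 1.0 / np.arange(1, M + 1) ** 2
phi = np.sqrt(2.0) * np.sin(np.pi * np.outer(np.arange(1, M + 1), x))

def sample_coefficient():
    """Draw one realization a(x) = exp(sum_k sqrt(lam_k) xi_k phi_k(x))."""
    xi = rng.standard_normal(M)
    return np.exp((np.sqrt(lam) * xi) @ phi)

def solve_fv(a):
    """Finite volume solve of -(a u')' = 1 on (0,1) with u(0)=u(1)=0."""
    h = 1.0 / n
    # harmonic averages of a at interior faces
    af = 2.0 * a[:-1] * a[1:] / (a[:-1] + a[1:])
    main = np.empty(n)
    main[0] = af[0] + 2 * a[0]       # Dirichlet boundary via ghost value
    main[-1] = af[-1] + 2 * a[-1]
    main[1:-1] = af[:-1] + af[1:]
    A = (np.diag(main) - np.diag(af, 1) - np.diag(af, -1)) / h**2
    return np.linalg.solve(A, np.ones(n))

# Plain Monte Carlo: sample average and a statistical error estimate
N = 200
sols = np.array([solve_fv(sample_coefficient()) for _ in range(N)])
mean_u = sols.mean(axis=0)
stderr = sols.std(axis=0, ddof=1) / np.sqrt(N)   # decays like O(N^{-1/2})
print(mean_u.max(), stderr.max())
```

A QMC variant would replace the `standard_normal` draws with an inverse-transformed Sobol sequence while leaving the per-sample solver untouched.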

A novel two-dimensional finite element method for modelling the diffusion which occurs in Fricke or ferrous sulphate type radiation dosimetry gels is presented. In most of the previous work, the diffusion coefficient has been estimated using simple one-dimensional models. This work presents a two-dimensional model which enables the diffusion coefficient to be determined in a much wider range of experimental situations. The model includes provision for the determination of a drift parameter. To demonstrate the technique, comparative diffusion measurements between ferrous sulphate radiation dosimetry gels, with and without xylenol orange chelating agent and carbohydrate additives, have been undertaken. Diffusion coefficients of (9.7±0.4, 13.3±0.6 and 9.5±0.8) × 10⁻³ cm² h⁻¹ were determined for ferrous sulphate radiation dosimetry gels with and without xylenol orange, and with xylenol orange and sucrose additives, respectively. Copyright (2001) Australasian College of Physical Scientists and Engineers in Medicine.

Balancing Domain Decomposition by Constraints (BDDC) methods have proven to be powerful preconditioners for large and sparse linear systems arising from the finite element discretization of elliptic PDEs. Condition number bounds can be theoretically established that are independent of the number of subdomains of the decomposition. The core of the methods resides in the design of a larger and partially discontinuous finite element space that allows for fast application of the preconditioner, where Cholesky factorizations of the subdomain finite element problems are additively combined with a coarse, global solver. Multilevel and highly scalable algorithms can be obtained by replacing the coarse Cholesky solver with a coarse BDDC preconditioner. BDDC methods have the remarkable ability to control the condition number, since the coarse space of the preconditioner can be adaptively enriched at the cost of solving local eigenproblems. The proper identification of these eigenproblems extends the robustness of the methods to any heterogeneity in the distribution of the coefficients of the PDEs, not only when the coefficient jumps align with the subdomain boundaries or when the high-contrast regions are confined to the interior of the subdomains. The specific adaptive technique considered in this paper does not depend upon any interaction of discretization and partition; it relies purely on algebraic operations. Coarse space adaptation in BDDC methods has attractive algorithmic properties, since the technique enhances the concurrency and the arithmetic intensity of the preconditioning step of the sparse implicit solver with the aim of controlling the number of iterations of the Krylov method in a black-box fashion, thus reducing the number of global synchronization steps and matrix-vector multiplications needed by the iterative solver; data movement and memory-bound kernels in the solve phase can thus be limited at the expense of extra local operations during the setup of the preconditioner.

X-ray absorption near edge structure (XANES) was used to study the near-edge mass-absorption coefficients of seven elements: Ti, V, Fe, Co, Ni, Cu and Zn. It is well known that, near the absorption edge of an element, a change of a few eV in the incident X-ray energy can alter the absorption coefficient by an order of magnitude. Consequently, published tables give only a few mass-absorption coefficient points near the absorption edge, and these are usually average values. Our results cover a wide range: about 505 mass-absorption coefficient values were measured in total for the seven elements. The investigation confirmed that XANES is a useful technique for multi-element absorption coefficient measurement. Details of the experimental methods and results are given and discussed. The experimental work was performed at the Beijing Synchrotron Radiation Facility. The measured values were compared with published data, and good agreement between the experimental results and the published data is obtained.

The transport of heavy impurity elements into a tokamak was studied theoretically. The viscosity coefficients of chromium impurities have been calculated in the 13- and 21-moment approximations. In the limit of strong magnetic fields (where the gyrofrequency of the species is large), it was found that the off-diagonal coefficient tends approximately to zero. This means that the friction force in the off-diagonal direction is very small; for the perpendicular viscosity coefficient the two approximations coincide with each other. 3 figs.

This book describes the basic finite element method. It covers the basic finite element method and data; black boxes; the writing of data; the definition of a vector; the definition of a matrix, matrix multiplication, matrix addition, and the unit matrix; the concept of the stiffness matrix in terms of spring force and displacement; the governing equation of an elastic body; the finite element method itself; Fortran methods and programming, such as the composition of a computer, the order of programming, data cards and Fortran cards; finite element programs; and the application to inelastic problems.

We develop and describe analytically a torsion method for measuring piezooptic coefficients associated with shear stresses. It is shown that the method makes it possible to significantly increase the accuracy of determination of the piezooptic coefficients. The method and the appropriate apparatus are verified experimentally on the example of LiNbO{sub 3} crystals. (copyright 2011 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

The main purpose of the investigation is to assess how post-accident repair of model car body parts influences the value of the coefficient of restitution. An evaluation of impact energy absorption by model car body parts repaired with MIG welding (with and without micro-jet cooling) was carried out. The results of the investigation show that the value of the coefficient of restitution changes with the speed of impact. The coefficient of restitution is higher for elements welded with micro-jet cooling than for elements welded with the ordinary method. This could influence the passive safety of the vehicle.

The authorized inputs of low-level radioactive waste into the Irish Sea from the British Nuclear Fuels plc reprocessing plant at Sellafield may be used to advantage to study the distribution and behaviour of artificial radionuclides in the marine environment. Apparent distribution coefficients (K_d) for the transuranium elements Np, Pu, Am and Cm have been determined by the analysis of environmental samples collected from UK coastal waters. The sampling methodology for obtaining suspended sediment-seawater K_d values by filtration is described and critically evaluated; artefacts may be introduced at the sample collection stage. K_d values have also been determined for seabed sediment-interstitial waters, and the precautions taken to preserve in-situ chemical conditions are described. Variations in K_d values are discussed in relation to distance from Sellafield, suspended load, redox conditions and oxidation state changes. (author)

The Monte Carlo (and multilevel Monte Carlo) finite element method can be used to approximate observables of solutions to diffusion equations with lognormally distributed diffusion coefficients, e.g. modelling ground water flow. Typical models use lognormal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high-frequency finite element approximation error (i.e. the error from frequencies larger than the mesh frequency) is not negligible and can be larger than the computable low-frequency error. This talk will address how the total error can be estimated by the computable error.

A new technique for treating the ion-optical effect of a dipole magnet has been developed to study the problems of commutation and monochromatization of a charged particle beam. The coefficients of linear transformation (CLT) of the charged particle beam for radial and axial motions in a magnetic dipole element (MDE), comprising a dipole magnet and two field-free gaps, are systematized and presented in a new form. A method for the graphic determination of MDE parameters and the main CLTs is given. The new form of the coefficients and the conditions for the feasibility of the transformations considerably facilitate the choice and calculation of dipole elements.

This research work is devoted to the footprint analysis of a steel-belted radial tyre (185/65 R14) under vertical static load using the finite element method. Two models have been developed: in the first model the tread patterns were replaced by simple ribs, while the second model included the details of the tread blocks. Linear elastic and hyperelastic (Arruda-Boyce) material models were selected to describe the mechanical behavior of the reinforcing and rubbery parts, respectively. The two finite element models of the tyre were analyzed under inflation pressure and vertical static loads. The second model (with detailed tread patterns) was analyzed with and without friction between the tread and the contact surface. At every stage of the analysis, the results were compared with experimental data to confirm the accuracy and applicability of the model. Results showed that neglecting the tread pattern design not only reduces the computational cost and effort, but also that the differences between the computed deformations are not significant. However, more complicated variables such as the shape and area of the footprint zone and the contact pressure are affected considerably by the finite element model selected for the tread blocks. In addition, the inclusion of friction, even in the static state, changes these variables significantly.

The article deals with the results of research on the influence of aerodynamic coefficient values on the trajectory elements and the stability parameters of classic axisymmetric projectiles. It presents the characteristic functions of the aerodynamic coefficients with regard to aerodynamic parameters and the projectile body shape. The trajectory elements of the model of classic axisymmetric projectiles and the analyses of their changes are presented with respect to the aerodynamic coefficient values. Classic axisymmetric projectiles fly through the atmosphere using the muzzle velocity as the initial energy resource, so the aerodynamic force and moment have the most significant influence on the motion of the projectile. The aerodynamic force and moment components, represented as aerodynamic coefficients, depend on the motion velocity, i.e. the flow velocity, the flow features produced by the projectile shape and position in the flow, and the angular velocity (rate) of the body. The functional dependence of the aerodynamic coefficients on certain influential parameters, such as the angle of attack and the angular velocity components, is expressed by the derivatives of the aerodynamic coefficients. The determination of the aerodynamic coefficients and derivatives enables a complete definition of the aerodynamic force and moment acting on the classic projectile. The projectile motion problem is considered in relation to defining the projectile stability parameters and the conditions under which stability occurs. The comparative analyses of aerodynamic coefficient values obtained by numerical methods, semi-empirical calculations and experimental research give a preliminary evaluation of the quality of the determined values. The flight simulation of the motion of a classic axisymmetric projectile, whose shape is defined by the aerodynamic coefficient values, enables comparative analyses of the trajectory elements and stability characteristics. The model of the classic projectile

Highlights:
- Beta attenuation coefficients of absorber materials were found in this study.
- For this process, a new method (the timing method) was suggested.
- The obtained beta attenuation coefficients were compatible with the results from the traditional method.
- The timing method can be used to determine beta attenuation coefficients.

Abstract: Using a counting system with a plastic scintillation detector, beta linear and mass attenuation coefficients were determined for bakelite, Al, Fe and plexiglass absorbers by means of the timing method. To show the accuracy and reliability of the results obtained through this method, the coefficients were also found via the conventional energy method. The beta attenuation coefficients obtained from both methods were compared with each other and with literature values. Beta attenuation coefficients obtained through the timing method were found to be compatible with the values obtained from the conventional energy method and with the literature.
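The underlying attenuation relation I = I₀ exp(-μx) is the same for both the timing and the energy method; a minimal sketch of extracting linear and mass attenuation coefficients from count-rate data by a log-linear fit follows (all numbers are synthetic placeholders, not measurements from the study).

```python
import numpy as np

# Synthetic count rates for increasing absorber thickness (cm).
# Values follow I = I0 * exp(-mu * x) with mu = 1.5 cm^-1 (assumed,
# for illustration only -- not measured values from the paper).
x = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])
I0, mu_true = 1000.0, 1.5
counts = I0 * np.exp(-mu_true * x)

# Linear attenuation coefficient from a log-linear least-squares fit:
#   ln I = ln I0 - mu * x
slope, intercept = np.polyfit(x, np.log(counts), 1)
mu_fit = -slope

# Mass attenuation coefficient = mu / density (e.g. Al, 2.70 g/cm^3)
rho_al = 2.70
mu_mass = mu_fit / rho_al
print(round(mu_fit, 3), round(mu_mass, 3))
```

With real (noisy) count data the same fit applies, but counting statistics would motivate weighting the fit by the count uncertainties.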

This thesis presents a two-grid algorithm based on Smoothed Aggregation Spectral Element Agglomeration Algebraic Multigrid (SA-ρAMGe) combined with adaptation. The aim is to build an efficient solver for the linear systems arising from the discretization of second-order elliptic partial differential equations (PDEs) with stochastic coefficients. Examples include PDEs that model subsurface flow with a random permeability field. During a Markov chain Monte Carlo (MCMC) simulation process, which draws PDE coefficient samples from a certain distribution, the PDE coefficients change, and hence the resulting linear systems to be solved change. At every such step the system (the discretized PDE) needs to be solved, and the computed solution is used to evaluate some functional(s) of interest that then determine whether the coefficient sample is acceptable. The MCMC process is hence computationally intensive and requires the solvers used to be efficient and fast. The fact that the linear system changes at every step of the MCMC process means that a solver built for the old problem may not be as efficient for the problem corresponding to the newly sampled coefficient. This motivates the main goal of our study: to adapt an existing solver to handle the problem with the changed coefficient, with the objective of being faster and more efficient than building a completely new solver from scratch. Our approach utilizes the local element matrices (for the problem with changed coefficients) to build local problems associated with the agglomerated elements constructed by the method (a set of subdomains that cover the given computational domain). We solve a generalized eigenproblem for each set in a subspace spanned by the previous local coarse space (used for the old solver) and a vector, a component of the error, that the old solver cannot handle. A portion of the spectrum of these local eigenproblems (corresponding to eigenvalues close to zero) form the

In this paper, a method for obtaining nonlinear stiffness coefficients in modal coordinates for geometrically nonlinear finite-element models is developed. The method requires application of a finite-element program with a geometrically nonlinear static capability; the MSC/NASTRAN code is employed for this purpose. The equations of motion of a multi-degree-of-freedom (MDOF) system are formulated in modal coordinates. A set of linear eigenvectors is used to approximate the solution of the nonlinear problem. The random vibration problem of the MDOF nonlinear system is then considered. The solutions obtained by application of two different versions of a stochastic linearization technique are compared with linear and exact (analytical) solutions in terms of root-mean-square (RMS) displacements and strains for a beam structure.

First-principles calculations based on density functional theory have been used to calculate the temperature-dependent dilute tracer diffusion coefficients for 47 substitutional alloying elements in hexagonal close-packed (hcp) Mg by combining transition state theory and an 8-frequency model. The minimum energy pathways and the saddle point configurations during solute migration are calculated with the climbing image nudged elastic band method. Vibrational properties are obtained using the quasi-harmonic Debye model with inputs from first-principles calculations. An improved generalized gradient approximation, PBEsol, is used in the present first-principles calculations, which is able to describe well both vacancy formation energies and vibrational properties. It is found that the solute diffusion coefficients in hcp Mg are roughly inversely proportional to the bulk modulus of the dilute alloys, which reflects the solutes' bonding to Mg. Transition metal elements with d electrons show strong interactions with Mg and have large diffusion activation energies. Correlation effects are not negligible for the solutes Ca, Na, Sr, Se, Te, and Y, for which the direct solute migration barriers are much smaller than the solvent (Mg) migration barriers. The calculated diffusion coefficients are in remarkable agreement with available experimental data in the literature.
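The temperature dependence of such tracer diffusion coefficients is conventionally reported in Arrhenius form, D(T) = D₀ exp(-Q/RT); a minimal sketch follows, with a placeholder prefactor and activation energy (not the first-principles values of the paper).

```python
import math

# Arrhenius form of a dilute tracer diffusion coefficient:
#   D(T) = D0 * exp(-Q / (R * T))
# D0 and Q below are placeholder values for illustration, not the
# first-principles results of the study.
R = 8.314  # gas constant, J/(mol K)

def diffusivity(T, D0=1.0e-5, Q=1.35e5):
    """Diffusion coefficient (m^2/s) at absolute temperature T (K)."""
    return D0 * math.exp(-Q / (R * T))

for T in (500.0, 600.0, 700.0):
    print(T, diffusivity(T))
```

Plotting ln D against 1/T gives a straight line whose slope is -Q/R, which is how experimental data are usually compared against calculated activation energies.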

Many students, engineers, scientists and researchers have benefited from the practical, programming-oriented style of the previous editions of Programming the Finite Element Method, learning how to develop computer programs to solve specific engineering problems using the finite element method. This new fifth edition offers timely revisions that include programs and subroutine libraries fully updated to Fortran 2003, which are freely available online, and provides updated material on advances in parallel computing, thermal stress analysis, plasticity return algorithms, and convection boundary conditions.

Recently, high-purity aluminium has come into use in semiconductor devices, among other applications. It is required that trace impurities be reduced and that their content be quantitatively evaluated. In this study, the distribution patterns of many trace impurities in 99.999 % aluminium ingots, which were purified using a normal freezing method, were evaluated by INAA. The effective distribution coefficient k for each detected element was calculated using a theoretical distribution equation for the normal freezing method. As a result, the only detected element with k > 1 was Hf. In particular, La, Sm, U and Th could be effectively purified, but Sc and Hf could scarcely be purified. Furthermore, it was found that slower freezing gave an effective distribution coefficient closer to the equilibrium distribution coefficient, and that the effective distribution coefficient became smaller with larger atomic radius. (author)
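The theoretical distribution equation for normal freezing mentioned above is commonly taken as the Scheil relation C(g) = k·C₀·(1-g)^(k-1), where g is the solidified fraction; the sketch below evaluates it for illustrative k values (not the coefficients determined in the study).

```python
import numpy as np

# Scheil (normal freezing) impurity profile along the ingot:
#   C(g) = k * C0 * (1 - g)**(k - 1)
# g: solidified fraction, C0: initial melt concentration,
# k: effective distribution coefficient (values below are illustrative).

def scheil_profile(g, C0, k):
    """Impurity concentration in the solid at solidified fraction g."""
    return k * C0 * (1.0 - g) ** (k - 1.0)

g = np.linspace(0.0, 0.9, 10)
for k in (0.1, 0.5):   # k < 1: impurity is rejected toward the last-frozen end
    C = scheil_profile(g, C0=1.0, k=k)
    print(k, C[0], C[-1])
```

Fitting measured concentration profiles to this relation is one way the effective distribution coefficient k is extracted from INAA data.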

We propose a convenient formulation of elemental transport coefficients in chemically reacting and plasma flows locally approaching thermodynamic equilibrium. A set of transport coefficients for elemental diffusion velocities, heat flux, and electric current is introduced. These coefficients relate the transport fluxes to the electric field and to the spatial gradients of elemental fractions, pressure, and temperature. The proposed formalism, based on chemical elements and fully symmetric with the classical transport theory based on chemical species, is particularly suitable for modelling mixing and demixing phenomena due to diffusion of chemical elements. The aim of this work is threefold: to define a simple and rigorous framework suitable for numerical implementation, to allow order-of-magnitude estimations and qualitative predictions of elemental transport phenomena, and to gain a deeper insight into the physics of chemically reacting flows near local equilibrium.

New finite-element methods are proposed for mixed variational formulations. The methods are constructed by adding various least-squares-like terms to the classical Galerkin method. The additional terms involve integrals over element interiors and include mesh-parameter-dependent coefficients. The methods are designed to enhance stability. Consistency is achieved in the sense that exact solutions identically satisfy the variational equations. Applied to several problems, simple finite-element interpolations are rendered convergent, including convenient equal-order interpolations that are generally unstable within the Galerkin approach. The methods are subdivided into two classes according to the manner in which stability is attained: (1) methods circumventing the Babuska-Brezzi condition; (2) methods satisfying the Babuska-Brezzi condition. Convergence is established for each class of methods. Applications of the first class of methods to Stokes flow and compressible linear elasticity are presented. The second class of methods is applied to the Poisson, Timoshenko beam and incompressible elasticity problems. Numerical results demonstrate the good stability and accuracy of the methods and confirm the error estimates.

A method for determining activity coefficients by molecular dynamics simulations is presented. It is an extension of the OPAS (osmotic pressure for the activity of the solvent) method in previous work for studying the solvent activity in electrolyte solutions. That method is extended here to study activities of all components in mixtures of molecular species. As an example, activity coefficients in liquid mixtures of water and methanol are calculated for 298.15 K and 323.15 K at 1 bar using molecular models from the literature. These dense and strongly interacting mixtures pose a significant challenge to existing methods for determining activity coefficients by molecular simulation. It is shown that the new method yields accurate results for the activity coefficients which are in agreement with results obtained with a thermodynamic integration technique. As the partial molar volumes are needed in the proposed method, the molar excess volume of the system water + methanol is also investigated.
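The osmotic-pressure route behind OPAS-type methods rests on the relation ln a_s = -Π·V̄_s/(RT) between the osmotic pressure Π and the solvent activity a_s; a minimal sketch follows, with illustrative inputs rather than simulation results from the paper.

```python
import math

# Osmotic-pressure route to the solvent activity (the relation behind
# OPAS-type approaches):  ln a_s = -Pi * Vbar_s / (R * T).
# Pi and Vbar below are illustrative inputs, not simulation results.
R = 8.314  # gas constant, J/(mol K)

def solvent_activity(Pi, Vbar, T):
    """Solvent activity from osmotic pressure Pi (Pa), partial molar
    volume Vbar (m^3/mol) and temperature T (K)."""
    return math.exp(-Pi * Vbar / (R * T))

# e.g. a water-like solvent at 298.15 K, Vbar ~ 1.8e-5 m^3/mol, Pi = 5 MPa
a = solvent_activity(5.0e6, 1.8e-5, 298.15)
print(a)
```

This is also why the partial molar volumes mentioned in the abstract are needed: they enter the conversion from the simulated osmotic pressure to the activity.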

The subject of the paper includes theoretical considerations, the conducting of experimental tests, and the analysis of the obtained test results related to the determination of the coefficient of static friction of previously heat-treated contact pairs. Before the procedure of determining the coefficient of static friction, one contact element is heated at temperatures ranging from ambient temperature to 280 °C and then cooled down to ambient temperature. The results of experimental tests of five different materials show that, depending on the heat treatment of one contact element, there is a significant decrease in the coefficient of static friction. The authors consider that the decrease in the coefficient of static friction is related to oxide formation and changes in the surface layer of the previously heat-treated contact element.

This book covers all basic areas of mechanical engineering, such as fluid mechanics, heat conduction, beams, and elasticity, with detailed derivations of the mass, stiffness, and force matrices. It is especially designed to give the reader a physical feeling for finite element approximation through the application of finite elements to the elevation of an elastic membrane. A detailed treatment of computer methods with numerical examples is provided. In the fluid mechanics chapter, the conventional and vorticity transport formulations for viscous incompressible fluid flow are presented, with a discussion of the method of solution. The variational and Galerkin formulations of the heat conduction, beam, and elasticity problems are also discussed in detail. Three computer codes are provided to solve the elastic membrane problem. One of them solves the Poisson equation; the second computer program handles two-dimensional elasticity problems; and the third one presents the three-dimensional transient heat conduction problem.

The most important commercialized methods of attenuation correction in SPECT are based on an attenuation coefficient map obtained from a transmission imaging method. The transmission imaging system can be a linear radionuclide source or an X-ray CT system. The image from the transmission imaging system is not useful unless the measured attenuation coefficient or CT number is converted to the attenuation coefficient at the SPECT energy. In this paper we evaluate the validity and estimate the error of the most widely used methods for this transformation. The final result shows that the methods which use a linear or multi-linear curve incur an error in their estimation. The value of mA is not important, but the patient thickness is very important and can introduce an error of more than 10 percent into the final result.
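A common form of the CT-number-to-attenuation-coefficient transformation discussed above is a bilinear (piecewise-linear) curve; the sketch below uses approximate water and bone values at roughly 140 keV as illustrative assumptions, not the calibration evaluated in the paper.

```python
# Bilinear (piecewise-linear) conversion of CT numbers (HU) to linear
# attenuation coefficients at a SPECT photon energy. MU_WATER and
# MU_BONE are approximate values for ~140 keV, used here purely as
# illustrative assumptions.

MU_WATER = 0.150   # cm^-1, water at ~140 keV (approximate)
MU_BONE = 0.284    # cm^-1, cortical bone at ~140 keV (approximate)

def hu_to_mu(hu):
    """Map a CT number (HU) to an attenuation coefficient (cm^-1)."""
    if hu <= 0:
        # segment from air (-1000 HU, mu = 0) to water (0 HU)
        return MU_WATER * (hu + 1000.0) / 1000.0
    # segment from water (0 HU) to an assumed bone point at +1000 HU
    return MU_WATER + (MU_BONE - MU_WATER) * hu / 1000.0

for hu in (-1000, 0, 500, 1000):
    print(hu, round(hu_to_mu(hu), 4))
```

The error analysis in the paper concerns exactly this kind of curve: the slope of the soft-tissue and bone segments depends on the effective CT energy, while the kVp/mA settings matter far less than the patient thickness.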

The problem of computing quantum-accurate design-scale solutions to mechanics problems is rich with applications and serves as the background to modern multiscale science research. The problem can be broken into component problems comprised of communicating across adjacent scales, which when strung together create a pipeline for information to travel from quantum scales to design scales. Traditionally, this involves connections between (a) quantum electronic structure calculations and molecular dynamics and between (b) molecular dynamics and local partial differential equation models at the design scale. The second step, (b), is particularly challenging since the appropriate scales of molecular dynamics and local partial differential equation models do not overlap. The peridynamic model for continuum mechanics provides an advantage in this endeavor, as the basic equations of peridynamics are valid at a wide range of scales, limiting from the classical partial differential equation models valid at the design scale to the scale of molecular dynamics. In this work we focus on the development of multiscale finite element methods for the peridynamic model, in an effort to create a mathematically consistent channel for microscale information to travel from the upper limits of the molecular dynamics scale to the design scale. In particular, we first develop a Nonlocal Multiscale Finite Element Method which solves the peridynamic model at multiple scales to include microscale information at the coarse scale. We then consider a method that solves a fine-scale peridynamic model to build element-support basis functions for a coarse-scale local partial differential equation model, called the Mixed Locality Multiscale Finite Element Method. Given decades of research and development into finite element codes for the local partial differential equation models of continuum mechanics, there is a strong desire to couple local and nonlocal models to leverage the speed and state of the

An electrophoretic method for measuring ion diffusion coefficients in aqueous solutions is developed. The value of the diffusion coefficient can be determined from the linear relationship between the squared standard deviation of the electrophoretic zone and the time from the start of the diffusion process. Using a device for horizontal zone electrophoresis in a free electrolyte, a series of diffusion experiments was performed with no-carrier-added radionuclides at microconcentrations (10⁻⁹-10⁻¹⁰ M). Diffusion coefficients of ¹¹¹In(III), ¹⁷⁵Hf(IV) and ²³⁷Pu(VI) ions at 25 °C were determined in nitric acid media. Simultaneous determination of the diffusion coefficient and the electrophoretic mobility allows one to calculate the effective charge of the investigated ions in accordance with the Nernst-Einstein law.
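The variance-growth relation underlying the method, σ²(t) = σ₀² + 2Dt, together with the Nernst-Einstein estimate of the effective charge, z_eff = u·RT/(D·F), can be sketched as follows (D, σ₀² and the mobility u are illustrative placeholders, not the measured values).

```python
import numpy as np

# The zone variance grows linearly in time: sigma^2(t) = sigma0^2 + 2*D*t.
# Synthetic data below; D_true, sigma0_sq and the mobility u are
# illustrative placeholders, not measured values from the paper.
D_true = 1.2e-5                     # cm^2/s
sigma0_sq = 0.01                    # cm^2
t = np.linspace(0.0, 3600.0, 7)     # s
sigma_sq = sigma0_sq + 2.0 * D_true * t

# D from the slope of a linear least-squares fit
slope, intercept = np.polyfit(t, sigma_sq, 1)
D_fit = slope / 2.0

# Nernst-Einstein effective charge: z_eff = u * R * T / (D * F)
R, F, T = 8.314, 96485.0, 298.15    # J/(mol K), C/mol, K
u = 1.4e-3                          # cm^2 V^-1 s^-1, illustrative mobility
z_eff = u * R * T / (D_fit * F)
print(D_fit, z_eff)
```

In the actual experiment the zone variance would come from fitted concentration profiles at successive times, but the slope-to-D and mobility-to-charge steps are exactly these.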

These Lecture Notes discuss concepts of 'self-adaptivity' in the numerical solution of differential equations, with emphasis on Galerkin finite element methods. The key issues are a posteriori error estimation and automatic mesh adaptation. Besides the traditional approach of energy-norm error control, a new duality-based technique, the Dual Weighted Residual method for goal-oriented error estimation, is discussed in detail. This method aims at the economical computation of arbitrary quantities of physical interest by properly adapting the computational mesh. This is typically required in the design cycles of technical applications. For example, the drag coefficient of a body immersed in a viscous flow is computed, then it is minimized by varying certain control parameters, and finally the stability of the resulting flow is investigated by solving an eigenvalue problem. 'Goal-oriented' adaptivity is designed to achieve these tasks with minimal cost. At the end of each chapter some exercises are posed in order ...

lognormal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high frequency finite element approximation error (i.e. the error from frequencies larger than the mesh frequency) is not negligible

Correlation coefficient is one of the broadly used indexes in multi-criteria decision-making (MCDM) processes. However, some important issues related to correlation coefficient utilization within probabilistic hesitant fuzzy environments remain to be addressed. The purpose of this study is to introduce an MCDM method based on correlation coefficients that utilizes probabilistic hesitant fuzzy information. First, the covariance and correlation coefficient between two probabilistic hesitant fuzzy elements (PHFEs) are introduced, and the properties of the proposed covariance and correlation coefficient are discussed. In addition, the northwest corner rule is introduced to obtain the expected mean related to the product of two PHFEs. Second, the weighted correlation coefficient is proposed to make the proposed MCDM method more applicable, and its properties are also discussed. Finally, an illustrative example demonstrates the practicality and effectiveness of the proposed method. The example shows that the correlation coefficient proposed in this paper lies in the interval [−1, 1], which not only considers the strength of the relationship between the PHFEs but also whether the PHFEs are positively or negatively related. The advantage of this method is that it avoids inconsistency of the decision-making result due to loss of information.

This work deals with a measurement of heat transfer from a heated flat plate on which a synthetic jet impacts perpendicularly. Measurement of the heat transfer coefficient (HTC) is carried out using the hot wire anemometry method with a glue film probe Dantec 55M47. The paper also presents results of velocity profile measurements and turbulence intensity calculations.

This paper presents a hot stamping experimentation and three methods for calculating the Interfacial Heat Transfer Coefficient (IHTC) of 22MnB5 boron steel. Comparison of the calculation results shows an average error of 7.5% for the heat balance method, 3.7% for Beck's nonlinear inverse estimation method (Beck's method), and 10.3% for the finite-element-analysis-based optimization method (the FEA method). Beck's method is a robust and accurate method for identifying the IHTC in hot stamping applications. The numerical simulation using the IHTC identified by Beck's method can predict the temperature field with a high accuracy. - Highlights: • A theoretical formula was derived for direct calculation of IHTC. • Beck's method is a robust and accurate method for identifying IHTC. • The finite element method can be used to identify an overall equivalent IHTC

A method was proposed for determining the mass absorption coefficient of gamma rays for compounds, alloys and mixtures. It is based on simulating interaction processes of gamma rays with target elements having atomic numbers from Z=1 to Z=92 using the MCSHAPE software. Intensities of Compton-scattered gamma rays at saturation thicknesses and at a scattering angle of 90° were calculated for incident gamma rays of different energies. The obtained results showed that the intensity of Compton-scattered gamma rays at saturation and the mass absorption coefficients can be described by mathematical formulas. These were used to determine mass absorption coefficients for compounds, alloys and mixtures from knowledge of their Compton-scattered intensities. The method was tested by calculating mass absorption coefficients for some compounds, alloys and mixtures. There is a good agreement between the obtained results and those calculated using the WinXCom software. The advantages and limitations of the method were discussed. - Highlights: • Compton scattering of γ-rays was used for determining mass absorption coefficients. • Scattered intensities were determined by the MCSHAPE software. • Mass absorption coefficients were determined for some compounds, mixtures and alloys. • Mass absorption coefficients were calculated by the WinXCom software. • Good agreement was found between determined and calculated results

The atomic absorption coefficient, μ_a, and the mass absorption coefficient, μ/ρ, have been calculated for the elements Li to Bi and U, based on both photoelectric and scattering effects. Tables include the μ_a and μ/ρ values (i) at 0.01 Å intervals in the wavelength range from 0.1 to 2.89 Å and (ii) at 0.0001 Å intervals in the neighborhood of the K, L₁, L₂, and L₃ absorption edges. (author)

The FORTRAN 77 code PHOTAC to compute photon attenuation coefficients of elements and compounds is described. The code is based on the semi-analytical approximate atomic cross sections proposed by Baro et al. (1994). Atomic cross sections for coherent and incoherent scattering and for pair production are obtained as integrals of the corresponding differential cross sections. These integrals are evaluated, to a pre-selected accuracy, by using a 20-point Gauss adaptive integration algorithm. Calculated attenuation coefficients agree with recently compiled databases to within about 1%, in the energy range from 1 keV to 1 GeV. The complete source listing of the program PHOTAC is included. (Author) 14 refs.
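The adaptive scheme can be illustrated with a short sketch: compare a 20-point Gauss-Legendre estimate over an interval with the sum over its two halves, and bisect until the pre-selected accuracy is met. This is an illustrative reconstruction of the kind of algorithm described, not PHOTAC's actual code:

```python
import numpy as np

# 20-point Gauss-Legendre nodes and weights on [-1, 1]
_X20, _W20 = np.polynomial.legendre.leggauss(20)

def _gauss20(f, lo, hi):
    """Single 20-point Gauss-Legendre estimate of the integral of f on [lo, hi]."""
    mid, half = (lo + hi) / 2.0, (hi - lo) / 2.0
    return half * np.sum(_W20 * f(mid + half * _X20))

def adaptive_gauss(f, a, b, tol=1e-10):
    """Bisect until the whole-interval estimate agrees with its two halves."""
    whole = _gauss20(f, a, b)
    m = (a + b) / 2.0
    halves = _gauss20(f, a, m) + _gauss20(f, m, b)
    if abs(halves - whole) < tol:
        return halves
    return adaptive_gauss(f, a, m, tol / 2.0) + adaptive_gauss(f, m, b, tol / 2.0)
```

For a smooth integrand such as sin on [0, π] the first comparison already meets the tolerance and the quadrature returns the exact value 2 to near machine precision; rougher cross-section shapes trigger the recursive bisection.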

An application is made of the method introduced by Fischer, H.B. (1968), 'Dispersion prediction in natural streams', Journal of the Sanitary Engineering Division, ASCE, vol. 94, no. SA5, Proc. Paper 6169, pp. 927-943, for the calculation of the dispersion coefficient based on Taylor's model. The aim is to develop a method which avoids the necessity of having an instantaneous impulse at the entrance section (1st section) of the system being measured. The dispersion coefficient is determined by curve fitting the experimental response in the 2nd section and that obtained with the model by means of the non-linear least-squares method. The same method is applied with the residence time distribution function. The theoretical differences between these two functions and their results are discussed. By adjusting the two model parameters in all these calculations, the dispersion coefficient and the mean velocity are determined simultaneously. A comparison between the method of moments and Fischer's formulation is also done using the same experimental data. (E.G.)
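The simultaneous fit of dispersion coefficient and mean velocity can be sketched as follows, using the classical Gaussian impulse solution of the one-dimensional advection-dispersion (Taylor) model and a crude grid search in place of a full non-linear least-squares routine (all names and numbers are illustrative, not from the paper):

```python
import numpy as np

def impulse_response(t, D, u, x=10.0):
    """Concentration at distance x after a unit instantaneous impulse,
    from the 1-D advection-dispersion (Taylor) model."""
    t = np.asarray(t, dtype=float)
    c = np.zeros_like(t)
    ok = t > 0
    c[ok] = (np.exp(-(x - u * t[ok]) ** 2 / (4.0 * D * t[ok]))
             / np.sqrt(4.0 * np.pi * D * t[ok]))
    return c

def fit_dispersion(t, c_obs, D_grid, u_grid):
    """Pick (D, u) minimizing the sum of squared residuals - a stand-in
    for the paper's non-linear least-squares fit of both parameters."""
    best_D, best_u, best_sse = None, None, np.inf
    for D in D_grid:
        for u in u_grid:
            sse = float(np.sum((impulse_response(t, D, u) - c_obs) ** 2))
            if sse < best_sse:
                best_D, best_u, best_sse = D, u, sse
    return best_D, best_u

# Synthetic response generated with D = 0.5, u = 1.0 and recovered by the fit
t = np.linspace(1.0, 20.0, 40)
c_obs = impulse_response(t, 0.5, 1.0)
D_fit, u_fit = fit_dispersion(t, c_obs, [0.3, 0.4, 0.5, 0.6], [0.8, 0.9, 1.0, 1.1])
```

A production fit would replace the grid search with a damped Gauss-Newton or Levenberg-Marquardt iteration, but the objective, residuals of the modeled 2nd-section response, is the same.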

Moisture accumulation and transport in building barriers is an important feature that influences building performance, causing serious exploitation problems such as increased energy use, mold and bacteria growth, and deterioration of indoor air parameters that may lead to sick building syndrome (SBS). One of the parameters used to describe the moisture characteristic of a material is the water absorption coefficient, a measure of the capillary behavior of the material as a function of time and the surface area of the specimen. It is usually determined using gravimetric methods according to the EN 1925:1999 standard. In this article we demonstrate the possibility of determining the water absorption coefficient of autoclaved aerated concrete (AAC) using the Time Domain Reflectometry (TDR) method. TDR is an electric technique that has been adapted from soil science and can be successfully used for real-time monitoring of moisture transport in building materials and envelopes. Data achieved using TDR readouts show high correlation with the standard method of moisture absorptivity coefficient determination.

Models describing the fuel-to-cladding heat transfer coefficient in a reactor fuel element are reviewed critically. A new model is developed with contributions from solid, fluid and radiation heat transfer components. It provides a consistent description of the transition from an open gap to the contact case. Model parameters are easily available and highly independent of different combinations of material surfaces. There are no restrictions for fast transients. The model parameters are fitted to 388 data points under reactor conditions. For model verification another 274 data points of steel-steel and aluminium-aluminium interfaces, respectively, were used. The fluid component takes into account peak-to-peak surface roughnesses and, approximately, also the wavelengths of surface roughnesses. For minor surface roughnesses normally prevailing in reactor fuel elements the model asymptotically yields Ross' and Stoute's model for the open gap, which is thus confirmed. Experimental contact data can be interpreted in very different ways. The new model differs greatly from Ross' and Stoute's contact term and results in better correlation coefficients. The numerical algorithm provides an adequate representation for calculating the fuel-to-cladding heat transfer coefficient in large fuel element structural analysis computer systems. (orig.)

In linear algebra, one can associate an equation with each square matrix: its characteristic or secular equation. From this equation one obtains the characteristic polynomial, which encodes several important properties of the matrix, such as its eigenvalues and its determinant. The first method for calculating the coefficients of this polynomial was proposed by the French astronomer Urbain Jean Joseph Le Verrier (1811-1877); since then, many methods have been proposed to calculate these coefficients. In this work the author proposes a new method, and bibliographical references are given in which the calculations with other known methods are explained, taking as reference the matrix used by Le Verrier. It is concluded that the method proposed here, besides being the only Mexican method known, has the advantage of being easy to understand and to compute: in the operations it carries out it does not use division, and it avoids fractions for matrices whose entries are integers. This gives it great didactic value for classroom use, as well as applications in nuclear reactors and genetic engineering. (Author)
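For comparison, Le Verrier's classical scheme in its Faddeev-Le Verrier form, which, unlike the division-free method the author proposes, does divide a trace by k at each step, can be sketched as:

```python
import numpy as np

def faddeev_leverrier(A):
    """Coefficients c1..cn of det(lam*I - A) = lam^n + c1*lam^(n-1) + ... + cn.

    Classical recursion: M_k = A @ M_{k-1} + c_{k-1} * I,
    c_k = -trace(A @ M_k) / k, starting from M_0 = 0 and c_0 = 1.
    """
    n = A.shape[0]
    M = np.zeros((n, n))
    c = 1.0
    coeffs = []
    for k in range(1, n + 1):
        M = A @ M + c * np.eye(n)
        c = float(-np.trace(A @ M) / k)
        coeffs.append(c)
    return coeffs

# det(lam*I - A) = lam^2 - 4*lam + 3 for this symmetric 2x2 matrix
A = np.array([[2.0, 1.0], [1.0, 2.0]])
coeffs = faddeev_leverrier(A)  # [-4.0, 3.0]
```

The last coefficient gives the determinant up to sign, det(A) = (-1)^n * c_n, which is 3 here.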

Certain symmetry properties of standard quantities of the atomic shell theory for LS coupling are studied, namely, the commutation of quantum numbers of spin and quasispin in genealogical coefficients and in submatrix elements of irreducible tensor operators. The method of second quantization and quasispin has been used for obtaining new relations between genealogical coefficients. Similar relations have also been found for the submatrix elements of the irreducible tensor operators, as well as for genealogical coefficients with two and more split-off electrons. For the first time, explicit algebraic expressions are obtained in special cases for the quantities under study

The friction coefficient f of the K1000 HDS is further calculated to be 0.336 from the stress coefficient k_f. Notably, this research method for the friction coefficient is put forward by this paper for the first time. The method can provide an exact basis for HDS design and structure selection and can provide a guarantee for the safe operation of the reactor.

The FORTRAN 77 code PHOTAC to compute photon attenuation coefficients of elements and compounds is described. The code is based on the semi-analytical approximate atomic cross sections proposed by Baro et al. (1994). Photoelectric cross sections are calculated directly from a simple analytical expression. Atomic cross sections for coherent and incoherent scattering and for pair production are obtained as integrals of the corresponding differential cross sections. These integrals are evaluated, to a pre-selected accuracy, by using a 20-point Gauss adaptive integration algorithm. Calculated attenuation coefficients agree with recently compiled databases to within about 1%, in the energy range from 1 keV to 1 GeV. The complete source listing of the program PHOTAC is included

Highlights: • We attempted and optimized erbium loading methods to improve reactivity coefficients for LRSF-HTGR. • We elucidated the mechanism of the improvements for each erbium loading method by using the Bondarenko approach. • We concluded that loading erbium by embedding it into the graphite shaft is preferable. - Abstract: Erbium loading methods are investigated to improve reactivity coefficients of the Low Radiotoxic Spent Fuel High Temperature Gas-cooled Reactor (LRSF-HTGR). Highly enriched uranium is used as fuel to reduce the generation of toxicity from uranium-238. The power coefficients are positive without the use of any additive. Therefore, erbium is loaded into the core to obtain negative reactivity coefficients, owing to the large resonance peak of the neutron capture reaction of erbium-167. Several loading methods are examined to find one suitable for LRSF-HTGR: the erbium is mixed in a CPF fuel kernel, loaded by binary packing with fuel particles and erbium particles, or embedded into the graphite shaft deployed in the center of the fuel compact. It is found that erbium loading causes negative reactivity as moderator temperature reactivity and, from the viewpoint of heat transfer, it should be loaded into fuel pin elements for pin-in-block type fuel. Moreover, the erbium should be incinerated slowly to obtain negative reactivity coefficients even at the End Of Cycle (EOC). A loading method that effectively causes self-shielding should be selected to avoid incineration with burn-up. The incineration mechanism is elucidated using the Bondarenko approach. As a result, it is concluded that erbium embedded into the graphite shaft is preferable for LRSF-HTGR to ensure that the reactivity coefficients remain negative at EOC.

In this paper a new neutron transport method, called discrete elements (L_N), is derived and compared to discrete ordinates methods, theoretically and by numerical experimentation. The discrete elements method is based on discretizing the Boltzmann equation over a set of elements of angle. The discrete elements method is shown to be more cost-effective than discrete ordinates, in terms of accuracy versus execution time and storage, for the cases tested. In a two-dimensional test case, a vacuum duct in a shield, the L_N method is more consistently convergent toward a Monte Carlo benchmark solution

New finite elements are needed in research as well as in industry environments for the development of virtual prediction techniques. The design and implementation of novel finite elements for specific purposes is a tedious and time consuming task, especially for nonlinear formulations. The automation of this process can help to speed it up considerably, since the generation of the final computer code can be accelerated by several orders of magnitude. This book provides the reader with the knowledge required to employ modern automatic tools like AceGen within solid mechanics in a successful way. It covers the range from the theoretical background and algorithmic treatments to many different applications. The book is written for advanced students in the engineering field and for researchers in educational and industrial environments.

In the last few years, domain decomposition methods, previously developed and tested for standard finite element methods and elliptic problems, have been extended and modified to work for mortar and other nonconforming finite element methods. A survey will be given of work carried out jointly with Yves Achdou, Mario Casarin, Maksymilian Dryja and Yvon Maday. Results on the p- and h-p-version finite elements will also be discussed.

Step drawdown tests (SDT) are essential for hydrogeologists to determine aquifer loss and well loss parameters. In an SDT, a series of constant discharges with incremental rates is conducted to obtain the incremental drawdown in the pumping well. Pumping well efficiency (if the well is properly developed and designed), aquifer characteristics (transmissivity, storativity) and the discharge-drawdown relationship can be derived from an SDT. The well loss parameter is directly associated with the well efficiency. The main problem is to determine the correct well loss parameter in order to estimate aquifer characteristics. Walton (1962) stated that interpretation of the well efficiency is possible by determining the nonlinear head loss coefficient (C) with p equal to 2, and presented the following criteria: if C is less than 1800 m²/s⁵, the well is properly developed and designed; if C ranges from 1800 m²/s⁵ to 3600 m²/s⁵, the well has a mild deterioration; if C is greater than 3600 m²/s⁵, the well has severe clogging. Until now, several well-known computer programs such as AQTESOLV, AquiferWin32 and AquiferTest Pro can be found in the literature to evaluate well efficiency when the exponential parameter (p) equals 2. However, there is a lack of information on evaluating well efficiency for other values of the exponential parameter (p). The Strategic Water Storage & Recovery (SWSR) Project in Liwa, Abu Dhabi is a leading and unique hydrogeology project in the world because of both its financial and scientific dimensions. A total of 315 recovery wells have been drilled within the scope of the SWSR project. A Universal Well Efficiency Criteria (UWEC) is developed using 315 Step Drawdown Tests (SDT). UWEC is defined for different numbers of head loss equation coefficients. The results reveal that there is a strong correlation between the non-linear well loss coefficient (C) and the exponential parameter (p) up to a coefficient of determination
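In the step-drawdown model behind these criteria, total drawdown splits into aquifer loss and well loss, s_w = BQ + CQ^p, and well efficiency is the aquifer-loss share of the total. A minimal sketch for arbitrary exponent p (coefficient values below are illustrative, not project data):

```python
def well_efficiency(B, C, Q, p=2.0):
    """Percent well efficiency from the step-drawdown model s_w = B*Q + C*Q**p.

    B: aquifer (formation) loss coefficient, C: well loss coefficient,
    Q: pumping rate, p: well-loss exponent (2 in the classical Jacob form).
    """
    aquifer_loss = B * Q
    total_drawdown = aquifer_loss + C * Q ** p
    return 100.0 * aquifer_loss / total_drawdown

# A well with a small well-loss coefficient is nearly 100% efficient
eff = well_efficiency(B=0.01, C=1e-7, Q=100.0, p=2.0)
```

Because well loss grows as Q^p while aquifer loss grows only linearly, efficiency always falls as the pumping rate increases, which is why the criteria are stated in terms of C.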

Simple diffusion theory cannot be used to evaluate control rod worths in thermal neutron reactors because of the strongly absorbing character of the control material. However, reliable control rod worths can be obtained within the framework of diffusion theory if the control material is characterized by a set of mesh-dependent effective diffusion parameters. For thin slab absorbers the effective diffusion parameters can be expressed as functions of a suitably-defined pair of 'blackness coefficients'. Methods for calculating these blackness coefficients in the P1, P3, and P5 approximations, with and without scattering, are presented. For control elements whose geometry does not permit a thin slab treatment, other methods are needed for determining the effective diffusion parameters. One such method, based on reaction rate ratios, is discussed. (author)

The initial experiment and method for the nondestructive determination of fuel element burnup are given. The method eliminates the error which originates from the unknown local dependency of the attenuation coefficient for gamma rays in fuel. (author)

A method of lightening a radiation-darkened optical element in which visible optical energy or electromagnetic radiation having a wavelength in the range of from about 2000 to about 20,000 angstroms is directed into the radiation-darkened optical element. The method may be used to lighten a radiation-darkened optical element in-situ during use of the element to transmit data, by electronically separating the optical energy from the optical output through frequency filtering, data cooling, or interlacing the optic energy between data intervals

$\{f_i\}_{i=1}^{n}$ of the frame and the orthogonal projection $P_n$ onto its span. For $f \in \h$, $P_n f$ has a representation as a linear combination of $f_i$, $i = 1, 2, \dots, n$, and the corresponding coefficients can be calculated using finite dimensional methods. We find conditions implying that those coefficients...

Starting at 3-loop order, the massive Wilson coefficients for deep-inelastic scattering and the massive operator matrix elements describing the variable flavor number scheme receive contributions of Feynman diagrams carrying quark lines with two different masses. In the case of the charm and bottom quarks, the usual decoupling of one heavy mass at a time no longer holds, since the ratio of the respective masses, η = m_c²/m_b² ∼ 1/10, is not small enough. Therefore, the usual variable flavor number scheme (VFNS) has to be generalized. The renormalization procedure in the two-mass case is different from the single mass case derived earlier (I. Bierenbaum, J. Bluemlein, S. Klein, 2009). We present the moments N=2, 4 and 6 for all contributing operator matrix elements, expanding in the ratio η. We calculate the analytic results for general values of the Mellin variable N in the flavor non-singlet case, as well as for transversity and the matrix element A_gq^(3). We also calculate the two-mass scalar integrals of all topologies contributing to the gluonic operator matrix element A_gg. As it turns out, the expansion in η is usually inapplicable for general values of N. We therefore derive the result for general values of the mass ratio. From the single pole terms we derive, now in a two-mass calculation, the corresponding contributions to the 3-loop anomalous dimensions. We introduce a new general class of iterated integrals and study their relations and present special values. The corresponding functions are implemented in computer-algebraic form.

As it is inconvenient to use elements like hydrogen, carbon and oxygen in pure form for measurement of their gamma mass-attenuation coefficients, the measurements have to be done indirectly, using compounds of the elements or a mixture of them. We give here a simple method of measuring the total mass-attenuation coefficients μ/ρ of the elements in a compound simultaneously and in a single experiment, through measurements of the μ/ρ values of the concerned compounds and use of the mixture rule. The method is applied to the measurement of μ/ρ of hydrogen, carbon and oxygen by using acetone, ethanol and 1-propanol. Our results (for E_γ = 0.123-1.33 MeV) are seen to be in better agreement with the theoretical values of Hubbell and Seltzer (1995) [Hubbell, J.H. and Seltzer, S.M. (1995). Tables of X-ray mass attenuation coefficients and mass energy-absorption coefficients 1 keV to 20 MeV for elements Z=1 to 92 and 48 additional substances of dosimetric interest. NISTIR 5632] as compared to the results of El-Kateb and Abdul-Hamid (1991) [El-Kateb, A.H., Abdul-Hamid, A.S., 1991. Photon attenuation coefficient study of some materials containing hydrogen, carbon, and oxygen. Appl. Rad. Isot. 42, 303-307]
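The mixture rule says a compound's μ/ρ is the mass-fraction-weighted sum of its elements' values, so three compounds of known composition give a 3×3 linear system whose solution yields μ/ρ for H, C and O simultaneously. A sketch of the inversion, using a self-consistency check with made-up elemental values rather than measured data:

```python
import numpy as np

ATOMIC_MASS = {"H": 1.008, "C": 12.011, "O": 15.999}

def mass_fractions(n_h, n_c, n_o):
    """Mass fractions (H, C, O) for a molecule with the given atom counts."""
    m = np.array([n_h * ATOMIC_MASS["H"],
                  n_c * ATOMIC_MASS["C"],
                  n_o * ATOMIC_MASS["O"]])
    return m / m.sum()

# Rows: acetone C3H6O, ethanol C2H6O, 1-propanol C3H8O
W = np.array([mass_fractions(6, 3, 1),
              mass_fractions(6, 2, 1),
              mass_fractions(8, 3, 1)])

# Forward mixture rule with illustrative (not measured) elemental values,
# then invert the 3x3 system to recover them - the essence of the method.
mu_elements = np.array([0.15, 0.06, 0.06])   # hypothetical mu/rho of H, C, O
mu_compounds = W @ mu_elements               # what one would "measure"
recovered = np.linalg.solve(W, mu_compounds)
```

The inversion works because the three solvents have linearly independent H/C/O mass-fraction vectors; a poorly conditioned choice of compounds would amplify the measurement errors.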

Radionuclide concentrations of a number of elements (Am, Pu, U, Pa, Th, Ac, Ra, Po, Pb, Cs, and Sr) have been measured in the water and sediments of a group of alkaline lakes in the western USA. These data demonstrate greatly enhanced soluble phase concentrations of elements with oxidation states of III, IV, V, and VI as the result of carbonate complexing. Dissolved concentrations of isotopes of U, Pa, and Th in a lake with pH = 10 and a total inorganic carbon concentration of 4 × 10⁻¹ moles/l were greater than those in sea water (pH = 8, ΣCO₂ = 2 × 10⁻³ moles/l) by orders of magnitude for ²³³U, ²³⁸U (~10²), ²³¹Pa, ²²⁸Th, ²³⁰Th (~10³) and ²³²Th (~10⁵). Concentrations of fallout ²³⁹,²⁴⁰Pu in the more alkaline lakes were equivalent to effective distribution coefficients of ~10³, about a factor of 10² lower than in most other natural lakes, rivers, estuaries and coastal marine waters. Measurements of radionuclides in natural systems are essential for assessment of the likely fate of radionuclides which may be released from high level waste repositories to ground water. Laboratory-scale experiments using tracer additions of radionuclides to mixtures of water and sediment yielded distribution coefficients which were significantly different from those derived from field measurements (10¹-10² lower for Po and Pu). Order of magnitude calculations from thermodynamic data of expected maximum U and Th concentrations, limited by pure phase solubilities, suggest that carbonate complexing can enhance solubility by many orders of magnitude in natural waters, even at relatively low carbonate ion concentrations

The paper presents the method of heat transfer coefficient determination for boiling research during FC-72 flow in minichannels, each 1.7 mm deep, 24 mm wide and 360 mm long. The heating element was a thin foil, enhanced on the side which comes into contact with the fluid in the minichannels. Local values of the heat transfer coefficient were calculated from the Robin boundary condition. The foil temperature distribution and the derivative of the foil temperature were obtained by solving the two-dimensional inverse heat conduction problem, based on measurements obtained by infrared thermography (IRT). Calculations were carried out by a method based on approximating the solution of the problem with a linear combination of Trefftz functions. The basic property of these functions is that they satisfy the governing equation. The unknown coefficients of the linear combination of Trefftz functions are calculated by minimizing the functional that expresses the mean square error of the approximate solution on the boundary. The results, presented as IR thermographs, two-phase flow structure images and the heat transfer coefficient as a function of the distance from the channel inlet, were analyzed.

This book, the first printing of which was published as Volume 31 of the Encyclopaedia of Mathematical Sciences, contains a survey of the modern theory of general linear partial differential equations and a detailed review of equations with constant coefficients. Readers will be interested in an introduction to microlocal analysis and its applications including singular integral operators, pseudodifferential operators, Fourier integral operators and wavefronts, a survey of the most important results about the mixed problem for hyperbolic equations, a review of asymptotic methods including short wave asymptotics, the Maslov canonical operator and spectral asymptotics, a detailed description of the applications of distribution theory to partial differential equations with constant coefficients including numerous interesting special topics.

The behaviour of the weak solution of the Stokes problem on a polygon is considered, with emphasis on the maximal regularity of the solution and on global formulae for the coefficients of singularities. This regularity leads to a slowly convergent mixed finite element method of fractional order less than one, while the use of the above formulae provides better approximations for the solution and for the coefficients. (author). 32 refs

Unsteady solutions for the aerodynamic coefficients of a thin airfoil in compressible subsonic or supersonic flows are studied. The lift, the pitch moment, and pressure coefficients are obtained numerically for the following motions: the indicial response (unit step function) of the airfoil, i.e., a sudden change in the angle of attack; a thin airfoil penetrating into a sharp edge gust (for several gust speed ratios); a thin airfoil penetrating into a one-minus-cosine gust and sinusoidal gust...

This invention relates to nuclear equipment and more particularly to methods and apparatus for the non-destructive inspection, manipulation, disassembly and assembly of reactor fuel elements and the like. (author)

This book systematically introduces the research work on the Finite Element Method completed over the past 25 years. Original theoretical achievements and their applications in the fields of structural engineering and computational mechanics are discussed.

ABSTRACT: In this work, we have discussed what the Finite Element Method (FEM) is, its historical development, advantages and ... residual procedures, are examples of the direct approach ... The paper centred on the "stiffness and deflection of ...

A new edition of the leading textbook on the finite element method, incorporating major advancements and further applications in the field of electromagnetics. The finite element method (FEM) is a powerful simulation technique used to solve boundary-value problems in a variety of engineering circumstances. It has been widely used for analysis of electromagnetic fields in antennas, radar scattering, RF and microwave engineering, high-speed/high-frequency circuits, wireless communication, electromagnetic compatibility, photonics, remote sensing, biomedical engineering, and space exploration. The

1.1 This test method covers the determination of distribution coefficients of chemical species to quantify uptake onto solid materials by a batch sorption technique. It is a laboratory method primarily intended to assess sorption of dissolved ionic species subject to migration through pores and interstices of site-specific geomedia. It may also be applied to other materials such as manufactured adsorption media and construction materials. Application of the results to long-term field behavior is not addressed in this method. Distribution coefficients for radionuclides in selected geomedia are commonly determined for the purpose of assessing potential migratory behavior of contaminants in the subsurface of contaminated sites and waste disposal facilities. This test method is also applicable to parametric studies of the variables and mechanisms which contribute to the measured distribution coefficient. 1.2 The values stated in SI units are to be regarded as standard. No other units of measurement a...
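In a batch sorption experiment the distribution coefficient follows from a simple mass balance: whatever leaves solution is assumed sorbed, so Kd = (C0 − Ce)/Ce · V/m. A minimal sketch of this calculation (variable names and numbers are illustrative; the standard itself prescribes the full procedure):

```python
def distribution_coefficient(c_initial, c_equilibrium, volume_ml, mass_g):
    """Batch-sorption distribution coefficient Kd in mL/g.

    c_initial, c_equilibrium: solution concentrations before and after
    equilibration (any consistent units); the V/m factor converts the
    sorbed fraction into amount-sorbed-per-gram over amount-dissolved-per-mL.
    """
    sorbed_ratio = (c_initial - c_equilibrium) / c_equilibrium
    return sorbed_ratio * (volume_ml / mass_g)

# 100 -> 20 concentration units in 30 mL of solution over 1.5 g of geomedium
kd = distribution_coefficient(100.0, 20.0, 30.0, 1.5)
```

A blank (solid-free) control is still needed in practice, since losses to container walls would otherwise be counted as sorption.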

With elemental analysis we mean the determination of which chemical elements are present in a sample and of their concentration. This is an old and important problem in chemistry. The earliest methods were purely chemical and many such methods are still used. However, various methods based on physical principles have gradually become more and more important. One such method is neutron activation. When the sample is bombarded with neutrons it becomes radioactive, and the various radioactive isotopes produced can be identified by the radiation they emit. From the measured intensity of the radiation one can calculate how much of a certain element is present in the sample. Another possibility is to study the light emitted when the sample is excited in various ways. A spectroscopic investigation of the light can identify the chemical elements and also allows a determination of their concentration in the sample. In the same way, if a sample can be brought to emit X-rays, this radiation is also characteristic of the elements present and can be used to determine the elemental concentration. One such X-ray method which has been developed recently is PIXE. The name is an acronym for Particle Induced X-ray Emission and indicates the principle of the method. Particles in this context means heavy, charged particles such as protons and α-particles of rather high energy. Hence, in PIXE analysis the sample is irradiated in the beam of an accelerator and the emitted X-rays are studied. (author)

Addresses the needs of the computational mechanics research community in terms of information on boundary integral equation-based methods and techniques applied to a variety of fields. This book collects both original and review articles on contemporary Boundary Element Methods (BEM) as well as on the Mesh Reduction Methods (MRM).

The boundary element method provides an excellent platform for learning and teaching a computational method for solving problems in physical and engineering science. However, it is often left out in many undergraduate courses as its implementation is deemed to be difficult. This is partly due to the perception that coding the method requires…

This paper summarizes the mathematical basis of the finite element method. Attention is drawn to the natural development of the method from an engineering analysis tool into a general numerical analysis tool. A particular application to the stress analysis of rubber materials is presented. Special advantages and issues associated with the method are mentioned. (author). 4 refs., 3 figs

The elements of some ore samples (including primary ores in the mine) have been measured using a Si-PIN detector and a portable high-energy-resolution XRF analyzer with an embedded computer, constituting the intensity correction model of the influence coefficient method. Comparison with the results of chemical analysis leads to the following conclusions: the maximal relative error for Fe contents between 20% and 55% is not more than 4.74%; for Cu, it is not more than 24.70% for contents between 182 ppm and 2400 ppm, and not more than 7.46% for contents between 2400 ppm and 3600 ppm; for Zn, it is not more than 25.93% for contents between 556 ppm and 3200 ppm, and not more than 4.74% for contents between 3200 ppm and 21600 ppm; for Pb, it is not more than 23.80% for contents between 5900 ppm and 204200 ppm, and not more than 13.79% for contents between 204200 ppm and 511200 ppm. In this way, the model can effectively overcome the matrix effect of the base material components and effectively guide the work of mineral beneficiation in the mine. (authors)

A discontinuous Galerkin finite element method (DG-FEM) with a highly accurate time integration scheme is presented. The scheme achieves its high accuracy using numerically constructed predictor-corrector integration coefficients. Numerical results

We present a two-scale finite element method for solving Brinkman's equations with piecewise constant coefficients. This system of equations models fluid flows in highly porous, heterogeneous media with complex topology of the heterogeneities. We

The use of electrochemical methods for the study of complexing, the separation of rare element mixtures, their preparation in lower oxidation states, and also for the development of highly sensitive methods of element determination, is considered in the review. Voltammetric methods of Pt, Au and Re determination are considered, as well as the preparation of Re in oxidation states +5 and +3 by electrolytic methods. The possibility of using electrodialysis methods for the purification of insoluble compounds of rare earths (RE) from impurities, and for the separation of Re and Mo with simultaneous purification of Re from K and other elements, is shown. The application of high-frequency conductometry to analytical chemistry and to the study of Th, In and RE complexing and reaction kinetics is considered.

A new discrete elements (L_N) transport method is derived and compared to the discrete ordinates S_N method, theoretically and by numerical experimentation. The discrete elements method is more accurate than discrete ordinates and strongly ameliorates ray effects for the practical problems studied. The discrete elements method is shown to be more cost effective, in terms of execution time with comparable storage to attain the same accuracy, for a one-dimensional test case using linear characteristic spatial quadrature. In a two-dimensional test case, a vacuum duct in a shield, L_N is more consistently convergent toward a Monte Carlo benchmark solution than S_N, using step characteristic spatial quadrature. An analysis of the interaction of angular and spatial quadrature in xy-geometry indicates the desirability of using linear characteristic spatial quadrature with the L_N method.

This book presents practical applications of the finite element method to general differential equations. The underlying strategy of deriving the finite element solution is introduced using linear ordinary differential equations, thus allowing the basic concepts of the finite element solution to be introduced without being obscured by the additional mathematical detail required when applying this technique to partial differential equations. The author generalizes the presented approach to partial differential equations which include nonlinearities. The book also includes variations of the finite element method such as different classes of meshes and basis functions. Practical application of the theory is emphasised, with development of all concepts leading ultimately to a description of their computational implementation illustrated using Matlab functions. The target audience primarily comprises applied researchers and practitioners in engineering, but the book may also be beneficial for graduate students.
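
The one-dimensional strategy described above can be sketched in a few lines. The following is a minimal Python/NumPy analogue of that approach (not the book's own Matlab code); the mesh size, load, and homogeneous Dirichlet boundary conditions are illustrative choices:

```python
import numpy as np

def fem_1d(n, f):
    """Piecewise-linear finite elements for -u'' = f on (0, 1),
    with homogeneous Dirichlet conditions u(0) = u(1) = 0."""
    x = np.linspace(0.0, 1.0, n + 1)   # uniform mesh with n elements
    h = x[1] - x[0]
    # Assembled stiffness matrix for the interior "hat" basis functions.
    K = (np.diag(2.0 * np.ones(n - 1)) -
         np.diag(np.ones(n - 2), 1) -
         np.diag(np.ones(n - 2), -1)) / h
    b = h * f(x[1:-1])                 # load vector (exact for constant f)
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(K, b)
    return x, u

# For f = 1 the exact solution is u(x) = x(1 - x)/2; in 1D the nodal
# values of the FE solution reproduce it exactly:
x, u = fem_1d(20, lambda t: np.ones_like(t))
print(np.max(np.abs(u - x * (1.0 - x) / 2.0)))
```

The linear-ODE setting keeps the assembly transparent: the same loop structure carries over to variable coefficients and, with local element matrices, to 2D meshes.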

The use of radioactive indicators for the determination of the distribution coefficients of 46 elements on SnO2 in 0.1 N HNO3-acetone media is described. The determination has been carried out under static conditions: the labelled element solution has been agitated with SnO2 for two hours; the elements have been labelled with radioisotopes generally obtained by (n, γ) reaction, by irradiating a part of the salt used in the EL3 or OSIRIS reactor at the C.E.N. Saclay (France). Results show that the elements may be classified into several groups, according to their oxidation state. (T.I.)

Traditionally spectral methods in fluid dynamics were used in direct and large eddy simulations of turbulent flow in simply connected computational domains. The methods are now being applied to more complex geometries, and the spectral/hp element method, which incorporates both multi-domain spectral methods and high-order finite element methods, has been particularly successful. This book provides a comprehensive introduction to these methods. Written by leaders in the field, the book begins with a full explanation of fundamental concepts and implementation issues. It then illustrates how these methods can be applied to advection-diffusion and to incompressible and compressible Navier-Stokes equations. Drawing on both published and unpublished material, the book is an important resource for experienced researchers and for those new to the field.

A new technique is developed with an alternative formulation of the response matrix method implemented with the finite element scheme. Two types of response matrices are generated from the Galerkin solution to the weak form of the diffusion equation subject to an arbitrary current and source. The piecewise polynomials are defined at two levels, the first for the local (assembly) calculations and the second for the global (core) response matrix calculations. This finite element response matrix technique was tested on two 2-dimensional test problems, the 2D-IAEA benchmark problem and the Biblis benchmark problem, with satisfactory results. Although the current code is not extensively optimized, the computational time is of the same order as that of well-established coarse mesh codes. Furthermore, the application of the finite element technique in an alternative formulation of the response matrix method permits the method to easily incorporate additional capabilities such as the treatment of spatially dependent cross-sections, arbitrary geometrical configurations, and highly heterogeneous assemblies. (Author)

A completely boundary-free maximum principle for the first-order Boltzmann equation is derived from the completely boundary-free maximum principle for the mixed-parity Boltzmann equation. When continuity is imposed on the trial function for directions crossing interfaces the completely boundary-free principle for the first-order Boltzmann equation reduces to a maximum principle previously established directly from first principles and indirectly by the Euler-Lagrange method. Present finite element methods for the first-order Boltzmann equation are based on a weighted-residual method which permits the use of discontinuous trial functions. The new principle for the first-order equation can be used as a basis for finite-element methods with the same freedom from boundary conditions as those based on the weighted-residual method. The extremum principle as the parent of the variationally-derived weighted-residual equations ensures their good behaviour. (author)

A new method for the calculation of sensitivity coefficients is developed. The new method is a combination of two methodologies used for calculating these coefficients, namely the differential and the generalized perturbation theory methods. The method uses as integral parameter the average flux in an arbitrary region of the system. Thus, the sensitivity coefficient contains only the component corresponding to the neutron flux. To obtain the new sensitivity coefficient, the derivatives of the integral parameter, Φ, with respect to σ are calculated using the perturbation method, and the functional derivatives of this generic integral parameter with respect to σ and Φ are calculated using the differential method. (author)

A two-dimensional initial strain direct boundary element method is proposed to numerically model creep behaviour. The boundary of the body is discretized into quadratic elements and the domain into quadratic quadrilaterals. The variables are also assumed to have a quadratic variation over the elements. The boundary integral equation is solved for each boundary node and assembled into a matrix. This matrix is solved by Gauss elimination with partial pivoting to obtain the variables on the boundary and in the interior. Due to the time-dependent nature of creep, the solution has to be derived over increments of time. An automatic time incrementation technique and the backward Euler method for updating the variables are implemented to assure stability and accuracy of results. A flowchart of the solution strategy is also presented. (Author)
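
As a rough illustration of the implicit updating mentioned above, here is a minimal backward Euler step for a scalar Norton-type creep law under stress relaxation. The material constants, the Norton law itself, and the Newton solve are illustrative assumptions, not taken from the paper:

```python
# Stress relaxation at constant total strain: sigma = E*(eps_tot - eps_c),
# with creep rate eps_c' = A * sigma**n_exp (Norton law, hypothetical constants).
E = 200e3        # Young's modulus [MPa]
A = 3e-17        # Norton coefficient [1/(MPa^n h)]
n_exp = 5        # Norton exponent
eps_tot = 1e-3   # held total strain

def backward_euler_step(eps_c, dt, tol=1e-14):
    """Implicit step: solve x = eps_c + dt*A*(E*(eps_tot - x))**n_exp
    for the new creep strain x by Newton iteration."""
    x = eps_c                        # initial guess: previous value
    for _ in range(50):
        sigma = E * (eps_tot - x)
        r = x - eps_c - dt * A * sigma**n_exp              # residual
        dr = 1.0 + dt * A * n_exp * E * sigma**(n_exp - 1) # residual slope
        dx = r / dr
        x -= dx
        if abs(dx) < tol:
            break
    return x

eps_c, t, dt = 0.0, 0.0, 1.0        # time in hours
for _ in range(100):
    eps_c = backward_euler_step(eps_c, dt)
    t += dt
sigma = E * (eps_tot - eps_c)
print(f"stress after {t:.0f} h: {sigma:.1f} MPa")
```

The implicit update stays stable for large steps, which is why backward Euler is the usual choice when the creep rate varies strongly over a time increment.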

Purpose: To provide a remote-controlled replacing method for core-constituting elements in a liquid-metal-cooled fast breeder reactor, wherein, in particular, the core-constituting elements are prevented from being loaded at a core position other than the one designated. Constitution: The method comprises a first step which determines the position of a suitable neutron shielding body in order to measure a reference level of complete insertion of the core-constituting elements, a second step which inserts a gripper of a fuel exchanger, a third step which decides the stroke dimensions of complete insertion, and a fourth step which discriminates the core-constituting elements to begin handling of fuel rods. The method further comprises a fifth step which determines the loading position of a fuel rod, and a sixth step which inserts and loads fuel rods into the core. The method still further comprises a seventh step which compares the loading stroke dimension with the complete insertion stroke dimension, so that when they coincide, loading is completed, and when they do not coincide, loading is not completed and the cycle from the fourth step is repeated. (Kawakami, Y.)

regularization results make it possible to envisage a finite element resolution method. First, the Mumford-Shah functional is introduced and some existing results are quoted. Then, a discrete formulation for the Mumford-Shah problem is proposed and its $\Gamma$-convergence is proved. Finally, some...

This book explores finite element methods for incompressible flow problems: Stokes equations, stationary Navier-Stokes equations, and time-dependent Navier-Stokes equations. It focuses on numerical analysis, but also discusses the practical use of these methods and includes numerical illustrations. It also provides a comprehensive overview of analytical results for turbulence models. The proofs are presented step by step, allowing readers to more easily understand the analytical techniques.

Crack propagation simulation began with the development of the finite element method; the analyses were conducted to obtain a basic understanding of the crack growth. Today structural and materials engineers develop structures and materials properties using this technique. The aim of this paper is to verify the effect of different crack propagation rates on the determination of crack opening and closing stress of an ASTM specimen under a standard suspension spectrum loading from the FD&E SAE Keyhole Specimen Test Load Histories by finite element analysis. To understand the crack propagation processes under variable amplitude loading, retardation effects are observed

Polygonal finite element method (PFEM), which can construct shape functions on polygonal elements, provides greater flexibility in mesh generation. However, the non-polynomial form of traditional PFEM, such as the Wachspress method and the Mean Value method, leads to inexact numerical integration, since integration techniques for non-polynomial functions are immature. To overcome this shortcoming, a great number of integration points have to be used to obtain sufficiently exact results, which increases the computational cost. In this paper, a novel polygonal finite element method, called the virtual node method (VNM), is proposed. The features of the present method can be listed as follows: (1) it is a PFEM with polynomial form, so Hammer and Gauss integration can be used naturally to obtain exact numerical integration; (2) the shape functions of VNM satisfy all the requirements of the finite element method. To test the performance of VNM, intensive numerical tests are carried out. It is found that, in the standard patch test, VNM achieves significantly better results than the Wachspress method and the Mean Value method. Moreover, VNM achieves better results than triangular 3-node elements in the accuracy test.

A method to determine the sticking coefficient of precursor molecules used in atomic layer deposition (ALD) will be introduced. The sticking coefficient is an interesting quantity for comparing different ALD processes and reactors but it cannot be observed easily. The method relies on free molecular flow in nanoscale cylindrical holes. The sticking coefficient is determined for tetrakis(dimethylamino)titanium in combination with ozone. The proposed method can be applied independent of the type of reactor, precursor delivery system and precursors.

A mixed element approach using right hexahedral elements and right prism elements for the finite element-boundary integral method is presented and discussed for the study of planar cavity-backed antennas...

Soil aeration is an important factor for biological activity in the soil and soil respiration. Generally, gas exchange between soil and atmosphere is assumed to be governed by diffusion, and Fick's law is used to describe the fluxes in the soil. The "apparent soil gas diffusion coefficient" is the proportionality factor between the flux and the gas concentration gradient in the soil, and reflects the ability of the soil to passively transport gases. One common way to determine this coefficient is to take core samples in the field and determine it in the lab. Unfortunately this method is destructive, needs laborious field work, and can only reflect a small fraction of the whole soil. As a consequence, uncertainty about the resulting effective diffusivity at the profile scale remains. We developed a new in-situ method using a new gas sampling device, tracer gas and inverse soil gas modelling. The gas sampling device contains several sampling depths and can be easily installed into vertical auger holes, which allows for fast installation of the system. At the lower end of the device inert tracer gas is injected continuously and diffuses into the surrounding soil. The resulting distribution of tracer gas concentrations is used to deduce the diffusivity profile of the soil. For finite element modelling of the gas sampling device/soil system the program COMSOL is used. We will present the results of a field campaign comparing the new in-situ method with lab measurements on soil cores. The new sampling pole has several interesting advantages: it can be used in-situ and over a long time, so it allows following modifications of diffusion coefficients in interaction with rain, but also with the vegetation cycle and wind.
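
The proportionality described above is just Fick's first law. A toy estimate of the apparent diffusion coefficient from a known tracer flux and a measured concentration profile might look like the following; all numbers are hypothetical and the quasi-steady, one-dimensional setting is a deliberate simplification of the inverse modelling the abstract describes:

```python
import numpy as np

# Fick's first law: J = -D * dc/dz. Given the tracer injection rate per
# unit area and a quasi-steady concentration profile over the sampling
# depths, the apparent diffusion coefficient follows directly.
J = 2.0e-7                                  # tracer flux [mol m^-2 s^-1]
depths = np.array([0.1, 0.2, 0.3])          # sampling depths [m]
conc = np.array([1.8e-3, 1.2e-3, 0.6e-3])   # concentrations [mol m^-3]

slope = np.polyfit(depths, conc, 1)[0]      # least-squares gradient dc/dz
D = -J / slope                              # apparent diffusion coefficient
print(f"apparent D = {D:.2e} m^2/s")
```

In the real method, a full finite element forward model replaces this linear fit, so that layered diffusivity profiles and the device geometry can be resolved.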

This book serves as a text for one- or two-semester courses for upper-level undergraduates and beginning graduate students and as a professional reference for people who want to solve partial differential equations (PDEs) using finite element methods. The author has attempted to introduce every concept in the simplest possible setting and maintain a level of treatment that is as rigorous as possible without being unnecessarily abstract. Quite a lot of attention is given to discontinuous finite elements, characteristic finite elements, and to the applications in fluid and solid mechanics including applications to porous media flow, and applications to semiconductor modeling. An extensive set of exercises and references in each chapter are provided.

Finite Element Method in Machining Processes provides a concise study of the way the Finite Element Method (FEM) is used in the case of manufacturing processes, primarily in machining. The basics of this kind of modeling are detailed to create a reference that will provide guidelines for those who start to study this method now, but also for scientists already involved in FEM and who want to expand their research. A discussion on FEM, formulations and techniques currently in use is followed up by machining case studies. Orthogonal cutting, oblique cutting, 3D simulations for turning and milling, grinding, and state-of-the-art topics such as high speed machining and micromachining are explained with relevant examples. This is all supported by a literature review and a reference list for further study. As FEM is a key method for researchers in the manufacturing and especially the machining sector, Finite Element Method in Machining Processes is a key reference for students studying manufacturing processes but al...

The intraclass correlation coefficient (ICC) with fixed raters or, equivalently, the concordance correlation coefficient (CCC) for continuous outcomes is a widely accepted aggregate index of agreement in settings with a small number of raters. Quantifying the precision of the CCC by constructing its confidence interval (CI) is important in early drug development applications, in particular in qualification of biomarker platforms. In recent years, several new methods have been proposed for construction of CIs for the CCC, but their comprehensive comparison has not been attempted. The methods consisted of the delta method and jackknifing with and without Fisher's Z-transformation, respectively, and Bayesian methods with vague priors. In this study, we carried out a simulation study, with data simulated from a multivariate normal as well as a heavier-tailed distribution (t-distribution with 5 degrees of freedom), to compare the state-of-the-art methods for assigning a CI to the CCC. When the data are normally distributed, the jackknifing with Fisher's Z-transformation (JZ) tended to provide superior coverage, and the difference between it and the closest competitor, the Bayesian method with the Jeffreys prior, was in general minimal. For the nonnormal data, the jackknife methods, especially the JZ method, provided the coverage probabilities closest to the nominal, in contrast to the others, which yielded overly liberal coverage. Approaches based upon the delta method and the Bayesian method with conjugate prior generally provided slightly narrower intervals and larger lower bounds than others, though this was offset by their poor coverage. Finally, we illustrated the utility of the CIs for the CCC in an example of a wake after sleep onset (WASO) biomarker, which is frequently used in clinical sleep studies of drugs for treatment of insomnia.
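
As a sketch of the JZ approach compared above, the following computes Lin's CCC for two raters and a jackknife confidence interval on Fisher's Z scale. The synthetic data and the specific pseudo-value construction are illustrative assumptions, not the authors' code:

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient for two raters."""
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

def ccc_ci_jackknife_z(x, y, z=1.96):
    """Jackknife CI for the CCC computed on Fisher's Z scale (JZ method)."""
    n = len(x)
    idx = np.arange(n)
    # Leave-one-out estimates, transformed with arctanh (Fisher's Z):
    zs = np.array([np.arctanh(ccc(x[idx != i], y[idx != i])) for i in range(n)])
    full = np.arctanh(ccc(x, y))
    ps = n * full - (n - 1) * zs                 # jackknife pseudo-values
    m, se = ps.mean(), ps.std(ddof=1) / np.sqrt(n)
    # Back-transform the interval endpoints to the CCC scale:
    return np.tanh(m - z * se), np.tanh(m + z * se)

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = x + rng.normal(scale=0.3, size=50)           # a second, noisy "rater"
lo, hi = ccc_ci_jackknife_z(x, y)
print(round(ccc(x, y), 3), (round(lo, 3), round(hi, 3)))
```

Working on the Z scale keeps the interval inside (-1, 1) and makes the sampling distribution closer to normal, which is what drives the JZ method's better coverage in the simulations.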

A method to evaluate chemical element concentrations in samples by generating an effective polychromatic beam using real monochromatic beam data as initial input is presented. There is a great diversity of research being conducted at synchrotron facilities around the world and a diverse set of beamlines to accommodate this research. Time is a precious commodity at synchrotron facilities; therefore, methods that can maximize the time spent collecting data are of value. At the same time the incident radiation spectrum, necessary for some research, may not be known on a given beamline. A preliminary presentation is given here of a method, applicable to X-ray fluorescence spectroscopic analyses, that overcomes the lack of information about the incident beam spectrum and addresses both of these concerns. The method is equally applicable to other X-ray sources so long as local conditions are considered. It relies on replacing the polychromatic spectrum in a standard fundamental parameters analysis with a set of effective monochromatic photon beams. A beam is associated with each element and can be described by an analytical function allowing extension to elements not included in the necessary calibration measurement(s)

The literature presents many methods for the partitioning of a data base, and it is difficult to choose which is the most suitable, since the various combinations of methods based on different measures of dissimilarity can lead to different patterns of grouping and false interpretations. Nevertheless, little effort has been expended in evaluating these methods empirically using an archaeological data base. The objective of this work is therefore to make a comparative study of the different cluster analysis methods and identify which is the most appropriate. For this, the study was carried out using a data base of the Archaeometric Studies Group from IPEN-CNEN/SP, in which 45 samples of ceramic fragments from three archaeological sites were analyzed by instrumental neutron activation analysis (INAA) to determine the mass fractions of 13 elements (As, Ce, Cr, Eu, Fe, Hf, La, Na, Nd, Sc, Sm, Th, U). The methods used for this study were: single linkage, complete linkage, average linkage, centroid and Ward. The validation was done using the cophenetic correlation coefficient, and on comparing these values the average linkage method obtained the best results. A script for the statistical program R with some functions was created to obtain the cophenetic correlation. By means of these values it was possible to choose the most appropriate method for the data base. (author)
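
The comparison described above (the authors used R) can be sketched in Python with SciPy: build each hierarchical clustering and score it by its cophenetic correlation with the original dissimilarities. The mock data below merely stands in for the 45 × 13 INAA mass-fraction table:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import pdist

# Mock compositional data: three well-separated "sites", 15 fragments each,
# 13 element concentrations per fragment (random stand-in for the INAA data).
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(loc=c, scale=0.3, size=(15, 13))
                  for c in (0.0, 2.0, 4.0)])

d = pdist(data)  # condensed Euclidean dissimilarity matrix
# Score each linkage strategy by its cophenetic correlation coefficient:
for method in ("single", "complete", "average", "centroid", "ward"):
    c, _ = cophenet(linkage(d, method=method), d)
    print(f"{method:>8}: {c:.3f}")
```

The method whose cophenetic distances correlate best with the original dissimilarities distorts the data least, which is the selection criterion the paper applies.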

This paper proposes an online identification method for the regional frequency deviation coefficient, based on an analysis of the AGC adjustment response mechanism of the regional frequency deviation coefficient in interconnected grids and on the real-time operating state of generators measured online through PMU data. It analyzes the optimization of the regional frequency deviation coefficient for the actual operating state of the power system, achieving more accurate and efficient automatic generation control. The validity of the online identification method is verified by establishing a long-term frequency control simulation model of a two-region interconnected power system.

A new method for global reactor core calculations is described. This method is based on a unique formulation of the response matrix method, implemented with a higher order finite element method. The unique aspects of this approach are twofold. First, there are two levels to the overall calculational scheme: the local or assembly level and the global or core level. Second, the response matrix scheme, which is formulated at both levels, consists of two separate response matrices rather than one response matrix as is generally the case. These separate response matrices are seen to be quite beneficial for the criticality eigenvalue calculation, because they are independent of k_eff. The response matrices are generated from a Galerkin finite element solution to the weak form of the diffusion equation, subject to an arbitrary incoming current and an arbitrary distributed source. Calculational results are reported for two test problems, the two-dimensional International Atomic Energy Agency benchmark problem and a two-dimensional pressurized water reactor test problem (Biblis reactor), and they compare well with standard coarse mesh methods with respect to accuracy and efficiency. Moreover, the accuracy (and capability) is comparable to fine mesh for a fraction of the computational cost. Extension of the method to treat heterogeneous assemblies and spatial depletion effects is discussed.

The Finite Element Method in One Dimension. Further Applications in One Dimension. High-Order and Spectral Elements in One Dimension. The Finite Element Method in Two Dimensions. Quadratic and Spectral Elements in Two Dimensions. Applications in Mechanics. Viscous Flow. Finite and Spectral Element Methods in Three Dimensions. Appendices. References. Index.

In the last couple of decades the Boundary Element Method (BEM) has become a well-established technique that is widely used for solving various problems in electrical engineering and electromagnetics. Although there are many excellent research papers published in the relevant literature that describe various BEM applications in electrical engineering and electromagnetics, there has been a lack of suitable textbooks and monographs on the subject. This book presents BEM in a simple fashion in order to help the beginner to understand the very basic principles of the method. It initially derives B

We present an accurate fast method for the computation of potential internal axisymmetric flow based on the boundary element technique. We prove that the computed velocity field asymptotically satisfies reasonable boundary conditions at infinity for various types of inlet/exit. Computation of internal axisymmetric potential flow is an essential ingredient in the three-dimensional problem of computation of velocity fields in turbomachines. We include the results of a practical application of the method to the computation of flow in turbomachines of Kaplan and Francis types.

The current chapter presents the blade element momentum (BEM) method. The BEM method for a steady uniform inflow is presented in a first section. Some of the ad-hoc corrections that are usually added to the algorithm are discussed in a second section. An exception is made for the tip-loss correction, which is introduced early in the algorithm formulation for practical reasons. The ad-hoc corrections presented are: the tip-loss correction, the high-thrust correction (momentum breakdown) and the correction for wake rotation. The formulation of an unsteady BEM code is given in a third section...
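
The tip-loss correction singled out above is commonly taken as Prandtl's factor F, which multiplies the momentum terms in the BEM equations. A minimal sketch, with illustrative blade count and inflow angle (not values from the chapter):

```python
import math

def prandtl_tip_loss(B, r, R, phi):
    """Prandtl tip-loss factor F as used in BEM codes.

    B: number of blades, r: local radius, R: tip radius,
    phi: local inflow angle [rad]."""
    f = B * (R - r) / (2.0 * r * math.sin(phi))
    return (2.0 / math.pi) * math.acos(math.exp(-f))

# F tends to 1 inboard and drops toward 0 at the tip:
for r in (0.5, 0.8, 0.95, 0.999):
    print(f"r/R = {r:.3f}: F = {prandtl_tip_loss(3, r, 1.0, math.radians(7)):.3f}")
```

Because F appears inside the induction-factor equations, it is evaluated at every radial station on each iteration of the BEM loop, which is why the chapter introduces it early in the algorithm formulation.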

A method and instrument is provided which allows quick and accurate measurement of the coefficient of performance of an installed electrically powered heat pump, including auxiliary resistance heaters. Temperature-sensitive resistors are placed in the return and supply air ducts to measure the temperature increase of the air across the refrigerant and resistive-heating elements of the system. The voltages across the resistors, which are directly proportional to the respective duct temperatures, are applied to the inputs of a differential amplifier so that its output voltage is proportional to the temperature difference across the unit. A voltage-to-frequency converter connected to the output of the differential amplifier converts the voltage signal to a proportional-frequency signal. A digital watt meter is used to measure the power to the unit and produces a signal having a frequency proportional to the input power. A digital logic circuit ratios the temperature difference signal and the electric power input signal in a unique manner to produce a single number which is the coefficient of performance of the unit over the test interval. The digital logic and an in-situ calibration procedure enable the instrument to make these measurements in such a way that the ratio of heat flow to power input is obtained without computations. No specialized knowledge of thermodynamics or electronics is required to operate the instrument.
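
The ratio the instrument forms in hardware can be written out explicitly. The sketch below computes COP = heat delivered to the duct air / electrical input over a test interval; the air mass flow, specific heat, and all sampled signal values are hypothetical stand-ins for what the instrument measures:

```python
# COP over a test interval: average duct heat flow / average electrical input.
cp_air = 1006.0   # J/(kg K), specific heat of air
m_dot = 0.45      # kg/s, duct mass flow (assumed known from calibration)

# Sampled signals over the interval (made-up numbers):
delta_T = [8.1, 8.3, 8.2, 8.4]               # K, supply minus return temperature
power_in = [1500.0, 1520.0, 1510.0, 1530.0]  # W, electrical input

heat = sum(cp_air * m_dot * dT for dT in delta_T) / len(delta_T)  # W delivered
power = sum(power_in) / len(power_in)                             # W consumed
cop = heat / power
print(f"COP ≈ {cop:.2f}")
```

The instrument's trick is that both quantities are encoded as frequencies, so counting pulses over the interval performs this averaging and division implicitly, with the calibration absorbing the cp·ṁ factor.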

The rational use of composites as structural materials that carry thermal and mechanical loads is to a large extent determined by their thermoelastic properties. From the presented review of works devoted to the analysis of the thermoelastic characteristics of composites, it follows that the problem of estimating these characteristics is important. Among the thermoelastic properties of a composite, its temperature coefficient of linear expansion occupies an important place. Along with fiber composites, dispersion-hardened composites are widely used in engineering; in these, the role of inclusions is played by particles of high-strength and high-modulus materials, including nanostructured elements. Typically, the dispersed particles have similar dimensions in all directions, which allows the particle shape to be treated, in a first approximation, as a sphere. In this article, for a composite with isotropic spherical inclusions of a plurality of different materials, design formulas are derived relating the temperature coefficient of linear expansion to the volume concentration of inclusions and their thermoelastic characteristics, as well as to the thermoelastic properties of the composite matrix. A feature of the method is that it accounts for the thermomechanical interaction of a single inclusion or matrix particle with a homogeneous isotropic medium having the desired temperature coefficient of linear expansion. Averaging over the volume of the composite the perturbations of strain and stress arising from such interaction in the inclusions and the matrix particles makes it possible to obtain the calculation formulas. To validate the calculated temperature coefficient of linear expansion for a composite of this type, two-sided estimates are used that are based on the dual variational formulation of the linear thermoelasticity problem in an inhomogeneous solid, containing two alternative functionals (of the Lagrange and Castigliano type).

One of the research activities supporting the commercial radioisotope production program is safety research on the irradiation of FPM (Fission Product Molybdenum) targets. An FPM target takes the form of a stainless steel tube on which layers of high-enriched uranium are superimposed. The FPM tube is irradiated to obtain fission products, which are widely used in the form of kits in nuclear medicine. Irradiating FPM tubes in the reactor core, however, can disturb core performance; one of the disturbances comes from changes in flux or reactivity. It is therefore necessary to study a method for calculating safety margins as the core configuration changes during the life of the reactor, which makes a faster code an absolute necessity. With the perturbation method, the neutron safety margin of the research reactor can be re-evaluated without repeating the full reactivity calculation, which is the advantage of this approach. The criticality and flux in a multigroup diffusion model were calculated at various irradiation positions for several uranium contents. This model involves complex computation. Several parallel algorithms with iterative methods have been developed for solving the resulting large sparse matrix systems. The Black-Red Gauss-Seidel iteration and the parallel power iteration method can be used to solve the multigroup diffusion equation system and to calculate the criticality and the reactivity coefficient. In this research, a code for reactivity calculation, used in one part of the safety analysis, was developed with parallel processing. The calculation can be done more quickly and efficiently by exploiting parallel processing on a multicore computer. This code was applied to the calculation of safety limits for irradiated FPM targets with increasing uranium content.

A variational treatment of the finite element method for neutron transport is given, based on a version of the even-parity Boltzmann equation which does not assume that the differential scattering cross-section has a spherical harmonic expansion. The theory of minimum and maximum principles is based on the Cauchy-Schwarz inequality and the properties of a leakage operator G and a removal operator C. For systems with extraneous sources, two maximum principles and one minimum principle are given in boundary-free form, to ease finite element computations. The global error of an approximate variational solution is given, the relationship of one of the maximum principles to the method of least squares is shown, and the way in which approximate solutions converge locally to the exact solution is established. A method for constructing local error bounds is given, based on the connection between the variational method and the method of the hypercircle. The source iteration technique and a maximum principle for a system with extraneous sources suggest a functional for a variational principle for a self-sustaining system. The principle gives, as a consequence of the properties of G and C, an upper bound to the lowest eigenvalue. A related functional can be used to determine both upper and lower bounds for the lowest eigenvalue from an inspection of any approximate solution for the lowest eigenfunction. The basis for the finite element is presented in a general form so that two modes of exploitation can be undertaken readily. The model can be in phase space, with positional and directional co-ordinates defining points of the model, or it can be restricted to the positional co-ordinates with an expansion in orthogonal functions used for the directional co-ordinates. Suitable sets of functions are spherical harmonics and Walsh functions. The latter set is appropriate if a discrete-direction representation of the angular flux is required. (author)

A nuclear fuel element assembling method and apparatus is preferably operable under a programmed control unit to receive fuel rods from storage, arrange them into axially aligned stacks of closely monitored length, and transfer the stacks of fuel rods to a loading device for insertion into longitudinal passages in the fuel elements. In order to handle large numbers of one or more classifications of fuel rods or other cylindrical parts, the assembling apparatus includes at least two feed troughs, each formed by a pair of screw members, with a movable table having a plurality of stacking troughs for alignment with the feed troughs and with a conveyor for delivering the stacks to the loading device, the fuel rods being moved along the stacking troughs upon a fluid cushion. 23 claims, 6 figures

A method is described for detecting a fuel element failure in a liquid-sodium-cooled fast breeder reactor consisting of equilibrating a sample of the coolant with a molten salt consisting of a mixture of barium iodide and strontium iodide (or other iodides) whereby a large fraction of any radioactive iodine present in the liquid sodium coolant exchanges with the iodine present in the salt; separating the molten salt and sodium; if necessary, equilibrating the molten salt with nonradioactive sodium and separating the molten salt and sodium; and monitoring the molten salt for the presence of iodine, the presence of iodine indicating that the cladding of a fuel element has failed. (U.S.)

In this paper, we propose oversampling strategies in the generalized multiscale finite element method (GMsFEM) framework. The GMsFEM, which has been recently introduced in Efendiev et al. (2013b) [Generalized Multiscale Finite Element Methods, J. Comput. Phys., vol. 251, pp. 116-135, 2013], allows solving multiscale parameter-dependent problems at a reduced computational cost by constructing a reduced-order representation of the solution on a coarse grid. The main idea of the method consists of (1) the construction of the snapshot space, (2) the construction of the offline space, and (3) the construction of the online space (the latter for parameter-dependent problems). In Efendiev et al. (2013b), it was shown that the GMsFEM provides a flexible tool to solve multiscale problems with a complex input space by generating appropriate snapshot, offline, and online spaces. In this paper, we develop oversampling techniques to be used in this context (see Hou and Wu (1997), where oversampling was introduced for multiscale finite element methods). It is known (see Hou and Wu (1997)) that oversampling can improve the accuracy of multiscale methods. In particular, the oversampling technique uses larger regions (larger than the target coarse block) in constructing local basis functions. Our motivation stems from the analysis presented in this paper, which shows that when using oversampling techniques in the construction of the snapshot space and offline space, the GMsFEM will converge independently of the small scales and high contrast under certain assumptions. We consider the use of multiple eigenvalue problems to improve the convergence and discuss their relation to single spectral problems that use oversampled regions. The oversampling procedures proposed in this paper differ from those in Hou and Wu (1997). In particular, the oversampling domains are only partially used in constructing local basis functions.

The study of mathematical models applied to wind turbine design, principally for electrical energy generation, has become significant in recent years due to the increasing use of renewable energy sources with low environmental impact. This paper presents an alternative mathematical scheme for wind turbine design based on Blade Element Momentum (BEM) theory. The results of the BEM method are strongly dependent on the precision of the lift and drag coefficients. The basic BEM method assumes the blade can be analyzed as a number of independent elements in the spanwise direction. The induced velocity at each element is determined by performing the momentum balance for a control volume containing the blade element. The aerodynamic forces on the element are calculated using the lift and drag coefficients from empirical two-dimensional wind tunnel test data at the geometric angle of attack (AOA) of the blade element relative to the local flow velocity.
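The element-wise momentum balance described above is usually solved by fixed-point iteration on the axial and tangential induction factors. The following is a generic sketch of that loop, not this paper's scheme: the placeholder polars (cl = 2πα, constant cd) stand in for the real wind-tunnel data, the rotor parameters are invented for illustration, and tip-loss and high-thrust corrections are omitted:

```python
import math

def bem_element(r, B, chord, twist, V0, omega,
                cl=lambda al: 2.0 * math.pi * al, cd=lambda al: 0.01,
                tol=1e-10, max_iter=1000):
    """Fixed-point iteration for the axial (a) and tangential (ap)
    induction factors of one blade element.  cl/cd are 2-D airfoil
    polars; the defaults are flat-plate-like placeholders, not real
    wind-tunnel data.  Tip-loss and high-thrust corrections omitted."""
    sigma = B * chord / (2.0 * math.pi * r)                   # local solidity
    a, ap = 0.0, 0.0
    for _ in range(max_iter):
        phi = math.atan2((1 - a) * V0, (1 + ap) * omega * r)  # inflow angle
        alpha = phi - twist                                   # angle of attack
        cn = cl(alpha) * math.cos(phi) + cd(alpha) * math.sin(phi)
        ct = cl(alpha) * math.sin(phi) - cd(alpha) * math.cos(phi)
        a_new = 1.0 / (4.0 * math.sin(phi) ** 2 / (sigma * cn) + 1.0)
        ap_new = 1.0 / (4.0 * math.sin(phi) * math.cos(phi) / (sigma * ct) - 1.0)
        if abs(a_new - a) < tol and abs(ap_new - ap) < tol:
            break
        a, ap = 0.5 * (a + a_new), 0.5 * (ap + ap_new)        # relaxation
    return a, ap, phi

# Mid-span element of a hypothetical 3-bladed rotor
a, ap, phi = bem_element(r=20.0, B=3, chord=1.5,
                         twist=math.radians(5), V0=8.0, omega=2.0)
```

For these illustrative values the iteration settles at a ≈ 0.14, well inside the range where simple momentum theory remains valid; above roughly a = 0.4 the high-thrust correction mentioned in the text becomes necessary.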

The conventional neoclassical moment method in the banana regime is improved by increasing the accuracy of the approximation to the linearized Fokker-Planck collision operator. This improved method is formulated for a multiple-ion plasma in general tokamak equilibria. Explicit computation in a model magnetic field shows that the neoclassical transport coefficients can be accurately calculated over the full range of aspect ratio by the improved method. Some of the neoclassical transport coefficients at intermediate aspect ratio are found to deviate appreciably from those obtained by the conventional moment method; the differences between the transport coefficients computed with the two methods are up to about 20%.

There are many interesting methods that can be utilized to construct special solutions of nonlinear differential equations with constant coefficients. However, most of these methods are not applicable to nonlinear differential equations with variable coefficients. A new method is presented in this Letter which can be used to find special solutions of nonlinear differential equations with variable coefficients. This method is based on seeking an appropriate Bernoulli equation corresponding to the equation studied. Many well-known equations are chosen to illustrate the application of this method.
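The Bernoulli building block the Letter relies on can be made concrete. For the example equation y' + y/x = x·y² (my example, not one from the Letter), the substitution v = 1/y turns it into the linear equation v' - v/x = -x, whose general solution is v = C·x - x², hence y = 1/(C·x - x²). The sketch below verifies this numerically:

```python
def y(x, C):
    """Solution of y' + y/x = x*y**2 obtained via the Bernoulli
    substitution v = 1/y, which turns the equation into the linear
    problem v' - v/x = -x with general solution v = C*x - x**2."""
    return 1.0 / (C * x - x * x)

def residual(x, C, h=1e-6):
    """Left-hand side y' + y/x - x*y**2, with y' by central difference;
    it should vanish (up to finite-difference error) for any C."""
    dydx = (y(x + h, C) - y(x - h, C)) / (2.0 * h)
    return dydx + y(x, C) / x - x * y(x, C) ** 2

print(max(abs(residual(x, C=3.0)) for x in (0.5, 1.0, 1.5, 2.0)))
```

The same substitution v = y^(1-n) linearizes any Bernoulli equation y' + p(x)y = q(x)yⁿ, which is what makes "seeking an appropriate Bernoulli equation" a solution-generating strategy.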

The ferritizing action of tin in an 18-10 stainless steel has been measured by two different methods. The first is based on the diffusion couple method and the graphical representation of compositions in an α / α+γ / γ diagram corresponding to the ferrite- and austenite-forming elements of the steel. In the second method, ferrite formation is analyzed in small ingots prepared with different chromium and tin concentrations. The ferrite coefficient of tin relative to chromium is 0.25 with diffusion couples, and this value is in good agreement with the classical method.

A new method for the calculation of sensitivity coefficients is developed. The new method is a combination of two methodologies used for calculating these coefficients: the differential method and the generalized perturbation theory method. The proposed method utilizes as the integral parameter the average flux in an arbitrary region of the system; thus, the sensitivity coefficient contains only the component corresponding to the neutron flux. To obtain the new sensitivity coefficient, the derivatives of the integral parameter, φ(ξ), with respect to σ are calculated using the perturbation method, and the functional derivatives of this generic integral parameter with respect to σ and φ are calculated using the differential method. The new method merges the advantages of the differential and generalized perturbation theory methods and eliminates their disadvantages. (author)

This work deals with CERMET fuels, chosen for their good behaviour under irradiation and their high thermal conductivity. The kinetic coefficients have been studied in particular. Comparisons have been made with solutions using other composite fuels, in particular solid solutions and the ROX solution. Since core control requires a heterogeneous assembly, we propose an assembly whose characteristics are compared with those of the APA reference. (O.M.)

The application of the boundary element method in numerical analysis is based upon the use of boundary integral operators stemming from multiple layer potentials. The regularity properties of these operators are vital in the development of boundary integral equations and error estimates. We show...

The Finite Element Method (FEM) is a powerful numerical tool that is used in a large number of engineering applications. The FEM is constructed on triangular/tetrahedral and quadrilateral/hexahedral meshes. Extending the FEM to general polygonal/polyhedral meshes in a straightforward way turns out to be extremely difficult and leads to very complex and computationally expensive schemes. The reason for this failure is that the construction of the basis functions on elements with a very general shape is a non-trivial and complex task. In this project we developed a new family of numerical methods, dubbed the Virtual Element Method (VEM), for the numerical approximation of partial differential equations (PDE) of elliptic type, suitable for polygonal and polyhedral unstructured meshes. We successfully formulated, implemented and tested these methods and studied both theoretically and numerically their stability, robustness and accuracy for diffusion problems, convection-reaction-diffusion problems, the Stokes equations and the biharmonic equations.

Friction and diffusion coefficients can be derived simply by combining statistical arguments with the Feynman path integral method. A transport equation for Feynman's influence functional is obtained, and transport coefficients are deduced from it. The expressions are discussed in the limits of weak, and of strong coupling. (Auth.)

In this paper, we consider a second-order neutron diffusion problem with coefficients in L∞(Ω). A nodal method of the lowest order is applied to approximate the problem's solution. The approximation uses special basis functions in which the coefficients appear. The rate of convergence obtained is O(h²) in L²(Ω), with a free rectangular triangulation. (authors)

A linear diffusion model serves as the basis for determination of an effective radon diffusion coefficient in concrete. The coefficient was needed to later allow quantitative prediction of radon accumulation within and behind concrete walls after application of an impervious radon barrier. A resolution of certain discrepancies noted in the literature in the use of an effective diffusion coefficient to model diffusion of a radioactive gas through a porous medium is suggested. An outline of factors expected to affect the concrete physical structure and the effective diffusion coefficient of radon through it is also presented. Finally, a field method for evaluating effective radon diffusion coefficients in concrete is proposed and results of measurements performed on a concrete foundation wall are compared with similar published values of gas diffusion coefficients in concrete. (author)
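For orientation only: in the 1-D steady-state version of such a linear model, the radon concentration in a thick slab decays over the diffusion length L = sqrt(D/λ). The sketch below uses an illustrative coefficient, not a value measured in this work:

```python
import math

LAMBDA_RN222 = math.log(2) / (3.82 * 24 * 3600)   # Rn-222 decay constant (1/s)

def diffusion_length(d_eff):
    """Diffusion length L = sqrt(D/lambda): the depth over which the
    radon concentration decays by a factor e in the 1-D steady-state
    linear diffusion model (semi-infinite slab)."""
    return math.sqrt(d_eff / LAMBDA_RN222)

def remaining_fraction(depth_m, d_eff):
    """Fraction of the surface concentration remaining at a given depth:
    exp(-depth / L) for the semi-infinite slab solution."""
    return math.exp(-depth_m / diffusion_length(d_eff))

# Illustrative effective diffusion coefficient of ~1e-8 m^2/s; real
# values for concrete vary widely with mix, curing and moisture content.
print(f"L = {diffusion_length(1e-8):.3f} m")  # → L = 0.069 m
```

A diffusion length of a few centimetres relative to a typical wall thickness is what makes the effective coefficient the controlling quantity for radon accumulation behind a barrier.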

A new method based on noise measurement was used to estimate the temperature reactivity coefficient of the PAKS-2 reactor during the entire fuel cycle. Based on the measurements it is possible to determine the dependence of the reactivity coefficient on boron concentration. Good agreement was found between the results obtained by the new method and by the conventional ones. Based on this method, new equipment can be developed which assures permanent measurements during operation. (author)

The linear attenuation coefficient values of regular and irregularly shaped flyash materials have been measured, without knowing the thickness of the sample, using a new technique, namely the 'two media method'. These values have also been measured with a standard gamma-ray transmission method and obtained theoretically with the WinXCOM computer code. From the comparison it is concluded that the two media method gives accurate results for the attenuation coefficients of flyash materials.

A simple method to calculate the homogenized diffusion coefficient for a lattice cell using Monte-Carlo techniques is demonstrated. The method relies on modelling a finite reactor volume to induce a curvature in the flux distribution, and then follows a large number of histories to obtain sufficient statistics for a meaningful result. The goal is to determine the diffusion coefficient with sufficient accuracy to test approximate methods built into deterministic lattice codes. Numerical results are given. (author). 4 refs., 8 figs
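A common deterministic baseline against which such Monte Carlo estimates are tested is the textbook relation D = 1/(3·Σtr). The helper below is that generic formula only, with illustrative cross sections; none of these numbers come from the paper:

```python
def diffusion_coefficient(sigma_t, sigma_s, mu_bar):
    """Textbook deterministic estimate of the diffusion coefficient:
    D = 1/(3*Sigma_tr), with Sigma_tr = Sigma_t - mu_bar*Sigma_s,
    where mu_bar is the average cosine of the scattering angle
    (for isotropic centre-of-mass scattering, mu_bar ~ 2/(3*A))."""
    sigma_tr = sigma_t - mu_bar * sigma_s
    return 1.0 / (3.0 * sigma_tr)

# Illustrative: Sigma_t = 0.40 /cm, Sigma_s = 0.35 /cm, A = 12 (graphite-like)
print(round(diffusion_coefficient(0.40, 0.35, 2.0 / 36.0), 4))  # → 0.8759
```

Comparing a Monte Carlo value against this kind of closed-form estimate is one way to check whether the statistics from the induced-curvature approach are sufficient.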

In intermediate and college algebra courses there are a number of methods for factoring quadratic trinomials with integer coefficients over the integers. Some of these methods have been given names, such as trial and error, reversing FOIL, AC method, middle term splitting method and slip and slide method. The purpose of this article is to discuss…
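As a concrete instance of the AC method named above, the sketch below factors a·x² + b·x + c over the integers by finding p, q with p·q = a·c and p + q = b, then splitting the middle term and factoring by grouping. The function name and return convention are mine, for illustration only:

```python
from math import gcd

def factor_trinomial(a, b, c):
    """AC method for a*x^2 + b*x + c with integer coefficients (c != 0):
    find p, q with p*q == a*c and p + q == b, split the middle term,
    and factor by grouping.  Returns ((a1, c1), (a2, c2)), meaning
    (a1*x + c1)*(a2*x + c2), or None if irreducible over the integers."""
    ac = a * c
    for p in range(-abs(ac), abs(ac) + 1):
        if p == 0 or ac % p != 0:
            continue
        q = ac // p
        if p + q != b:
            continue
        g1 = gcd(a, p)              # a*x^2 + p*x = g1*x*(a1*x + p1)
        a1, p1 = a // g1, p // g1
        g2 = c // p1                # q*x + c = g2*(a1*x + p1); exact since p1 | c
        return (a1, p1), (g1, g2)
    return None

# 6x^2 + 11x + 4: ac = 24, p + q = 11  ->  p = 3, q = 8
print(factor_trinomial(6, 11, 4))  # → ((2, 1), (3, 4))
```

That output reads as (2x + 1)(3x + 4); expanding gives 6x² + 8x + 3x + 4, recovering the split middle term.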

The Split Coefficient Matrix (SCM) finite difference method for solving hyperbolic systems of equations is presented. This new method is based on the mathematical theory of characteristics. The development of the method from characteristic theory is presented. Boundary point calculation procedures consistent with the SCM method used at interior points are explained. The split coefficient matrices that define the method for steady supersonic and unsteady inviscid flows are given for several examples. The SCM method is used to compute several flow fields to demonstrate its accuracy and versatility. The similarities and differences between the SCM method and the lambda-scheme are discussed.
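The characteristic-based splitting at the heart of the SCM approach can be illustrated on a constant-coefficient 2×2 system. The sketch below is my own minimal version, restricted to 2×2 matrices and built from spectral projectors rather than the paper's formulation; it separates A into the parts carrying positive and negative wave speeds:

```python
import math

def split_2x2(A):
    """Split-coefficient decomposition A = A_plus + A_minus for a 2x2
    system u_t + A u_x = 0 with real distinct eigenvalues (wave speeds).
    Built from the spectral projectors P_i = (A - lam_j*I)/(lam_i - lam_j):
    A_plus collects the positive speeds, A_minus the negative ones, so
    each part can be differenced in the upwind direction of its
    characteristics."""
    (a, b), (c, d) = A
    disc = math.sqrt((a - d) ** 2 + 4 * b * c)     # real for a hyperbolic system
    lam1, lam2 = (a + d + disc) / 2, (a + d - disc) / 2

    def part(clip):                                # clip keeps one sign of speed
        m1, m2 = clip(lam1), clip(lam2)
        s = lam1 - lam2
        # m1*P1 + m2*P2 written out entrywise
        return [[(m1 * (a - lam2) - m2 * (a - lam1)) / s, (m1 - m2) * b / s],
                [(m1 - m2) * c / s, (m1 * (d - lam2) - m2 * (d - lam1)) / s]]

    return part(lambda l: max(l, 0.0)), part(lambda l: min(l, 0.0))

# Linearized acoustics u_t + A u_x = 0 with wave speeds +-2
A = [[0.0, 2.0], [2.0, 0.0]]
Ap, Am = split_2x2(A)
print(Ap)  # → [[1.0, 1.0], [1.0, 1.0]]
print(Am)  # → [[-1.0, 1.0], [1.0, -1.0]]
```

Each part is then differenced in its own upwind direction, schematically u_j^{n+1} = u_j - (Δt/Δx)(A⁺(u_j - u_{j-1}) + A⁻(u_{j+1} - u_j)), which is the sense in which the method is "based on the mathematical theory of characteristics".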

This paper presents the residual stress behaviour under various values of friction coefficient and scratching displacement amplitude. The investigation is based on a numerical solution using the explicit finite element method in a quasi-static condition. Two different aeroengine materials, i.e. Super CMV (Cr-Mo-V) and titanium alloy (Ti-6Al-4V), are examined. The use of FEM analysis for a plate under normal contact is validated against the Hertzian theoretical solution in terms of contact pressure distributions. The residual stress distributions, along with the normal and shear stresses in the elastic and plastic regimes of the materials, are studied for a simple cylinder-on-flat contact configuration model subjected to normal loading, scratching and subsequent unloading. The investigated friction coefficients are 0.3, 0.6 and 0.9, while the scratching displacement amplitudes are 0.05 mm, 0.10 mm and 0.20 mm respectively. It is found that a friction coefficient of 0.6 results in higher residual stress for both materials. Meanwhile, the predicted residual stress is proportional to the scratching displacement amplitude, with higher displacement amplitude resulting in higher residual stress. Less residual stress is predicted for the Super CMV material compared to Ti-6Al-4V because of its high yield stress and ultimate strength. Super CMV with a friction coefficient of 0.3 and a scratching displacement amplitude of 0.10 mm is recommended for contact engineering applications due to its minimum possibility of fatigue.
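For reference, the Hertzian line-contact solution used for such validation has the closed form b = sqrt(4wR/(πE*)) for the contact half-width and p_max = 2w/(πb) for the peak pressure. The sketch below uses illustrative material values, not the paper's:

```python
import math

def hertz_line_contact(w, R, E1, nu1, E2, nu2):
    """Hertzian cylinder-on-flat line contact: the analytical pressure
    distribution FEM contact models are commonly validated against.

    w  -- normal load per unit length (N/m)
    R  -- cylinder radius (m)
    E, nu -- Young's moduli (Pa) and Poisson's ratios of the two bodies
    Returns (contact half-width b, peak contact pressure p_max)."""
    E_star = 1.0 / ((1 - nu1 ** 2) / E1 + (1 - nu2 ** 2) / E2)  # combined modulus
    b = math.sqrt(4.0 * w * R / (math.pi * E_star))             # half-width
    p_max = 2.0 * w / (math.pi * b)                             # peak pressure
    return b, p_max

# Steel-on-steel-like pair, purely illustrative numbers
b, p0 = hertz_line_contact(w=1e5, R=0.01, E1=210e9, nu1=0.3, E2=210e9, nu2=0.3)
```

For these values the half-width comes out at roughly 0.1 mm and the peak pressure on the order of 0.6 GPa; agreement of the FEM pressure profile with this semi-elliptical distribution is what the validation step checks before friction and scratching are added.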

In several situations in nuclear applications, such as soil physics and geology, knowledge of the gamma-ray linear attenuation coefficient of irregular samples is necessary. This work presents the validation of a methodology for the determination of the linear attenuation coefficient (μ) of irregularly shaped samples, in such a way that it is not necessary to know the thickness of the sample. With this methodology, irregular soil samples (undeformed field samples) from the Londrina region, north of Paraná, were studied. The two media method was employed for the determination of μ. It consists of determining μ through the measurement of the attenuation of a gamma-ray beam by the sample sequentially immersed in two different media with known and appropriately chosen attenuation coefficients. For comparison, the theoretical value of μ was calculated as the product of the mass attenuation coefficient, obtained with the WinXcom code, and the measured density of the sample. This software employs the chemical composition of the samples and supplies a table of mass attenuation coefficients versus photon energy. To verify the validity of the two media method, as compared with the simple gamma-ray transmission method, regular pumice stone samples were used. With these results for the attenuation coefficients and their respective deviations, it was possible to compare the two methods. We conclude that the two media method is a good tool for the determination of the linear attenuation coefficient of irregular materials, particularly in the study of soil samples. (author)
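The algebra that lets the unknown thickness drop out can be made explicit. Writing att_i = ln(I_medium,i / I_sample,i) = (μ - μ_i)·x for the sample immersed in medium i, the ratio of the two equations eliminates x. The sketch below uses synthetic numbers, not this work's measurements:

```python
def mu_two_media(att1, att2, mu1, mu2):
    """Two media method: the sample of unknown thickness x is measured
    immersed in media 1 and 2 with known attenuation coefficients mu1, mu2.
    Since att_i = ln(I_medium_i / I_sample_i) = (mu - mu_i) * x, dividing
    the two relations cancels x, giving
        mu = (mu2*att1 - mu1*att2) / (att1 - att2)."""
    return (mu2 * att1 - mu1 * att2) / (att1 - att2)

# Synthetic check: a mu = 0.50 /cm sample of x = 2 cm, measured in a
# water-like medium (mu1 = 0.086 /cm) and a near-air medium (mu2 ~ 0)
mu, x, mu1, mu2 = 0.50, 2.0, 0.086, 0.0001
att1, att2 = (mu - mu1) * x, (mu - mu2) * x   # simulated log-attenuations
print(round(mu_two_media(att1, att2, mu1, mu2), 3))  # → 0.5
```

The recovered coefficient matches the one used to generate the data, independently of the assumed thickness, which is the essential property the method exploits for irregular samples.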

In this study the elemental composition of biota, water and sediment from a shallow bay in the Forsmark region has been determined. The report presents data for 48 different elements (Al, As, Ba, Br, C, Ca, Cd, Ce, Cl, Co, Cr, Cs, Cu, Dy, Er, Eu, F, Fe, Gd, Hg, Ho, I, K, Li, Lu, Mg, Mn, N, Na, Nd, Ni, P, Pb, Pr, Ra, Rb, S, Se, Si, Sm, Tb, Th, Ti, Tm, V, Yb, Zn, Zr) in all major functional groups of the coastal ecosystem (phytoplankton, zooplankton, benthic microalgae, macroalgae, macrophytes, benthic herbivores, benthic filter feeders, benthic detrivores, planktivorous fish, benthic omnivorous fish, carnivorous fish, dissolved and particulate matter in the water and the sediment) during spring 2005. The overall aim of the study is to contribute to a better understanding of the ecological properties and processes that govern uptake and transfer of trace elements, heavy metals, radionuclides and other non-essential elements/contaminants in coastal environments of the Baltic Sea. In addition, the data were collected to provide site-specific Bioconcentration Factors (BCF), Biomagnification Factors (BMF), partitioning coefficients (Kd) and element ratios (relative to carbon) for use in ongoing SKB safety assessments. All these values, as well as the element concentration data from which they are derived, are presented here. As such, this is mainly a data report, although initial interpretations of the data are also presented and discussed. Reported data include element concentrations, CNP-stoichiometry, and multivariate data analysis. Elemental concentrations varied greatly between organisms and environmental components, depending on the function of the elements, and the habitat, ecosystem function, trophic level and morphology (taxonomy) of the organisms. The results show, for instance, that food intake and metabolism strongly influence the elemental composition of organisms. The three macrophytes had quite similar elemental composition (despite their taxonomic differences).

Diffusion data and the corresponding detailed insights are particularly important for understanding the related kinetic processes in Fe-based alloys, e.g. solute strengthening, phase transitions, solution treatment, etc. We present a density functional theory study of the diffusivity of self-atoms and solutes (La, Ce, Y and Nb) in fcc Fe. The five-frequency model was employed to calculate the microscopic parameters in the correlation factors of solute diffusion. The interactions of the solutes with a first-nearest-neighbor vacancy (1nn) are all attractive, and can be well understood from the combination of strain-relief effects and electronic effects. It is found that among the investigated species, Ce is the fastest diffusing solute in the fcc Fe matrix, followed by Nb, and the diffusion coefficients of these two solutes are about an order of magnitude higher than that of Fe self-diffusion. The results also show that the diffusion coefficient of La is slightly higher than that of Y, and both are comparable to that of Fe self-diffusion.

A new original formulation of the discrete element method (DEM) based on the soft contact approach is presented in this work. The standard DEM has been enhanced by the introduction of an additional (global) deformation mode caused by the stresses in the particles induced by the contact forces. Uniform stresses and strains are assumed for each particle. The stresses are calculated from the contact forces; the strains are obtained using an inverse constitutive relationship. The strains allow us to obtain deformed particle shapes. The deformed shapes (ellipses) are taken into account in contact detection and in the evaluation of the contact forces. A simple example of uniaxial compression of a rectangular specimen, discretized with equal-sized particles, is simulated to verify the DDEM algorithm. The numerical example shows that particle deformation changes the particle interaction and the distribution of forces in the discrete element assembly. A quantitative study of micro-macro elastic properties proves the enhanced capabilities of the DDEM compared to the standard DEM.

Purpose: To effectively prevent the bending of nuclear fuel elements in the reactor by grinding the end faces of pellets through their mutual sliding. Method: In the manufacturing process of nuclear fuel elements, a plurality of pellets whose sides have been polished are fed one by one, by way of a feeding mechanism, through the central aperture in an electric motor into movable arms, and are retained horizontally along the central axis by being held on the side. Then, the pellet held by one of the arms is urged against another pellet held by the other arm by way of a pressing mechanism, and the mating end faces of both pellets are polished by mutual sliding. Thereafter, the resulting grinding dust is removed by drawing pressurized air, and the pellets are then inserted into a cladding tube. Thus, the pellets are charged into the cladding tube with both polished end faces in contact with each other, whereby the axial force is transmitted uniformly across the end faces to prevent bending of the cladding tube. (Kawakami, Y.)

A recent method for the determination of Clebsch-Gordan coefficients of finite groups is generalised to magnetic groups. Discussion is restricted to unitary-antiunitary representations of type I.

The process of fish salting has been studied by the method of photon correlation spectroscopy. The distribution of salt concentration in the solution and in herring flesh with skin has been found, and the diffusion coefficients and salt concentrations needed for creating a mathematical model of the salting technology have been worked out. The possibility of determining by this method the coefficient of dynamic viscosity of solutions and of different media (minced meat, etc.) has also been considered.

The modeling of microwave propagation problems, including the eigenvalue problem and the scattering problem, is accomplished by the finite element method with a vector functional and a scalar functional. For the eigenvalue problem, propagation modes in waveguides and resonant modes in cavities can be calculated in an arbitrarily shaped structure with inhomogeneous material. Several microwave structures are solved in order to verify the program. One drawback associated with the vector functional is the appearance of spurious or non-physical solutions; a penalty function method has been introduced to reduce the spurious solutions. The adaptive charge method is originally proposed in this thesis to solve the waveguide scattering problem. This method, similar to the VSWR measuring technique, is more efficient than the matrix method for obtaining the reflection coefficient. Two waveguide discontinuity structures are calculated by the two methods and their results are compared. The adaptive charge method is also applied to a microwave plasma excitor. It allows us to understand the role of the different physical parameters of the excitor in the coupling of microwave energy to the plasma mode and to the mode without plasma. (author)
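The VSWR analogy rests on the classical standing-wave relation between the ratio S and the reflection coefficient magnitude. A minimal illustration of that relation only, not of the thesis' adaptive charge implementation:

```python
def reflection_from_vswr(swr):
    """Magnitude of the reflection coefficient recovered from the
    voltage standing wave ratio: |Gamma| = (S - 1)/(S + 1).
    This is the classical relation underlying VSWR measurement."""
    return (swr - 1.0) / (swr + 1.0)

def vswr_from_reflection(gamma_mag):
    """Inverse relation: S = (1 + |Gamma|)/(1 - |Gamma|), for |Gamma| < 1."""
    return (1.0 + gamma_mag) / (1.0 - gamma_mag)

# A standing wave ratio of 3 corresponds to half the wave being reflected
print(reflection_from_vswr(3.0))  # → 0.5
```

In the same way that a slotted-line VSWR measurement yields |Γ| directly from the field pattern, sampling the computed standing wave in the numerical model gives the reflection coefficient without assembling and inverting a full scattering matrix.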

In this study, the strengths and weaknesses of existing methods for determining the dispersion coefficient in the two-dimensional river mixing model were assessed based on hydraulic and tracer data sets acquired from experiments conducted on either laboratory channels or natural rivers. From the results of this study, it can be concluded that, when the longitudinal as well as the transverse dispersion coefficient must be determined in the transient concentration situation, the two-dimensional routing procedures, 2D RP and 2D STRP, can be employed among the observation methods to calculate the dispersion coefficients. For the steady concentration situation, the STRP can be applied to calculate the transverse dispersion coefficient. When tracer data are not available, either theoretical or empirical equations of the estimation method can be used to calculate the dispersion coefficient from the geometric and hydraulic data sets. Application of the theoretical and empirical equations to the laboratory channel showed that the equations by Baek and Seo [3] predicted reasonable values, while the equations by Fischer [23] and Boxwall and Guymer (2003) overestimated by factors of ten to one hundred. Among the existing empirical equations, those by Jeon et al. [28] and Baek and Seo [6] gave agreeable values of the transverse dispersion coefficient for most cases of natural rivers. Further, the theoretical equation by Baek and Seo [5] has the potential to be broadly applied to both laboratory and natural channels.

In this Letter, a generalized fractional sub-equation method is proposed for solving fractional differential equations with variable coefficients. Being concise and straightforward, this method is applied to the space–time fractional Gardner equation with variable coefficients. As a result, many exact solutions are obtained, including hyperbolic function solutions, trigonometric function solutions and rational solutions. It is shown that the considered method provides a very effective, convenient and powerful mathematical tool for solving many other fractional differential equations in mathematical physics. Highlights: • The study of fractional differential equations with variable coefficients plays a role in the applied physical sciences. • It is shown that the proposed algorithm is effective for solving fractional differential equations with variable coefficients. • The obtained solutions may give insight into many considerable physical processes.

For the purpose of speeding up the three-dimensional eXtended Boundary-Node Method (X-BNM), an efficient algorithm for evaluating influence coefficients has been developed. The algorithm can be easily implemented in the X-BNM without using any integration cells. By applying the resulting X-BNM to the Laplace problem, the performance of the algorithm is numerically investigated. The numerical experiments show that, by using the algorithm, the computational costs for evaluating influence coefficients in the X-BNM are reduced considerably. Especially for a large-sized problem, the algorithm performs efficiently, and the computational costs of the X-BNM are close to those of the Boundary Element Method (BEM). In addition, for that problem, the X-BNM shows almost the same accuracy as the BEM. (author)

Platinum group elements (PGE) are of special interest for analytical research due to their economic importance, their chemical peculiarities as catalysts, their medical applications as anticancer drugs, and their possible detrimental environmental impact as exhaust from automobile catalyzers. Natural levels of PGE are so low in concentration that most current analytical techniques approach their limit of detection capacity. In addition, Ru, Rh, Pd, Re, Os, Ir, and Pt analyses still constitute a challenge in accuracy and precision of quantification in natural matrices. Nuclear analytical techniques, such as neutron activation analysis, X-ray fluorescence, or proton-induced X-ray emission (PIXE), which are generally considered reference methods for many analytical problems, are useful as well. However, due to methodological restrictions, they can, in most cases, only be applied after pre-concentration and under special irradiation conditions. This report was prepared following a coordinated research project and a consultants meeting addressing the subject from different viewpoints. The experts involved suggested discussing the issue according to (1) the application, hence the concentration levels encountered, and (2) the method applied for analysis. Each of the different fields of application needs special consideration for sample preparation, PGE pre-concentration, and determination. Additionally, each analytical method requires special attention regarding sensitivity and sample type. Quality assurance/quality control aspects are considered towards the end of the report. It is intended to provide the reader of this publication with state-of-the-art information on the various aspects of PGE analysis and to advise which technique might be most suitable for a particular analytical problem related to platinum group elements. In particular, the many case studies described in detail from the authors' laboratory experience might help to decide which way to go. As in many cases

We aim to extend the scaled boundary finite element method to construct conforming polygon elements. The development of the polygonal finite element is highly anticipated in computational mechanics, as greater flexibility and accuracy can be achieved using these elements. The scaled boundary polygonal finite element will enable new developments in mesh generation, better accuracy from a higher-order approximation and better transition elements in finite element meshes. Polygon elements of arbitrary number of edges and order have been developed successfully. The edges of an element are discretised with line elements. The displacement solution of the scaled boundary finite element method is used in the development of shape functions. They are shown to be smooth and continuous within the element, and to satisfy compatibility and completeness requirements. Furthermore, eigenvalue decomposition has been used to depict element modes, and the outcomes indicate the ability of the scaled boundary polygonal element to express rigid body and constant strain modes. Numerical tests are presented; the patch test is passed and constant strain modes are verified. Accuracy and convergence of the method are also presented, and the performance of the scaled boundary polygonal finite element is verified on Cook's swept panel problem. Results show that the scaled boundary polygonal finite element method outperforms a traditional mesh, with accuracy and convergence achieved from fewer nodes. The proposed method is also shown to be truly flexible, and applies to arbitrary n-gons formed of irregular and non-convex polygons.

Ensuring non-destructive testing of products in industry is an urgent task. Most modern methods for determining the diffusion coefficient in porous materials have been developed for bodies of a given configuration and size. This leads to the need to destroy finished products in order to make experimental samples from them. The purpose of this study is the development of a dynamic method that makes it possible to rapidly determine the diffusion coefficient in finished products made of porous materials without destroying them. The method is designed to investigate the diffusion coefficient of solvents in building constructions made of materials having a porous structure: brick, concrete and aerated concrete, gypsum, cement, gypsum or silicate mortars, gas silicate blocks, heat insulators, etc. A mathematical model of the method is constructed. The influence of the design and operating parameters of the measuring device on the accuracy of the method is studied. Results of applying the developed method to structural porous products are presented.

The development of an efficient algorithm for teleseismic wave field modeling is valuable for calculating the gradients of the misfit function (termed misfit gradients) or Fréchet derivatives when the teleseismic waveform is used for adjoint tomography. Here, we introduce an element-by-element parallel spectral-element method (EBE-SEM) for the efficient modeling of teleseismic wave field propagation in a reduced geology model. Under the plane-wave assumption, the frequency-wavenumber (FK) technique is implemented to compute the boundary wave field used to construct the boundary condition of the teleseismic wave incidence. To reduce the memory required for the storage of the boundary wave field for the incidence boundary condition, a strategy is introduced to efficiently store the boundary wave field on the model boundary. The perfectly matched layer absorbing boundary condition (PML ABC) is formulated using the EBE-SEM to absorb the scattered wave field from the model interior. The misfit gradient can easily be constructed in each time step during the calculation of the adjoint wave field. Three synthetic examples demonstrate the validity of the EBE-SEM for teleseismic wave field modeling and misfit gradient calculation.

Presentation of the possible use of finite-element methods in food processing. Examples from diffusion studies are given.

The distance between fuel elements contained in a pool is measured in a contactless manner, even for a narrow gap of less than 1 mm. The equipment for measuring the distance between spent fuel elements of a spent fuel assembly in a nuclear reactor comprises an optical fiber scope, a lens, an industrial TV camera and a monitor TV. The top end of the optical fiber scope is inserted between the fuel elements to be measured. The resulting image is displayed on the TV screen to measure the distance between the fuel elements. The measured results are compared with a previously prepared calibration curve to determine the distance between the fuel elements. The distance between the fuel elements can thus be determined in the pool of a power plant without dismantling the fuel assembly, in order to investigate the state of bending and estimate the fuel's working life. (I.S.)

This research thesis addresses the field of fluid-wall thermal exchanges, in which the notion of exchange coefficient is notably useful to design, size and optimise devices. A first part reports a bibliographical study which gives an overview of solutions envisaged to determine the convection coefficient in permanent regime with the use of flow sensors, as well as in transient regime. Then, the author reports the development of an unsteady method based on the analysis of the cooling kinetics of the front face of a convecting wall, after a unique energetic perturbation (an infinitely brief pulse, or a finite-duration energy step). This method is applied to the general case (wall with finite thickness) and to the case of a semi-infinite wall, which is typical of materials that are weak thermal conductors. This is extended to the case of good thermal conductors by considering a thermally thin wall. After a detailed description of the experimental bench, the above-mentioned solutions are applied to insulating and good thermally conducting materials. In order to validate the results of the analysis in transient regime, they are compared with measurements performed in permanent regime with a flow-metering technique. The study of the principle of the dissipation-based flow sensor and its operation are reported. Experimental results are presented for both methods (pulse and flow sensor) and compared in order to highlight the interest of the unsteady method. [Abstract in French, translated:] Although difficult to measure, the convection coefficient remains a quantity necessary for the calculation and optimisation of any thermal system. Improvements in thermal sensors now make it possible to design an optical method, usable at a distance and non-destructive. To this end, we propose a measurement procedure in transient regime based on pulsed photothermal radiometry. The analysis of the relaxation regime of a wall, after a sudden temperature rise, makes it possible to recover
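
For the thermally thin wall case mentioned above, the cooling kinetics reduce to a lumped-capacitance relaxation, T(t) − T∞ = (T0 − T∞)·exp(−hAt/(mc)), so the convection coefficient can be read off the log-linear slope of the front-face temperature decay. A minimal sketch with assumed (hypothetical) wall properties:

```python
import math

# Hypothetical sketch of the thermally thin wall limit: recover the
# convection coefficient h from the exponential relaxation of the wall
# temperature after a brief energy pulse (lumped-capacitance model).
m, c, A = 0.5, 900.0, 0.05        # kg, J/(kg.K), m^2 -- assumed values
h_true, T_inf, T0 = 25.0, 20.0, 80.0

tau = m * c / (h_true * A)        # time constant, s
times = [0.0, 60.0, 120.0, 180.0]
temps = [T_inf + (T0 - T_inf) * math.exp(-t / tau) for t in times]

# Estimate h from the least-squares slope of ln(T - T_inf) versus t.
xs = times
ys = [math.log(T - T_inf) for T in temps]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
h_est = -slope * m * c / A
print(round(h_est, 3))
```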

The few-group constants, including diffusion coefficients, are generated from the assembly calculation results. Once the assembly calculation is done, the cross sections (XSs) are spatially homogenized, and a critical spectrum calculation is performed in order to take into account the neutron leakage of the lattice. The diffusion coefficient is also generated through the critical spectrum calculation. Three different methods of critical spectrum calculation, the B1 method, the P1 method, and the fundamental mode (FM) calculation method, are considered in this paper. The diffusion coefficients can also be affected by the transport approximation used for the transport XS in the assembly transport lattice calculation to account for anisotropic scattering effects. The outflow transport approximation and the inflow transport approximation are investigated in this paper. The accuracy of the few-group data, especially the diffusion coefficients, has been studied to optimize the combination of the transport correction methods and the critical spectrum calculation methods using the UNIST lattice physics code STREAM. The methodologies to calculate the diffusion coefficients have been reviewed, and their performance has been investigated with an LWR core problem. The combination of the inflow transport approximation and the fundamental mode critical spectrum calculation provides the highest accuracy, with the smallest errors in terms of assembly power distribution.
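
Once a transport cross section is in hand, the step from the lattice calculation to a few-group diffusion coefficient is the simple relation D_g = 1/(3Σ_tr,g). The sketch below illustrates this with the outflow transport approximation Σ_tr = Σ_t − μ̄Σ_s; all cross-section values are illustrative, not STREAM output:

```python
# Hypothetical two-group sketch: outflow transport approximation
# sigma_tr = sigma_t - mu_bar * sigma_s, then D_g = 1 / (3 * sigma_tr).
def transport_xs(sigma_t, sigma_s, mu_bar):
    return sigma_t - mu_bar * sigma_s

def diffusion_coefficient(sigma_tr):
    return 1.0 / (3.0 * sigma_tr)

groups = [(0.65, 0.60), (1.50, 1.35)]   # (total, scattering) XS in 1/cm
mu_bar = 0.30                            # average scattering cosine (assumed)
Ds = [diffusion_coefficient(transport_xs(st, ss, mu_bar))
      for st, ss in groups]
print([round(d, 4) for d in Ds])
```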

The finite element method (FEM) has become a very popular technique for the analysis of fluid-film bearings in the last few years. These bearings are extensively used in nuclear industry applications, such as in moderator pumps and main coolant pumps. This report gives the methodology for the solution of Reynold's equation using FEM and its implementation in the in-house FE software LUBAN. It also deals with the mathematical basis and the algorithm to account for the cavitation phenomena, which makes these problems non-linear in nature. The dynamic coefficients of bearings are evaluated by a one-step approach using variational principles. These coefficients are useful for the dynamic characterisation of fluid-film bearings. Several problems have been solved using this code, including two real-life problems: a circumferentially grooved journal bearing for which experimental results are available, and the bearing of the moderator pump of a 500 MWe PHWR. The results obtained for the sample problems are in good agreement with the published literature. (author). 9 refs., 14 figs., 5 tabs., 2 ills

The temperature reactivity coefficient was estimated on the basis of noise measurements performed in a PWR. The magnitude of the coefficient was evaluated by relating the values of the APSD and CPSD between ex-core neutron detector signals and fuel assembly outlet thermocouple signals in the low-frequency range. Comparison with δρ/δT measurements performed in PWRs by standard methods supports the validity of the results. (author)

Concentration determination is an important method of protein characterization required in the development of protein therapeutics. There are many known methods for determining the concentration of a protein solution, but the easiest to implement in a manufacturing setting is absorption spectroscopy in the ultraviolet region. For typical proteins composed of the standard amino acids, absorption at wavelengths near 280 nm is due to the three amino acid chromophores tryptophan, tyrosine, and phenylalanine in addition to a contribution from disulfide bonds. According to the Beer-Lambert law, absorbance is proportional to concentration and path length, with the proportionality constant being the extinction coefficient. Typically the extinction coefficient of proteins is experimentally determined by measuring a solution absorbance then experimentally determining the concentration, a measurement with some inherent variability depending on the method used. In this study, extinction coefficients were calculated based on the measured absorbance of model compounds of the four amino acid chromophores. These calculated values for an unfolded protein were then compared with an experimental concentration determination based on enzymatic digestion of proteins. The experimentally determined extinction coefficient for the native proteins was consistently found to be 1.05 times the calculated value for the unfolded proteins for a wide range of proteins with good accuracy and precision under well-controlled experimental conditions. The value of 1.05 times the calculated value was termed the predicted extinction coefficient. Statistical analysis shows that the differences between predicted and experimentally determined coefficients are scattered randomly, indicating no systematic bias between the values among the proteins measured. The predicted extinction coefficient was found to be accurate and not subject to the inherent variability of experimental methods. We propose the use of a
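
The calculation described above can be sketched as follows. The per-chromophore molar absorptivities at 280 nm are commonly used literature values and the protein composition is hypothetical; the 1.05 factor is the native-state correction reported in the study:

```python
# Hypothetical sketch: sum chromophore contributions to get a calculated
# extinction coefficient for the unfolded protein, scale by 1.05 for the
# native protein, then apply Beer-Lambert (A = epsilon * l * c). The
# absorptivities and composition below are assumed for illustration.
EPS_280 = {"Trp": 5500.0, "Tyr": 1490.0, "cystine": 125.0}  # M^-1 cm^-1

def calc_extinction(n_trp, n_tyr, n_cystine):
    return (n_trp * EPS_280["Trp"] + n_tyr * EPS_280["Tyr"]
            + n_cystine * EPS_280["cystine"])

def concentration(absorbance, epsilon, path_cm=1.0):
    return absorbance / (epsilon * path_cm)   # mol/L

eps_unfolded = calc_extinction(n_trp=2, n_tyr=6, n_cystine=4)
eps_native = 1.05 * eps_unfolded              # predicted extinction coeff.
c = concentration(absorbance=0.5, epsilon=eps_native)
print(eps_unfolded, round(eps_native, 1), round(c * 1e6, 2))  # conc. in uM
```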

Diffusion is a dominant mechanism regulating the transport of released nuclides. The through-diffusion method is typically applied to determine diffusion coefficients (D). Depending on the design of the experiment, the concentrations in the source term [i.e., the inlet reservoir (IR)] or the end term [i.e., the outlet reservoir (OR)] can be fixed or can vary. The combinations involve four distinct models (the CC-CC model, CC-VC model, VC-CC model, and VC-VC model). Studies discussing the VC-CC model are scant. An analytical method considering the decay effect is required to accurately interpret the results of radioactive nuclide diffusion experiments. Therefore, we developed a CC-CC model and a CC-VC model with a decay effect, together with simplified formulas of these two models for determining the diffusion coefficient (i.e., the CC-CC method and the CC-VC method). We also propose two simplified methods using the VC-VC model to determine the diffusion coefficient straightforwardly from the concentration variation in the IR and OR. More importantly, the chief advantage of the proposed method over others is that one can derive three diffusion coefficients from a single run of the experiment. In addition, applying our CC-VC method to data reported in Radiochimica Acta 96:111-117, 2008, and J Contam Hydrol 35:55-65, 1998, yielded comparable diffusion coefficients lying in the same order of magnitude. Furthermore, we propose a formula to determine the conceptual critical time (Tc), which is particularly helpful for choosing between the CC-VC and VC-VC methods. Based on our proposed method, it becomes possible to calculate the diffusion coefficient from a through-diffusion experiment in a shorter period of time. (author)
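
For orientation, the classical CC-CC analysis (constant inlet and outlet concentrations, without the decay correction derived above) recovers an apparent diffusion coefficient from the time lag of the cumulative breakthrough curve, t_lag = L²/(6·Da). A minimal sketch with assumed sample dimensions:

```python
# Hypothetical sketch of the classical CC-CC through-diffusion analysis:
# at steady state the cumulative mass curve intersects the time axis at
# t_lag = L^2 / (6 * Da), so Da follows directly from the intercept.
def apparent_diffusion(thickness_m, time_lag_s):
    return thickness_m ** 2 / (6.0 * time_lag_s)

L = 0.01          # sample thickness, m (assumed)
t_lag = 2.0e5     # time lag read from the breakthrough curve, s (assumed)
Da = apparent_diffusion(L, t_lag)
print(f"{Da:.3e}")
```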

The main target of this study is to introduce a new method for calculating sensitivity coefficients through the union of the differential method and generalized perturbation theory, the two methods generally used in reactor physics to obtain such quantities. Separately, these two methods have issues that make the calculation of sensitivity coefficients slow or computationally exhausting. However, by putting them together, it is possible to remedy these issues and build a new equation for the sensitivity coefficient. The method introduced in this study was applied to a PWR reactor, where a sensitivity analysis was performed for the production and conversion rate of 239Pu during 120 days (1 cycle) of burnup. The computational code used for both the burnup and the sensitivity analysis, CINEW, was developed in this study, and all the results were compared with codes widely used in reactor physics, such as CINDER and SERPENT. The new mathematical method for calculating sensitivity coefficients and the code CINEW provide good numerical agility as well as efficiency and reliability, since the new method, when compared with traditional ones, provides satisfactory results even when the other methods use different mathematical approaches. The burnup analysis performed with the code CINEW was compared with the code CINDER, showing an acceptable variation, though CINDER presents some computational issues due to the period in which it was built. The originality of this study is the application of the method to problems involving temporal dependence and, not least, the elaboration of the first national code for burnup and sensitivity analysis. (author)
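
As a point of reference for what the sensitivity coefficients above measure, the crude direct (differential) route perturbs a parameter and differences the response: S = (p/R)·dR/dp. The toy response model below is purely illustrative; in the study R would come from a burnup calculation:

```python
import math

# Relative sensitivity S = (p / R) * dR/dp via central finite differences.
# The response model R(p) is a hypothetical stand-in for a burnup code.
def sensitivity(response, p, rel_step=1e-4):
    dp = p * rel_step
    dRdp = (response(p + dp) - response(p - dp)) / (2.0 * dp)
    return p * dRdp / response(p)

R = lambda p: 1.0 - math.exp(-p)   # conversion-like saturation curve (toy)
S = sensitivity(R, 0.5)
print(round(S, 4))
```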

This paper deals with the moisture diffusion coefficient of Dahurian Larch (Larix gmelinii Rupr.) by use of the Finite Difference Method (FDM). To obtain moisture distributions, dimensional boards of Dahurian Larch were dried; test samples were cut from them and sliced evenly into 9 pieces at different drying periods, so that moisture distributions at different locations and times across the thickness of Dahurian Larch were obtained with a weighing method. With these experimental data, the FDM was used to solve Fick's one-dimensional unsteady-state diffusion equation, and the moisture diffusion coefficient across the thickness at a specified time was obtained. Results indicated that the moisture diffusion coefficient decreased from the surface to the center of the Dahurian Larch wood, and that it decreased with decreasing moisture content at constant wood temperature; as the wood temperature increased, the moisture diffusion coefficient increased, and the effect of wood temperature on the moisture diffusion coefficient was more significant than that of moisture content. Moisture diffusion coefficients differed between the two experiments due to the differing diffusivity of the specimens.
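
The forward problem underlying the FDM analysis above can be sketched as an explicit finite-difference step for Fick's one-dimensional unsteady diffusion equation, ∂M/∂t = D·∂²M/∂x². The diffusion coefficient, geometry and initial moisture profile below are illustrative, not the measured larch values:

```python
# Hypothetical sketch: explicit finite differences for 1D unsteady
# diffusion across the board thickness, 9 slices as in the experiment.
def step(profile, D, dx, dt):
    """One explicit Euler step; requires r = D*dt/dx^2 <= 0.5 for stability."""
    r = D * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable"
    new = profile[:]
    for i in range(1, len(profile) - 1):
        new[i] = profile[i] + r * (profile[i-1] - 2*profile[i] + profile[i+1])
    # surfaces held at equilibrium moisture content (Dirichlet boundaries)
    return new

D, dx, dt = 1.0e-10, 0.002, 10.0          # m^2/s, m, s (assumed)
moisture = [0.10] + [0.60] * 7 + [0.10]   # 9 slices: wet core, dry surfaces
for _ in range(1000):
    moisture = step(moisture, D, dx, dt)
print([round(m, 4) for m in moisture])    # center dries slowest
```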

A simple, non-invasive method is presented which permits determination of the rCBF and, in addition, of the distribution coefficient of the grey matter. The latter, which is closely correlated with cerebral metabolism, has so far only been determined in vitro. The new method will be a means to check its accuracy. (orig.)

A technique for disassembling a nuclear reactor fuel element without destroying the individual fuel pins and other structural components from which the element is assembled is described. A traveling bridge and trolley span a water-filled spent fuel storage pool and support a strongback. The strongback is under water and provides a working surface on which the spent fuel element is placed for inspection and for the manipulation associated with disassembly and assembly. To remove, in a non-destructive manner, the grids that hold the fuel pins in the proper relative positions within the element, bars are inserted through apertures in the grids with the aid of special tools. These bars are rotated to flex the adjacent grid walls and, in this way, relax the physical engagement between protruding portions of the grid walls and the associated fuel pins. With the grid structure so flexed to relax the physical grip on the individual fuel pins, the pins can be withdrawn for inspection or replacement as necessary, without any need to destroy fuel element components.

The current-pulse E_oc relaxation method and its application to the determination of diffusion coefficients in electrochemically synthesized polypyrrole thin films is described. Diffusion coefficients for such films in Et4NBF4/MeCN are determined for a series of submicron film thicknesses. Measurement of the double-layer capacitance, C_dl, and the resistance, R_u, of polypyrrole thin films as a function of potential obtained with the galvanostatic pulse method is reported. Measurements of the electrolyte concentration in reduced polypyrrole films are also presented to aid in the interpretation of the data.

Recently an efficient method for the solution of the partial symmetric eigenproblem (DACG, deflated-accelerated conjugate gradient) was developed, based on the conjugate gradient (CG) minimization of successive Rayleigh quotients over deflated subspaces of decreasing size. In this article four different choices of the coefficient β_k required at each DACG iteration for the computation of the new search direction p_k are discussed. The "optimal" choice is the one that yields the same asymptotic convergence rate as the CG scheme applied to the solution of linear systems. Numerical results point out that the optimal β_k leads to a very cost-effective algorithm in terms of CPU time in all the sample problems presented. Various preconditioners are also analyzed. It is found that DACG using the optimal β_k and (LL^T)^(-1) as a preconditioner, L being the incomplete Cholesky factor of A, proves a very promising method for the partial eigensolution. It appears to be superior to the Lanczos method in the evaluation of the 40 leftmost eigenpairs of five finite element problems, and particularly for the largest problem, with size equal to 4560, for which the speed gain turns out to fall between 2.5 and 6.0, depending on the eigenpair level.
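
The core idea behind DACG can be illustrated, in a much simplified form, by minimizing the Rayleigh quotient q(x) = xᵀAx / xᵀx with plain normalized gradient descent, which drives x toward the eigenvector of the smallest eigenvalue; DACG replaces this steepest-descent direction with conjugate directions p_k built from β_k and deflates converged eigenvectors. A toy 2×2 sketch (not the DACG algorithm itself):

```python
# Hypothetical sketch: gradient descent on the Rayleigh quotient converges
# to the smallest eigenvalue of a symmetric matrix A.
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def rayleigh(A, x):
    Ax = matvec(A, x)
    return (sum(xi * yi for xi, yi in zip(x, Ax))
            / sum(xi * xi for xi in x))

A = [[4.0, 1.0], [1.0, 3.0]]        # symmetric; eigenvalues (7 +/- sqrt(5))/2
x = [1.0, 1.0]
for _ in range(500):
    q = rayleigh(A, x)
    Ax = matvec(A, x)
    grad = [2.0 * (axi - q * xi) for axi, xi in zip(Ax, x)]
    x = [xi - 0.2 * gi for xi, gi in zip(x, grad)]
    nrm = sum(xi * xi for xi in x) ** 0.5
    x = [xi / nrm for xi in x]      # renormalize to keep the step well scaled
q_min = rayleigh(A, x)
print(round(q_min, 6))
```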

In this paper, we develop a multiscale mortar multipoint flux mixed finite element method for second order elliptic problems. The equations in the coarse elements (or subdomains) are discretized on a fine grid scale by a multipoint flux mixed finite element method.

Mass attenuation coefficients for 12 selected moderate-to-high atomic-number elements have been obtained from good-geometry measurements made at five 241Am photon energies of significant emission intensity. Particular interest focuses on measured values for photon energies close to absorption edges. Comparisons with renormalized cross-section predictions indicate agreement to within stated error limits for the majority of cases. Significant discrepancies (>10%) are noted for Ta at 17.8 and 26.3 keV and W at 59.5 keV. Some support for a discrepancy between measurement and theory for W in the region of 60 keV is found in the reported measurements of others. (author)
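
The good-geometry analysis behind such measurements can be sketched from narrow-beam attenuation, I = I0·exp(−(μ/ρ)·ρt), so a single transmission ratio yields the mass attenuation coefficient. The numbers below are illustrative, not the reported Ta or W values:

```python
import math

# Hypothetical sketch: recover mu/rho (cm^2/g) from one narrow-beam
# transmission measurement; areal density = rho * t (g/cm^2).
def mass_attenuation(I0, I, areal_density_g_cm2):
    return math.log(I0 / I) / areal_density_g_cm2

mu_rho = mass_attenuation(I0=10000.0, I=2500.0, areal_density_g_cm2=0.75)
print(round(mu_rho, 4))
```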

Diffusion cell and diffusion wafer experiments were conducted to compare methods for estimating matrix diffusion coefficients in rock core samples from Pahute Mesa at the Nevada Nuclear Security Site (NNSS). A diffusion wafer method, in which a solute diffuses out of a rock matrix that is pre-saturated with water containing the solute, is presented as a simpler alternative to the traditional through-diffusion (diffusion cell) method. Both methods yielded estimates of matrix diffusion coefficients that were within the range of values previously reported for NNSS volcanic rocks. The difference between the estimates of the two methods ranged from 14 to 30%, and there was no systematic high or low bias of one method relative to the other. From a transport modeling perspective, these differences are relatively minor when one considers that other variables (e.g., fracture apertures, fracture spacings) influence matrix diffusion to a greater degree and tend to have greater uncertainty than diffusion coefficients. For the same relative random errors in concentration measurements, the diffusion cell method yields diffusion coefficient estimates that have less uncertainty than the wafer method. However, the wafer method is easier and less costly to implement and yields estimates more quickly, thus allowing a greater number of samples to be analyzed for the same cost and time. Given the relatively good agreement between the methods, and the lack of any apparent bias between the methods, the diffusion wafer method appears to offer advantages over the diffusion cell method if better statistical representation of a given set of rock samples is desired.

Highlights: • A new algorithm is proposed to reduce memory consumption for sensitivity analysis. • The fission matrix method is used to generate adjoint fission source distributions. • Sensitivity analysis is performed on a detailed 3D full-core benchmark with RMC. - Abstract: Recently, there has been a need to develop advanced methods of computing eigenvalue sensitivity coefficients to nuclear data in continuous-energy Monte Carlo codes. One of these methods is the iterated fission probability (IFP) method, which is adopted by most Monte Carlo codes that have the capability of computing sensitivity coefficients, including the Reactor Monte Carlo code RMC. Though it is theoretically accurate, the IFP method faces the challenge of huge memory consumption. It may therefore sometimes produce poor sensitivity coefficients, since the number of particles in each active cycle may not be sufficient due to the limitation of computer memory capacity. In this work, two algorithms of the Contribution-Linked eigenvalue sensitivity/Uncertainty estimation via Tracklength importance CHaracterization (CLUTCH) method, namely, the collision-event-based algorithm (C-CLUTCH), which is also implemented in SCALE, and the fission-event-based algorithm (F-CLUTCH), which is put forward in this work, are investigated and implemented in RMC to reduce the memory requirements for computing eigenvalue sensitivity coefficients. While the C-CLUTCH algorithm requires storing the relevant reaction rates of every collision, the F-CLUTCH algorithm only stores the relevant reaction rates of every fission point. In addition, the fission matrix method is put forward to generate the adjoint fission source distribution for the CLUTCH method to compute sensitivity coefficients. These newly proposed approaches implemented in the RMC code are verified by a SF96 lattice model and the MIT BEAVRS benchmark problem. The numerical results indicate that the accuracy of the F-CLUTCH algorithm is the same as that of the C-CLUTCH algorithm.
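
The fission matrix idea mentioned above can be sketched as follows: the forward fission source is the dominant eigenvector of the fission matrix F, and the adjoint (importance) source is the dominant eigenvector of Fᵀ. A 3-region toy matrix (values hypothetical), solved by power iteration:

```python
# Hypothetical sketch: power iteration on a toy fission matrix and on its
# transpose gives the forward and adjoint fission source distributions.
def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def power_iteration(M, iters=200):
    v = [1.0] * len(M)
    k = 1.0
    for _ in range(iters):
        w = matvec(M, v)
        k = sum(w)               # eigenvalue estimate once sum(v) == 1
        v = [x / k for x in w]   # renormalize the source to unit total
    return k, v

# toy 3-region fission matrix (nonsymmetric on purpose)
F = [[0.5, 0.3, 0.0],
     [0.1, 0.5, 0.1],
     [0.0, 0.3, 0.5]]
Ft = [list(col) for col in zip(*F)]  # transpose -> adjoint problem

k_fwd, src = power_iteration(F)      # eigenvalue and forward source
k_adj, adj = power_iteration(Ft)     # same eigenvalue, adjoint source
print(round(k_fwd, 6), round(k_adj, 6))
```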

Two methods to measure the diffusion coefficient of a species in a liquid by optical interferometry were compared. The methods were tested on a 1.75 M NaCl aqueous solution diffusing into water at 26 °C. Results were D = 1.587 × 10^-9 m^2 s^-1 with the first method and D = 1.602 × 10^-9 m^2 s^-1 with the second method. Monte Carlo simulation was used to assess the possible dispersion of these results. The standard uncertainties were found to be of the order of 0.05 × 10^-9 m^2 s^-1 with both methods. We found that the value of the diffusion coefficient obtained by either method is very sensitive to the magnification of the optical system, and that if diffusion is slow the measurement of time does not need to be very accurate.
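
The Monte Carlo assessment described above can be sketched by perturbing the measured inputs with assumed standard uncertainties and observing the spread of the resulting diffusion coefficient. Since D scales with the square of a length and inversely with time, the quadratic sensitivity to magnification and the weak sensitivity to time follow directly; all numbers below are hypothetical:

```python
import math
import random

# Hypothetical sketch: propagate assumed input uncertainties (optical
# magnification, elapsed time) through D ~ w^2 / (4 t) by Monte Carlo.
random.seed(12345)

def diffusion_coefficient(width_px, magnification_m_per_px, elapsed_s):
    w = width_px * magnification_m_per_px        # physical mixing width, m
    return w * w / (4.0 * elapsed_s)

trials = []
for _ in range(20000):
    mag = random.gauss(1.0e-5, 1.5e-7)           # m/pixel, 1.5 % rel. unc.
    t = random.gauss(3600.0, 5.0)                # s, 0.14 % rel. unc.
    trials.append(diffusion_coefficient(480.0, mag, t))

mean = sum(trials) / len(trials)
std = math.sqrt(sum((d - mean) ** 2 for d in trials) / (len(trials) - 1))
print(f"{mean:.3e} {std:.3e}")   # magnification dominates the spread
```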

The required coefficient of friction (RCOF) is an important predictor for slip incidents. Despite the wide use of the RCOF, there is no standardised method for identifying the RCOF from ground reaction forces. This article presents a comparison of the outcomes from seven different methods, derived from those reported in the literature, for identifying the RCOF from the same data. While commonly used methods are based on a normal force threshold, a percentage of the stance phase or the time from heel contact, a newly introduced hybrid method is based on a combination of normal force, time and direction of increase in the coefficient of friction. Although no major differences were found with these methods in more than half the strikes, significant differences were found in a substantial portion of strikes. Potential problems with some of these methods were identified and discussed, and they appear to be overcome by the hybrid method. No standard method exists for determining the required coefficient of friction (RCOF), an important predictor for slipping. In this study, RCOF values from a single data set, obtained using various methods from the literature, differed considerably for a substantial portion of strikes. A hybrid method may yield improved results.
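
The simplest family of methods above can be sketched as follows: form the instantaneous ratio |F_shear|/F_normal over a strike and take its peak, restricted to samples where the normal force exceeds a threshold so that the unstable ratios near heel contact and toe-off are excluded. The force samples below are synthetic:

```python
# Hypothetical sketch: normal-force-threshold identification of the RCOF
# from ground reaction force samples (all values synthetic, in newtons).
def rcof(fx, fz, normal_threshold):
    """fx: shear force samples; fz: normal force samples."""
    ratios = [abs(x) / z for x, z in zip(fx, fz) if z > normal_threshold]
    return max(ratios) if ratios else None

fz = [40.0, 200.0, 650.0, 800.0, 750.0, 300.0, 50.0]   # normal force
fx = [30.0, 40.0, 110.0, 96.0, 60.0, 24.0, 20.0]       # shear force
r = rcof(fx, fz, normal_threshold=100.0)
print(r)   # the 30/40 spike at heel contact is excluded by the threshold
```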

A study has been made of the use of the neutron wave and pulse propagation method for measurement of thermal neutron diffusion parameters. Earlier work on homogeneous and heterogeneous media is reviewed. A new method is sketched for the determination of the diffusion coefficient for samples of limited size. The principle is to place a relatively thin slab of the material between two blocks of a medium with known properties. The advantages and disadvantages of the method are discussed. (author)

We developed a method for analyzing the free vibration of a structure regarded as a distributed system, by combining the Wittrick-Williams algorithm and the transfer dynamic stiffness coefficient method. A computational algorithm was formulated for analyzing the free vibration of a straight-line beam regarded as a distributed system, to explain the concept of the developed method. To verify the effectiveness of the developed method, the natural frequencies of straight-line beams were computed using the finite element method, transfer matrix method, transfer dynamic stiffness coefficient method, the exact solution, and the developed method. By comparing the computational results of the developed method with those of the other methods, we confirmed that the developed method exhibited superior performance over the other methods in terms of computational accuracy, cost and user convenience.
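
The Wittrick-Williams idea used above can be illustrated on a minimal discrete analogue (not the paper's distributed-system formulation): for a trial frequency ω, the number of natural frequencies below ω equals the sign count of K − ω²M, obtained from the pivots of a symmetric elimination. The 3-dof spring-mass chain here is a made-up example.

```python
import numpy as np

def count_below(w, K, M):
    """Number of natural frequencies below trial frequency w: the sign count
    of K - w^2 M (Wittrick-Williams for a discrete system, so J0 = 0)."""
    A = (K - w**2 * M).astype(float)
    n = A.shape[0]
    neg = 0
    for i in range(n):
        if A[i, i] < 0:          # sign of the i-th pivot of the LDL^T factor
            neg += 1
        # symmetric Gaussian elimination step
        A[i+1:, i+1:] -= np.outer(A[i+1:, i], A[i+1:, i]) / A[i, i]
    return neg

# 3-dof fixed-free spring-mass chain, unit masses and unit spring stiffnesses
K = np.array([[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 1.0]])
M = np.eye(3)

exact = np.sqrt(np.linalg.eigvalsh(K))      # ~0.445, 1.247, 1.802 rad/s
print(count_below(1.1, K, M), exact.round(3))   # 1 frequency lies below 1.1
```

Bisecting on this count brackets each natural frequency without ever missing a root, which is the property that makes the algorithm attractive when combined with exact dynamic stiffness coefficients.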

The random coefficients logit model allows a more realistic representation of agents' behavior. However, the estimation of that model may involve simulation, which may become impractical with many random coefficients because of the curse of dimensionality. In this paper, the traditional maximum simulated likelihood (MSL) method is compared with the alternative expectation-maximization (EM) method, which does not require simulation. Previous literature had shown that for cross-sectional data, MSL outperforms the EM method in the ability to recover the true parameters and estimation time… with cross-sectional or with panel data, and (d) EM systematically attained more efficient estimators than the MSL method. The results imply that if the purpose of the estimation is only to determine the ratios of the model parameters (e.g., the value of time), the EM method should be preferred. For all…

…of obtaining the data becomes quite time consuming, thus increasing the cost of design. In this paper, practical methods to define scattering coefficients, based on an approach of modeling surface scattering and scattering caused by the limited size of a surface as well as edge diffraction, are presented…

We present an approach to the impulsive response method for solving linear constant-coefficient ordinary differential equations based on the factorization of the differential operator. The approach is elementary; we assume only a basic knowledge of calculus and linear algebra. In particular, we avoid the use of distribution theory, as well as of…

The variable storage coefficient (VSC) method has been used for streamflow routing in continuous hydrological simulation models such as the Agricultural Policy/Environmental eXtender (APEX) and the Soil and Water Assessment Tool (SWAT) for more than 30 years. APEX operates on a daily time step and ...

Habitat association models are commonly developed for individual animal species using generalized linear modeling methods such as logistic regression. We considered the issue of grouping species based on their habitat use so that management decisions can be based on sets of species rather than individual species. This research was motivated by a study of western landbirds in northern Idaho forests. The method we examined was to separately fit models to each species and to use a generalized Mahalanobis distance between coefficient vectors to create a distance matrix among species. Clustering methods were used to group species from the distance matrix, and multidimensional scaling methods were used to visualize the relations among species groups. Methods were also discussed for evaluating the sensitivity of the conclusions to outliers or influential data points. We illustrate these methods with data from the landbird study conducted in northern Idaho. Simulation results are presented to compare the success of this method to alternative methods using Euclidean distance between coefficient vectors and to methods that do not use habitat association models. These simulations demonstrate that our Mahalanobis-distance-based method was nearly always better than Euclidean-distance-based methods or methods not based on habitat association models. The methods used to develop candidate species groups are easily explained to other scientists and resource managers since they mainly rely on classical multivariate statistical methods. © 2008 Springer Science+Business Media, LLC.
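
The grouping pipeline described above can be sketched in a few lines. The coefficient vectors and the pooled covariance are fabricated for illustration (the paper uses per-species fitted models and a generalized Mahalanobis distance); the mechanics — distance matrix from coefficient vectors, then clustering — are the same.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(1)

# Hypothetical fitted logistic-regression coefficient vectors, one row per
# species: two species near one habitat response, three near another.
B = np.vstack([rng.normal(0.0, 0.2, 4) + c for c in (0, 0, 3, 3, 3)])
S = np.diag([0.04, 0.09, 0.04, 0.09])      # assumed pooled coefficient covariance
S_inv = np.linalg.inv(S)

n = B.shape[0]
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        d = B[i] - B[j]
        D[i, j] = D[j, i] = np.sqrt(d @ S_inv @ d)   # Mahalanobis distance

# Group species from the distance matrix (average-linkage clustering)
Z = linkage(squareform(D), method="average")
groups = fcluster(Z, t=2, criterion="maxclust")
print(groups)
```

With this synthetic separation the first two species fall in one group and the remaining three in the other; multidimensional scaling on D would then visualize the group layout.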

A new measurement method for measuring the mean fuel temperature as well as the fuel-to-coolant heat transfer coefficient of fast breeder reactor subassemblies (SA) is reported. The method is based on the individual heat balance of fuel SA's after fast reactor shut-downs and uses only the plant's normal SA outlet temperature and neutron power signals. The method was used successfully at the French breeder prototype Super Phenix 1. The mean SA fuel temperature as well as the heat transfer coefficient of all SPX SA's have been determined at power levels between 15 and 90% of nominal power and increasing fuel burn-up from 3 to 83 EFPD (equivalent full-power days). The measurements also provided fuel and whole-SA time constants. The estimated accuracy of the measured fuel parameters is on the order of 10%. Fuel temperatures and SA outlet temperature transients were also calculated with the SPX1 systems code DYN2 for exactly the same fuel and reactor operating parameters as in the experiments. Measured fuel temperatures were higher than calculated ones in all cases. The difference between measured and calculated core mean values increases from 50 K at low power to 180 K at 90% n.p. This is about double the experimental error margins. Measured SA heat transfer coefficients are nearly 20% lower than the corresponding heat transfer parameters used in the calculations. Discrepancies found between measured and calculated results also indicate that either the transient heat transfer in the gap between fuel and cladding (gap conductance) might not be exactly reproduced in the computer code or that the gap in the fresh fuel was larger than assumed in the calculations. (orig.)

The single-well, "push-pull" test method is useful for obtaining information on a wide variety of aquifer physical, chemical, and microbiological characteristics. A push-pull test consists of the pulse-type injection of a prepared test solution into a single monitoring well followed by the extraction of the test solution/ground water mixture from the same well. The test solution contains a conservative tracer and one or more reactants selected to investigate a particular process. During the extraction phase, the concentrations of tracer, reactants, and possible reaction products are measured to obtain breakthrough curves for all solutes. This paper presents a simplified method of data analysis that can be used to estimate a first-order reaction rate coefficient from these breakthrough curves. Rate coefficients are obtained by fitting a regression line to a plot of normalized concentrations versus elapsed time, requiring no knowledge of aquifer porosity, dispersivity, or hydraulic conductivity. A semi-analytical solution to the advective-dispersion equation is derived and used in a sensitivity analysis to evaluate the ability of the simplified method to estimate reaction rate coefficients in simulated push-pull tests in a homogeneous, confined aquifer with a fully penetrating injection/extraction well and varying porosity, dispersivity, test duration, and reaction rate. A numerical flow and transport code (SUTRA) is used to evaluate the ability of the simplified method to estimate reaction rate coefficients in simulated push-pull tests in a heterogeneous, unconfined aquifer with a partially penetrating well. In all cases the simplified method provides accurate estimates of reaction rate coefficients; estimation errors ranged from 0.1 to 8.9%, with most errors less than 5%.
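
The regression step of the simplified analysis can be sketched as below. The data are synthetic and the time values are illustrative; the key point, as in the paper, is that normalizing the reactant concentration by the conservative tracer removes dilution, so the ratio decays as exp(-kt) and the slope of its logarithm against elapsed time gives -k.

```python
import numpy as np

rng = np.random.default_rng(2)
k_true = 0.05                       # assumed first-order rate, per hour

# Elapsed time since injection (h) for samples taken during extraction
t = np.linspace(2.0, 48.0, 20)

# Synthetic breakthrough data: reactant/tracer ratio (tracer accounts for
# dilution and dispersion), decaying as exp(-k t) plus measurement noise
ratio = np.exp(-k_true * t) * (1 + 0.02 * rng.standard_normal(t.size))

# Fit a regression line to ln(ratio) versus t; the slope estimates -k
slope, intercept = np.polyfit(t, np.log(ratio), 1)
print(f"k = {-slope:.4f} per hour")
```

Note that porosity, dispersivity, and hydraulic conductivity never enter the computation, which is what makes the simplified method attractive in practice.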

The Finite Element Method: Its Basis and Fundamentals offers a complete introduction to the basis of the finite element method, covering fundamental theory and worked examples in the detail required for readers to apply the knowledge to their own engineering problems and understand more advanced applications. This edition sees a significant rearrangement of the book's content to enable clearer development of the finite element method, with major new chapters and sections added to cover: Weak forms Variational forms Multi-dimensional field prob…

A simple and valid in-situ measurement method for the effective diffusion coefficient of radon and thoron in soil and other porous materials was designed. Numerical investigation of radon and thoron transport in the upper layers of soil revealed that the thoron flux density from the earth surface does not depend on soil-gas advective velocity and varies only with changes in the diffusion coefficient. This result shows the advantage of using thoron rather than radon in the suggested method. The new method was compared with existing ones developed previously. The method could be helpful for solving problems of radon mass transport in porous media and gaseous exchange between soil and atmosphere.

We develop a fast collocation scheme for a variable-coefficient nonlocal diffusion model, for which a numerical discretization would yield a dense stiffness matrix. The development of the fast method is achieved by carefully handling the variable coefficients appearing inside the singular integral operator and exploiting the structure of the dense stiffness matrix. The resulting fast method reduces the computational work from O(N³) required by a commonly used direct solver to O(N log N) per iteration and the memory requirement from O(N²) to O(N). Furthermore, the fast method reduces the computational work of assembling the stiffness matrix from O(N²) to O(N). Numerical results are presented to show the utility of the fast method.
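
The structural trick behind such O(N log N) methods can be illustrated in isolation. Nonlocal-operator stiffness matrices on uniform grids are typically Toeplitz (up to the variable-coefficient handling), and a Toeplitz matrix-vector product can be done via circulant embedding and the FFT; the kernel below is a made-up example, not the paper's operator.

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply a Toeplitz matrix (first column c, first row r) by x in
    O(N log N) by embedding it in a 2N x 2N circulant and using the FFT."""
    n = len(x)
    # First column of the circulant embedding; the middle entry is arbitrary
    col = np.concatenate([c, [0.0], r[:0:-1]])
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real

# Check against the dense O(N^2) product
n = 256
c = 1.0 / (1.0 + np.arange(n))       # e.g. a discretized decaying kernel
r = c.copy()                          # symmetric Toeplitz for this example
A = np.array([[c[i - j] if i >= j else r[j - i] for j in range(n)]
              for i in range(n)])
x = np.random.default_rng(3).standard_normal(n)
print(np.allclose(A @ x, toeplitz_matvec(c, r, x)))
```

Plugging such a matvec into an iterative solver gives the quoted O(N log N) cost per iteration with O(N) storage, since only the first column and row of the matrix are ever kept.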

Here, we introduce a simple finite element method for solving first order hyperbolic equations with easy implementation and analysis. Our new method, with a symmetric, positive definite system, is designed to use discontinuous approximations on finite element partitions consisting of polygons/polyhedra of arbitrary shape. Error estimates are established. Extensive numerical examples are tested that demonstrate the robustness and flexibility of the method.

Presented are the most common element methods used for analysis in engineering. The methods are discussed in an overall and general manner so that engineers and scientists, who are increasingly called upon to use element methods to support and check their analyses and/or designs, can appreciate the essential …

A spectral element approximation of acoustic propagation problems combined with a new mapping method on irregular domains is proposed. Following this method, the Gauss–Lobatto–Chebyshev nodes in the standard space are applied to the spectral element method (SEM). The nodes in the physical space are ...

We generalize the analysis of classical multigrid and two-level overlapping Schwarz methods for 2nd order elliptic boundary value problems to problems with large discontinuities in the coefficients that are not resolved by the coarse grids or the subdomain partition. The theoretical results provide a recipe for designing hierarchies of standard piecewise linear coarse spaces such that the multigrid convergence rate and the condition number of the Schwarz preconditioned system do not depend on the coefficient variation or on any mesh parameters. One assumption we have to make is that the coarse grids are sufficiently fine in the vicinity of cross points or where regions with large diffusion coefficients are separated by a narrow region where the coefficient is small. We do not need to align them with possible discontinuities in the coefficients. The proofs make use of novel stable splittings based on weighted quasi-interpolants and weighted Poincaré-type inequalities. Finally, numerical experiments are included that illustrate the sharpness of the theoretical bounds and the necessity of the technical assumptions.

A distributed preconditioned conjugate gradient method for finite element analysis has been developed and implemented on a parallel SIMD Quadrics computer. The main characteristic of the method is that it does not require any actual assembling of all element equations in a global system. The physical domain of the problem is partitioned in cells of n_p finite elements and each cell element is assigned to a different node of an n_p-processor machine. Element stiffness matrices are stored in the data memory of the assigned processing node and the solution process is completely executed in parallel at element level. Inter-element and therefore inter-processor communications are required once per iteration to perform local sums of vector quantities between neighbouring elements. A prototype implementation has been tested on an 8-node Quadrics machine in a simple 2D benchmark problem
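
The assembly-free idea can be sketched serially. This is a minimal 1D stand-in (plain CG, linear elements for -u'' = f with homogeneous Dirichlet data), not the paper's parallel 2D implementation: the stiffness matrix is never formed; each matvec loops over elements, applies the local 2x2 stiffness, and accumulates local sums, which is exactly the step that becomes inter-processor communication on the SIMD machine.

```python
import numpy as np

ne = 64                    # number of elements on (0, 1)
h = 1.0 / ne
Ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])   # element stiffness

def matvec(u_int):
    """Apply the assembled stiffness to interior dofs without assembling it."""
    u = np.zeros(ne + 1)
    u[1:-1] = u_int
    y = np.zeros(ne + 1)
    for e in range(ne):                # element-level products ...
        y[e:e + 2] += Ke @ u[e:e + 2]  # ... plus local sums between neighbours
    return y[1:-1]

f = np.full(ne - 1, h)     # load vector for f(x) = 1 (hat-function integrals)

# Plain conjugate gradients driven only by the element-level matvec
x = np.zeros(ne - 1)
r = f - matvec(x)
p = r.copy()
rs = r @ r
for _ in range(200):
    Ap = matvec(p)
    a = rs / (p @ Ap)
    x += a * p
    r -= a * Ap
    rs_new = r @ r
    if np.sqrt(rs_new) < 1e-12:
        break
    p = r + (rs_new / rs) * p
    rs = rs_new

# -u'' = 1 with u(0) = u(1) = 0 has u(x) = x(1-x)/2; check the midpoint node
print(abs(x[ne // 2 - 1] - 0.125))
```

A Jacobi preconditioner fits the same pattern, since the global diagonal is just the element-wise sum of local diagonals.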

Spent fuels are dissolved in nitric acid, the obtained dissolution liquid is oxidized by electrolysis, and the nitrates of the transuranium elements are precipitated from the dissolution solution together with the uranium nitrates and recovered. Namely, the transuranium elements are oxidized by an oxidizing catalyst to a valence at which their nitrates can be precipitated, and the solution is cooled to precipitate the transuranium nitrates together with the uranium nitrates; accordingly, it is not necessary to use a solvent of the kind employed so far for recovering transuranium elements. Since no solvent waste is generated, a recovery method that takes the environment into consideration can be provided. Further, the uranium and transuranium nitrates precipitated and recovered together are dissolved in nitric acid again and cooled, and only the uranium compounds are precipitated selectively and recovered by filtration. The amount of wastes can be reduced, thereby enabling mitigated control for processing. (N.H.)

In this article, we study a Cauchy problem for an elliptic equation with variable coefficients. It is well-known that such a problem is severely ill-posed; i.e., the solution does not depend continuously on the Cauchy data. We propose a modified quasi-boundary value regularization method to solve it. Convergence estimates are established under two a priori assumptions on the exact solution. A numerical example is given to illustrate our proposed method.

The space-time Conservation Element and Solution Element (CESE) method for solving conservation laws is examined for its development motivation and design requirements. The characteristics of the resulting scheme are discussed. The discretization of the Euler equations is presented to show readers how to construct a scheme based on the CESE method. The differences and similarities between the CESE method and other traditional methods are discussed. The strengths and weaknesses of the method are also addressed.

Reflector elements made from beryllium metal are widely used as neutron reflectors to increase neutron flux in test reactors. When beryllium reflector elements are irradiated by neutrons, bending of the reflector elements caused by swelling occurs, and the beryllium reflector elements must be replaced every several years. In this report, a literature search and investigation of non-destructive inspection of beryllium, and preliminary inspection experiments to establish a post-irradiation examination method for research on the characteristics of beryllium metal under neutron irradiation, are reported. (author)

The HTTR temperature coefficients required for core dynamics calculations had been obtained from HTTR core calculations by a diffusion code, with corrections based on core calculations by the Monte Carlo code MVP. This calculation method for the temperature coefficients was considered to have some issues to be improved. The calculation method was therefore improved so that the temperature coefficients could be obtained without corrections by the Monte Carlo code. Specifically, from the point of view of the neutron spectrum calculated by lattice calculations, the lattice model that had been used for the calculation of the temperature coefficients was revised. The HTTR core calculations were performed by the diffusion code with group constants generated by lattice calculations with the improved lattice model. The core calculations and the lattice calculations were performed by the SRAC code system. The HTTR core dynamics calculation was performed with the temperature coefficients obtained from the core calculation results. In consequence, the core dynamics calculation result showed good agreement with the experimental data, and valid temperature coefficients could be calculated by the diffusion code alone, without corrections by the Monte Carlo code. (author)

A method was developed and used to determine radon diffusion coefficients in compacted soils by transient-diffusion measurements. A relative standard deviation of 12% was observed in repeated measurements with a dry soil by the transient-diffusion method, and a 40% uncertainty was determined for moistures exceeding 50% of saturation. Excellent agreement was also obtained between values of the diffusion coefficient for radon in air, as measured by the transient-diffusion method, and those in the published literature. Good agreement was also obtained with diffusion coefficients measured by a steady-state method on the same soils. The agreement was best at low moistures, averaging less than ten percent difference, but differences of up to a factor of two were observed at high moistures. The comparison of the transient-diffusion and steady-state methods at low moistures provides an excellent verification of the theoretical validity and technical accuracy of these approaches, which are based on completely independent experimental conditions, measurement methods and mathematical interpretations.

A flight test method is described from which propulsive efficiency as well as parasite and induced drag coefficients can be directly determined using relatively simple instrumentation and analysis techniques. The method uses information contained in the transient response in airspeed for a small power change in level flight in addition to the usual measurement of power required for level flight. Measurements of pitch angle and longitudinal and normal acceleration are eliminated. The theoretical basis for the method, the analytical techniques used, and the results of application of the method to flight test data are presented.
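
The power-required part of such an analysis can be sketched with synthetic data. Under the usual level-flight drag polar, power required follows P(V) = A·V³ + B/V, where A encodes the parasite drag coefficient and B the induced-drag term; fitting A and B from (V, P) measurements is then linear least squares. All numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical "true" model parameters: P(V) = A V^3 + B / V
A_true, B_true = 0.9, 8.0e5

V = np.linspace(30.0, 80.0, 12)               # level-flight airspeeds, m/s
P = A_true * V**3 + B_true / V                # power required, W
P *= 1 + 0.01 * rng.standard_normal(V.size)   # 1% measurement scatter

# The model is linear in A and B, so solve by linear least squares
Mdes = np.column_stack([V**3, 1.0 / V])
(A_fit, B_fit), *_ = np.linalg.lstsq(Mdes, P, rcond=None)
print(A_fit, B_fit)
```

With A and B in hand, the parasite and induced drag coefficients follow from the aircraft's reference area, span, and air density (e.g. C_D0 = 2A/(ρS) divided by propulsive efficiency, which the transient-response measurement in the paper supplies separately).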

We propose a novel diagrammatic method for computing transport coefficients in relativistic quantum field theory. Our method is based on a reformulation and extension of the diagrammatic method by Eliashberg, given in the imaginary-time formalism, to relativistic quantum field theory in the real-time formalism, in which the cumbersome analytical continuation problem can be avoided. The transport coefficients are obtained from a two-point function via the Kubo formula. It is known that naive perturbation theory breaks down owing to a so-called pinch singularity, and hence a resummation is required for getting a finite and sensible result. As a novel resummation method, we first decompose the two-point function into a singular part and a regular part, and then reconstruct the diagrams. We find that a self-consistent equation for the two-point function has the same structure as the linearized Boltzmann equation. It is known that the two-point function at the leading order is equivalent to the linearized Boltzmann equation. We find the higher order corrections are nicely summarized as a renormalization of the vertex function, spectral function, and collision term. We also discuss the critical behavior of the transport coefficients near a phase transition, applying our method. (author)

Hydrothermal zircon grains have trace element characteristics such as low Th/U, high U, and high rare earth element (REE) concentrations that distinguish them from magmatic, metamorphic, and altered zircon grains, but it is unclear whether these characteristics result from distinctive fluid compositions or zircon/fluid fractionation effects. New experiments aimed at measuring zircon/fluid trace element partition coefficients Dz/f involved recrystallizing natural Mud Tank zircon with low trace element concentrations in the presence of H₂O, 1 m NaOH, or 1 m HCl doped with ∼1000 ppm of rare earth elements (REE), Y, U and Th and ∼500 ppm of Li, B, P, Nb, Ba, Hf, and Ta. Experiments were run for 168 h at 1.5 GPa, 800-1000 °C, and fO₂ = NNO in a piston cylinder apparatus using the double capsule method. LA-ICP-MS analysis shows that run product zircon crystals have much higher trace element concentrations than in the Mud Tank zircon starting material. Dz/f values were estimated from run product zircon analyses and bulk composition using mass balance. Most elements behave incompatibly, with median Dz/f being highest for Hf (8) and lowest for B (0.02). Addition of NaOH or HCl had little influence on Dz/f values. Dz/f for LREE are anomalously high, likely due to contamination of run product zircon with quenched solutes enriched in incompatible elements, so DLREE were estimated using lattice strain theory. Brice curves for +3 ions yield zircon/fluid DLu/DLa of ∼800-5000. A Brice curve fit to +4 ions yielded DCe⁴⁺ values. Estimated concentrations of Ce³⁺ and Ce⁴⁺ show that the average Ce⁴⁺/Ce³⁺ in zircon (27) is much higher than in the fluid (0.02). Th and U show little fractionation, with median DTh/DU = 0.7, indicating that the low Th/U in natural hydrothermal zircon is inherited from the fluid. Natural fluid compositions estimated from measured Dz/f and published compositions of hydrothermal zircon grains from aplite and eclogite reflect the mineralogy of the host rock, e

This document contains working annotations on the Virtual Element Method (VEM) for the approximate solution of diffusion problems with variable coefficients. To read this document you are assumed to have familiarity with concepts from the numerical discretization of Partial Differential Equations (PDEs) and, in particular, the Finite Element Method (FEM). This document is not an introduction to the FEM, for which many textbooks (also free on the internet) are available. Eventually, this document is intended to evolve into a tutorial introduction to the VEM (but this is really a long-term goal).

This paper concerns the comparison of the performance of the Spectral Element Method (SEM) and the Finite Element Method (FEM) for a magnetostatic problem. The convergence of the vector magnetic potential, the magnetic flux density, and the total stored energy in the system is compared with the …

A good dynamics model is essential and critical for the successful design of the navigation and control system of an underwater vehicle. However, the hydrodynamic forces, namely the inertial added-mass terms and the drag coefficients, are difficult to determine. In this paper, a new experimental method has been used to find the hydrodynamic forces for the ROV II, a remotely operated underwater vehicle. The proposed method is based on the classical free decay test, but with the spring oscillation replaced by a pendulum motion. The experimental results determined from the free decay test of a scaled model compared well with the simulation results obtained from a well-established computational fluid dynamics (CFD) program. Thus, the proposed approach can be used to find the added mass and drag coefficients for other underwater vehicles.

A simple method is proposed for determining the mode coupling coefficient D in graded index multimode optical fibers. It only requires observation of the output modal power distribution P(m, z) for one fiber length z as the Gaussian launching modal power distribution changes, with the Gaussian input light distribution centered along the graded index optical fiber axis (θ0 = 0) without radial offset (r0 = 0). We previously proposed a similar method for calculating the coupling coefficient D in step-index multimode optical fibers, where the output angular power distribution P(θ, z) must be known for one fiber length z, with the Gaussian input light distribution launched centrally along the step-index optical fiber axis (θ0 = 0).

Flow coefficients applicable to area-weighted pitot rake mass flow rate measurements are presented for fully developed, turbulent flow in an annulus. A turbulent velocity profile is generated semiempirically for a given annulus hub-to-tip radius ratio and integrated numerically to determine the ideal mass flow rate. The calculated velocities at each probe location are then summed, and the flow rate as indicated by the rake is obtained. The flow coefficient to be used with the particular rake geometry is subsequently obtained by dividing the ideal flow rate by the rake-indicated flow rate. Flow coefficients ranged from 0.903 for one probe placed at a radius dividing two equal areas to 0.984 for a 10-probe area-weighted rake. Flow coefficients were not a strong function of annulus hub-to-tip radius ratio for rakes with three or more probes. The semiempirical method used to generate the turbulent velocity profiles is described in detail.
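
The computation described above can be sketched numerically. The profile below is a generic 1/7-power law from each wall peaking midway across the annulus (the actual peak location in the paper comes from the semiempirical profile, so this is an assumption), but it reproduces the qualitative behavior: a single equal-area probe over-reads, and the coefficient approaches one as probes are added.

```python
import numpy as np

# Annulus geometry (hub radius Rh, tip radius Rt) and model velocity profile
Rh, Rt = 0.5, 1.0
rm = 0.5 * (Rh + Rt)          # assumed peak location of the turbulent profile

def u(r):
    y = 1.0 - np.abs(r - rm) / (rm - Rh)          # 1 at the peak, 0 at walls
    return np.clip(y, 0.0, 1.0) ** (1.0 / 7.0)    # 1/7-power-law shape

# "Ideal" flow rate: trapezoidal integration of u over the annulus area
r = np.linspace(Rh, Rt, 20001)
g = u(r) * 2.0 * np.pi * r
q_ideal = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(r))

def q_rake(nprobes):
    """Rake-indicated flow rate: one probe at the equal-area centroid of each
    of nprobes equal-area rings, readings weighted by ring area."""
    edges = np.sqrt(np.linspace(Rh**2, Rt**2, nprobes + 1))
    centers = np.sqrt(0.5 * (edges[:-1]**2 + edges[1:]**2))
    dA = np.pi * (edges[1:]**2 - edges[:-1]**2)
    return np.sum(u(centers) * dA)

for n in (1, 3, 10):
    print(n, q_ideal / q_rake(n))   # flow coefficient for an n-probe rake
```

With this assumed profile, the single-probe coefficient lands near 0.9, in the same range as the 0.903 quoted for one probe at the radius dividing two equal areas.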

The mathematical apparatus and the experimental installation for the rapid determination of the radon diffusion coefficient in various materials are developed. The single test lasts no longer than 18 h and allows testing numerous materials, such as gaseous and liquid media, as well as soil, concrete and radon-proof membranes, in which the diffusion coefficient of radon may vary in an extremely wide range, from 1×10⁻¹² to 5×10⁻⁵ m²/s. The uncertainty of the radon diffusion coefficient estimation depends on the permeability of the sample and varies from about 5% (for the most permeable materials) to 40% (for less permeable materials, such as radon-proof membranes). - Highlights: • The new method and installation for determination of radon diffusion coefficient D are developed. • The measured D-values vary in an extremely wide range, from 5×10⁻⁵ to 1×10⁻¹² m²/s. • The materials include water, air, soil, building materials and radon-proof membranes. • The duration of the single test does not exceed 18 hours. • The measurement uncertainty varies from 5% (in permeable materials) to 40% (in radon gas barriers)

A neural network was trained with data for the frequency response function between in-core neutron noise and core-exit thermocouple noise in a pressurized water reactor, with the moderator temperature coefficient (MTC) as target. The trained network was subsequently used to predict the MTC at other points in the same fuel cycle. Results support use of the method for operating pressurized water reactors provided noise data can be accumulated for several fuel cycles to provide a training base

The present standard aims at defining a method to control the scrubbing coefficient of radioactive iodine trapping systems used in nuclear ventilation installations. It applies to installations where the trapping efficiency of radioactive iodine has to be known, tested and compared to a reference value generally included in the safety reports. It applies to installations where the absolute pressure of the air in the ventilation systems is above 1.4×10⁵ Pa (1.4 bar)

The main purpose of this paper is to propose a new method for designing a Macpherson suspension, based on Sobol indices in terms of Pearson correlation, which determines the importance of each member to the behaviour of the vehicle suspension. The formulation of the dynamic analysis of the Macpherson suspension system is developed using the suspension members as modified links in order to achieve the desired kinematic behaviour. The mechanical system is replaced with equivalent constrained links, and kinematic laws are then utilised to obtain a new modified geometry of the Macpherson suspension. The equivalent mechanism of the Macpherson suspension increased the speed of analysis and reduced its complexity. The ADAMS/CAR software is utilised to simulate a full vehicle, a Renault Logan car, in order to analyse the accuracy of the modified geometry model. An experimental 4-poster test rig is used to validate both the ADAMS/CAR simulation and the analytical geometry model. The Pearson correlation coefficient is applied to analyse the sensitivity of each suspension member with respect to vehicle objective functions such as sprung-mass acceleration. In addition, the estimation of the Pearson correlation coefficient between variables is analysed in this method. It is understood that the Pearson correlation coefficient is an efficient method for analysing the vehicle suspension, which leads to a better design of the Macpherson suspension system.
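
The sensitivity-ranking step can be sketched generically. The design variables, their ranges, and the response surrogate below are all invented (the paper samples real suspension members and objectives from simulation); the mechanics are simply: sample the design variables, evaluate the objective, and rank members by the magnitude of their Pearson correlation with it.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500

# Hypothetical suspension design variables sampled over assumed ranges
spring_k = rng.uniform(20e3, 60e3, n)      # spring rate, N/m
damper_c = rng.uniform(1e3, 4e3, n)        # damping rate, N s/m
arm_len = rng.uniform(0.30, 0.40, n)       # control-arm length, m

# Hypothetical objective: a sprung-mass acceleration surrogate in which the
# damper dominates, the spring matters less, and the arm length barely enters
accel = 1e-4 * spring_k - 2e-3 * damper_c + 0.5 * rng.standard_normal(n)

X = np.column_stack([spring_k, damper_c, arm_len])
names = ["spring_k", "damper_c", "arm_len"]
r = [np.corrcoef(X[:, j], accel)[0, 1] for j in range(X.shape[1])]
for name, ri in sorted(zip(names, r), key=lambda p: -abs(p[1])):
    print(f"{name:10s}  r = {ri:+.2f}")
```

Sorting by |r| gives the importance ranking used to decide which members deserve design attention; the sign indicates whether increasing the member's parameter raises or lowers the objective.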

Diffusion is a crucial mechanism that regulates the migration of radioactive nuclides. In this study, an innovative numerical method was developed to simultaneously calculate the diffusion coefficient of both parent and, afterward, series daughter nuclides in a sequentially reactive through-diffusion model. Two constructed scenarios, a serial reaction (RN₁ → RN₂ → RN₃) and a parallel reaction (RN₁ → RN₂A + RN₂B), were proposed and calculated for verification. First, the accuracy of the proposed three-member reaction equations was validated using several default numerical experiments. Second, by applying the validated numerical experimental concentration variation data, the as-determined diffusion coefficient of the product nuclide was observed to be identical to the default data. The results demonstrate the validity of the proposed method. The significance of the proposed numerical method will be particularly powerful in determining the diffusion coefficients of systems with extremely thin specimens, long periods of diffusion time, and parent nuclides with fast decay constants.

The radon diffusion coefficient and the free radon production rate are important parameters for describing radon migration in fragmented uranium ore. To determine the two parameters, the pure-diffusion migration equation for radon was first established and its analytic solution, with the two parameters to be determined, was derived. Then a self-made experimental column was used to simulate pure diffusion of radon; the improved scintillation cell method was used to measure pore radon concentrations at different depths of the column loaded with fragmented uranium ore; and a nonlinear least squares algorithm was used to inversely determine the radon diffusion coefficient and the free radon production rate. Finally, the solution with the two inversely determined parameters was used to predict pore radon concentrations at selected depths of the column, and the predictions were compared with the measurements. The results show that the predicted and measured values are in good agreement and that the numerical inverse method is applicable to determining the radon diffusion coefficient and the free radon production rate for fragmented uranium ore. - Highlights: • An inverse method for determining two radon transport parameters is proposed. • A self-made experimental apparatus is used to simulate the radon diffusion process. • The sampling volume and positions for measuring radon concentration are optimized. • The inverse results for an experimental sample are verified.
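The inversion step above (fit an analytic concentration profile to measured depth data by least squares, then read off the transport parameter) can be sketched as follows. The profile form C(z) = C∞(1 − e^(−z/L)) with diffusion length L = √(D/λ) is an illustrative stand-in for the paper's actual analytic solution, and the coarse grid search stands in for their nonlinear least squares algorithm; all numbers are synthetic.

```python
import numpy as np

# Illustrative stand-in for the paper's analytic solution: a steady-state
# pore radon profile C(z) = C_inf * (1 - exp(-z / L)), where the diffusion
# length L = sqrt(D / lambda) ties the fitted curve back to the diffusion
# coefficient D. Model form and numbers are assumptions, not the paper's.
LAM = 2.1e-6                      # Rn-222 decay constant, 1/s
def profile(z, c_inf, L):
    return c_inf * (1.0 - np.exp(-z / L))

# Synthetic "measurements" at several column depths, with 2% noise
rng = np.random.default_rng(1)
z = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])      # m
c_obs = profile(z, 8000.0, 0.5) * (1 + rng.normal(0, 0.02, z.size))

# Brute-force least squares over (c_inf, L); a real study would use a
# proper nonlinear solver, but a grid keeps the sketch dependency-free.
c_grid = np.linspace(6000, 10000, 81)
L_grid = np.linspace(0.2, 1.0, 81)
sse = np.array([[np.sum((profile(z, c, L) - c_obs) ** 2)
                 for L in L_grid] for c in c_grid])
i, j = np.unravel_index(sse.argmin(), sse.shape)
c_fit, L_fit = c_grid[i], L_grid[j]
D_fit = LAM * L_fit ** 2          # back out the diffusion coefficient
print(c_fit, L_fit, D_fit)
```

The fitted (c_fit, L_fit) land near the values used to generate the data, which is the same consistency check the abstract describes: predict concentrations with the inverted parameters and compare against measurements.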

The application of the finite element method to solve a realistic one- or two-energy-group, multiregion, three-dimensional static neutron diffusion problem is studied. Linear, quadratic, and cubic serendipity box-shape elements are used. The resulting sets of simultaneous algebraic equations with thousands of unknowns are solved by the conjugate gradient method without forming the large coefficient matrix explicitly, which avoids the complicated data management schemes needed to store such a matrix. Three finite element computer programs, FEM-LINEAR, FEM-QUADRATIC, and FEM-CUBIC, were developed using the linear, quadratic, and cubic box-shape elements, respectively. They are self-contained and use simple nodal labeling schemes, with no need for separate finite element mesh generating routines. The efficiency and accuracy of these programs are then compared among themselves and with other computer codes. The cubic element model is not recommended for practical use because it gives almost identical results to the quadratic model but requires considerably longer computation time. The linear model is less accurate than the quadratic model but requires much shorter computation time. For a large 3-D problem the linear model is preferred, since it gives acceptable accuracy; the quadratic model may be used if improved accuracy is desired.
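The key implementation idea above — conjugate gradient without ever assembling the coefficient matrix — only requires a routine that applies the operator to a vector. A minimal sketch on a 1-D finite-difference Poisson problem (a stand-in for the paper's 3-D neutron diffusion system, not its code):

```python
import numpy as np

def apply_A(u, h):
    """Matrix-free action of the 1-D finite-difference Laplacian with
    Dirichlet boundaries: (2*u_i - u_{i-1} - u_{i+1}) / h^2.
    The matrix itself is never stored."""
    Au = 2.0 * u.copy()
    Au[1:] -= u[:-1]
    Au[:-1] -= u[1:]
    return Au / h**2

def cg(apply_op, b, tol=1e-10, maxit=1000):
    """Plain conjugate gradient needing only matrix-vector products."""
    x = np.zeros_like(b)
    r = b - apply_op(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxit):
        Ap = apply_op(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

n = 99
h = 1.0 / (n + 1)
b = np.ones(n)                        # uniform source term
x = cg(lambda u: apply_A(u, h), b)

# Analytic solution of -u'' = 1, u(0) = u(1) = 0 is u = x(1 - x)/2
grid = np.linspace(h, 1 - h, n)
err = np.max(np.abs(x - grid * (1 - grid) / 2))
print(err)   # tiny: solver error only, the FD scheme is exact here
```

For the paper's problem the same structure applies: `apply_A` would loop over elements and accumulate their contributions on the fly, so only a few vectors of length equal to the number of unknowns need to be stored.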

Full Text Available The paper presents a method for the analysis of the nonlinear behaviour of reinforced concrete bent elements subjected to short-term static load. Considerations on the modelling of the deformation processes of a reinforced concrete element are presented. The method of structural effort analysis was developed using the finite difference method. The Dynamic Relaxation Method, which — after the introduction of critical damping — allows description of the static behaviour of a structural element, was used to solve the system of nonlinear equilibrium equations. To increase the effectiveness of the method in post-critical analysis, the arc-length parameter on the equilibrium path was introduced into the computational procedure. Keywords: reinforced concrete elements, physical nonlinearity, geometrical nonlinearity, dynamic relaxation method, arc-length method

Full Text Available In order to improve boundary mesh quality while maintaining the essential characteristics of discrete surfaces, a new approach combining optimization-based smoothing and topology optimization is developed. The smoothing objective function is modified to include two functions denoting boundary and interior quality, respectively, and a weight coefficient controlling boundary quality. The existing smoothing algorithm can improve mesh quality only by repositioning vertices of the interior mesh; without destroying boundary conformity, bad elements with all their vertices on the boundary cannot be eliminated. Topology optimization is therefore employed, and those elements are converted into other element types whose quality can be improved by smoothing. Practical application shows that the worst elements can be eliminated and that, as the weight coefficient increases, the average quality of the boundary mesh also improves. Results obtained with the combined approach are compared with a common existing approach and clearly show that it performs better.

A finite element method is introduced for solving the neutron transport equations. The method falls into the category of Petrov-Galerkin methods, since the trial space differs from the test space. The close relationship between this method and the discrete ordinates method is discussed, and the two methods are compared for simple test problems.

The results of studies on 18 elements in samples of high elecampane, Middle Asian mint, field horsetail, mixed grass crop, and Turkestan dog rose fruits, collected at the two stationary sites A and B of the Bashkizylsaj area of the Chatkalsk biospheric reservation, are presented. The samples were studied using the neutron activation (n), γ-activation (γ), X-ray spectral (p), and X-ray fluorescence (x) physical methods. The root-mean-square errors of the results obtained by the different methods, and the differences in element accumulation depending on plant type, plant part, and vegetation place, are evaluated. The coefficients of biological accumulation of 15 elements in the 15 plants under study are determined on the basis of the data on element content in the plants and in corresponding soil samples.

Control volume finite element methods (CVFEM) bridge the gap between finite difference and finite element methods, using the advantages of both for the simulation of multi-physics problems in complex geometries. In Hydrothermal Analysis in Engineering Using Control Volume Finite Element Method, CVFEM is covered in detail and applied to key areas of thermal engineering. Examples, exercises, and extensive references show the use of the technique to model key engineering problems such as heat transfer in nanofluids (to enhance performance and compactness of energy systems).

We consider the multigrid solution of the linear equations arising from the discretization of elliptic second-order boundary value problems by mixed hybrid finite elements. Using the equivalence of mixed hybrid finite elements and non-conforming nodal finite elements, we construct a multigrid scheme for the corresponding non-conforming finite elements and, by this equivalence, for the mixed hybrid finite elements, following guidelines from Arbogast/Chen. For a rectangular triangulation of the computational domain, these non-conforming schemes are the so-called nodal finite elements. We explicitly construct prolongation and restriction operators for this type of non-conforming finite element. We discuss the use of plain multigrid and the multilevel-preconditioned CG method and compare their efficiency in numerical tests.

Full Text Available In this study, the effect of drying method on the permeability coefficient of oak wood (Quercus infectoria Oliv.) was studied. Freshly cut oak logs were prepared from Oureman, in the east of Kourdistan in Iran, and boards with a nominal thickness of 6 cm were cut. The boards were dried using two methods. In the first method, the boards were air dried for 45 days to a moisture content close to the FSP and then kiln dried using the T5-D1 schedule. In the second method, the boards were dried from the green condition to a final moisture content of 10% using the T5-D1 schedule. The permeability coefficient in the transverse and longitudinal directions was then measured separately in both the heartwood and sapwood regions. Results showed that the permeability of the kiln-dried oak boards, in both the transverse and longitudinal directions and in both the heartwood and sapwood regions, was greater than that of the boards dried by the combined method (air drying + kiln drying).

This study compares two methods for estimating static friction coefficients for skin. In the first method, referred to as the 'tilt method', a hand supporting a flat object is tilted until the object slides; the friction coefficient is estimated as the tangent of the angle of the object at the slip. The second method estimates the friction coefficient as the pull force required to begin moving a flat object over the surface of the hand, divided by the object's weight. Both methods were used to estimate friction coefficients for 12 subjects and three materials (cardboard, aluminium, rubber) against a flat hand and against fingertips. No differences in static friction coefficients were found between the two methods, except for rubber, where the friction coefficient was 11% greater for the tilt method. As in previous studies, the friction coefficients varied with contact force and contact area. Static friction coefficient data are needed for the analysis and design of objects that are grasped or manipulated with the hand. The tilt method described in this study can easily be used by ergonomics practitioners to estimate static friction coefficients in the field in a timely manner.
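The two estimators described above amount to one-line formulas. A minimal sketch, with illustrative numbers that are not from the study:

```python
import math

def mu_tilt(theta_deg):
    """Tilt method: the static friction coefficient is the tangent of the
    tilt angle at which the object starts to slide."""
    return math.tan(math.radians(theta_deg))

def mu_pull(pull_force_N, weight_N):
    """Pull method: horizontal force at incipient slip divided by weight."""
    return pull_force_N / weight_N

# Illustrative check: an object slipping at 35 degrees gives roughly the
# same coefficient as a 7 N pull needed to slide a 10 N object.
print(round(mu_tilt(35.0), 3))       # -> 0.7
print(round(mu_pull(7.0, 10.0), 3))  # -> 0.7
```

The equivalence follows from the force balance at incipient slip: on the tilted hand the driving force is W·sin θ against friction μ·W·cos θ, so slip begins when tan θ = μ, while in the pull test the driving force F is measured directly against μ·W.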

A gridless technique called smoothed particle hydrodynamics (SPH) has been coupled with the transient dynamics finite element code PRONTO. In this paper, a new weighted residual derivation for the SPH method is presented, and the methods used to embed SPH within PRONTO are outlined. Example SPH-PRONTO calculations are also presented. One major difficulty associated with the Lagrangian finite element method is modeling materials with no shear strength, for example gases, fluids, and explosive byproducts. Typically, these materials can be modeled for only a short time with a Lagrangian finite element code: large distortions cause tangling of the mesh, which eventually leads to numerical difficulties such as negative element area or "bow tie" elements. Remeshing allows the problem to continue for a short while, but the large distortions can prevent a complete analysis. SPH is a gridless Lagrangian technique. Requiring no mesh, SPH has the potential to model material fracture, large shear flows, and penetration. SPH computes the strain rate and the stress divergence based on the nearest neighbors of a particle, which are determined using an efficient particle-sorting technique. Embedding the SPH method within PRONTO allows part of the problem to be modeled with quadrilateral finite elements while other parts are modeled with the gridless SPH method. SPH elements are coupled to the quadrilateral elements through a contact-like algorithm.

In a finite element (FE) analysis of elastic solids several items are usually considered, namely the type and shape of the elements, the number of nodes per element, the node positions, the FE mesh, and the total number of degrees of freedom (dof), among others. In this paper a method to improve a given FE mesh used for a particular analysis is described. For the improvement criterion, different objective functions have been chosen (total potential energy and average quadratic error), and the number of nodes and dof's...

The aerosol extinction coefficient profile is an essential parameter for atmospheric radiation models, but it is difficult to obtain a precise full profile from the ground to the tropopause, especially near the ground, using backscattering lidar alone. A combined measurement of side-scattering, backscattering, and Raman-scattering lidar is proposed to retrieve the aerosol extinction coefficient profile from the surface to the tropopause, covering a dynamic range of 5 orders of magnitude. The side-scattering technique solves the dead-zone and overlap problems of traditional lidar in the near range, and Raman scattering provides the aerosol lidar ratio (extinction-to-backscatter ratio). The case studies in this paper show that the proposed method is reasonable and feasible.

This work is concerned with the magnetohydrodynamic (MHD) viscous flow due to a porous stretching sheet. The similarity solution of the problem is obtained using the finite element method. The physical quantities of interest, such as the fluid velocity and the skin friction coefficient, are obtained and discussed under the influence of the suction parameter and the Hartmann number. It is evident from the results that MHD can be used to control the boundary layer thickness. (author)

Error analysis is very important to experimental design. The error analysis of the determination of activity coefficients for a binary system via the isopiestic method shows that the error sources include not only the experimental errors of the analyzed molalities and the measured osmotic coefficients, but also the deviation of the regressed values from the experimental data when a regression function is used. It also shows that accurate chemical analysis of the molality of the test solution is important, and that it is preferable to keep the error of the measured osmotic coefficients constant in all isopiestic experiments, including those on very dilute solutions. The isopiestic experiments on dilute solutions are very important, and the lowest molality should be low enough that a theoretical method can be used below it; the experiments should therefore include test solutions below 0.1 mol·kg⁻¹, and for most electrolyte solutions it is preferable that the lowest molality be less than 0.05 mol·kg⁻¹. Moreover, the experimental molalities of the test solutions should first be arranged by keeping the interval between the logarithms of the molalities nearly constant; second, more high-molality points should be included, and we propose arranging the experimental molalities greater than 1 mol·kg⁻¹ according to an arithmetical progression of molality intervals. After the experiments, the error of the calculated activity coefficients of the solutes can be computed from the actual errors of the measured isopiestic molalities and the deviations of the regressed values from the experimental values, using the equations we obtained.

The Lattice Boltzmann Method (LBM) has been developed for application to thermal-fluid problems. Most such studies have considered a regular lattice or mesh, such as square and cubic grids. In order to apply the LBM to more practical cases, it is necessary to be able to handle complex or irregular problem domains. Some existing techniques are based on the finite element method, which is in general very powerful for solving two- or three-dimensional complex or irregular domains using the isoparametric element formulation, based on a mathematical mapping from a regular element shape in an imaginary domain to a more general, irregular element shape in the physical domain. In addition, the element-free technique is also quite useful for analyzing complex domain shapes, because there is no need to divide the domain into a compatible finite element mesh. This paper presents new finite element and element-free formulations for the lattice Boltzmann equation using the general weighted residual technique. A series of validation examples is then presented.

The purpose of this work is to develop a simple method to incorporate quantum effects into traditional finite-difference time-domain (FDTD) simulators, which could make it possible to co-simulate systems that include both quantum structures and traditional components. In this paper, the tunneling transmission coefficient is calculated by solving the time-domain Schrödinger equation with a developed FDTD technique, called the FDTD-S method. To validate the feasibility of the method, a simple resonant tunneling diode (RTD) structure model has been simulated using the proposed method. The good agreement between the numerical and analytical results confirms its accuracy. The effectiveness and accuracy of this approach make it a promising method for the analysis and design of hybrid systems that include quantum structures and traditional components.

The purpose of the present paper is to establish a new void reactivity coefficient (VRC) estimation method based on the gray-box modeling concept. The gray-box model consists of a point kinetics model as the first-principles model and a fitting model of moderator temperature kinetics. Applying Kalman filter and maximum likelihood estimation algorithms to the gray-box model, the VRC can be estimated. Verification by Monte Carlo simulation shows that the present method gives the best estimation results compared with conventional methods, in terms of both unbiasedness and smallest estimation scatter. Furthermore, the method is verified via real plant data analysis. The good performance of the present method is explained by the proper definition of the likelihood function, based on the explicit expression of observation and system noise in the gray-box model. (author)

A finite element formulation for a digital image correlation method is presented that directly determines the complete two-dimensional displacement field during the image correlation process on digital images. The entire image area of interest is discretized into finite elements that are involved in the common image correlation process by use of our algorithms. This image correlation method with finite element formulation has an advantage over subset-based image correlation methods because it satisfies the requirements of displacement continuity and derivative continuity among elements on the images. Numerical studies and a real experiment are used to verify the proposed formulation. Results show that image correlation with the finite element formulation is computationally efficient, accurate, and robust.

The invention relates to a method of constructing, at the site of use, a building wall (1) or a building floor (1) using a plurality of prefabricated concrete or lightweight concrete plate-shaped wall or floor elements (10), in particular cast elements, which have a front side and a rear side...

We establish basic stability estimates for a non-conforming h-p spectral element method which allows for simultaneous mesh refinement and variable polynomial degree. The spectral element functions are non-conforming if the boundary conditions are Dirichlet. For problems with mixed boundary conditions they are ...

We find that with a uniform mesh, the numerical schemes derived from the finite element method preserve a symplectic structure in the one-dimensional case and a multisymplectic structure in the two-dimensional case, respectively. These results are in fact the intrinsic reason why numerical experiments show that such finite element algorithms are accurate in practice.

Full Text Available Abstract. The most important practical questions of reliable estimation of finite element method errors are considered. Rules for defining the necessary calculation accuracy are developed. Methods and techniques of calculation are offered that allow the best final results to be obtained at an economical expenditure of computing work. Keywords: error, given accuracy, finite element method, Lagrangian and Hermitian elements.

Dropout is common in longitudinal clinical trials and when the probability of dropout depends on unobserved outcomes even after conditioning on available data, it is considered missing not at random and therefore nonignorable. To address this problem, mixture models can be used to account for the relationship between a longitudinal outcome and dropout. We propose a Natural Spline Varying-coefficient mixture model (NSV), which is a straightforward extension of the parametric Conditional Linear Model (CLM). We assume that the outcome follows a varying-coefficient model conditional on a continuous dropout distribution. Natural cubic B-splines are used to allow the regression coefficients to semiparametrically depend on dropout and inference is therefore more robust. Additionally, this method is computationally stable and relatively simple to implement. We conduct simulation studies to evaluate performance and compare methodologies in settings where the longitudinal trajectories are linear and dropout time is observed for all individuals. Performance is assessed under conditions where model assumptions are both met and violated. In addition, we compare the NSV to the CLM and a standard random-effects model using an HIV/AIDS clinical trial with probable nonignorable dropout. The simulation studies suggest that the NSV is an improvement over the CLM when dropout has a nonlinear dependence on the outcome. PMID:22101223

Full Text Available Using the simple method of index numbers, and synthesizing the originality of his excellent statistical thinking by definition, this article identifies and presents an inimitable shortcut from the index numbers method to the elasticity method. A final remark underlines the beauty and rigour of this scientific approach, specific to statistical thinking. This paper is a homage to Professor M. C. Demetrescu and to his remarkable PhD thesis, printed approximately half a century ago, one of the best statistical and economic books about population demand.

Intervention actions in case of radiological emergencies and exploratory radiological surveys require rapid methods for evaluating the range and extent of contamination. When a simple and homogeneous radionuclide composition characterizes the radioactive contamination, surrogate measurements can be used to reduce the costs implied by laboratory analyses and to speed up decision support. A dose-rate-measurement-based methodology can be used in conjunction with adequate dose coefficients to assess radionuclide inventories and to calculate dose projections for various intervention scenarios. The paper presents the results obtained for dose coefficients in some particular exposure geometries and the methodology used for deriving dose rate guidelines from the activity concentration upper levels specified as contamination limits. All calculations were performed using the commercial software MicroShield from Grove Software Inc. A test case was selected to meet the conditions of EPA Federal Guidance Report No. 12 (FGR12) concerning the evaluation of dose coefficients for external exposure from contaminated soil, and the obtained results were compared to the values given in that document. The geometries considered as test cases are: contaminated ground surface (infinite extended homogeneous surface contamination) and soil contaminated to a depth of 15 cm. As shown by the results, the values agree within a 50% relative difference for most of the cases. The greatest discrepancies were observed for the depth contamination simulation and for radionuclides with complicated gamma emission, due to the different approaches of MicroShield and FGR12. A case study is presented for validation of the methodology, in which both dose rate measurements and laboratory analyses were performed on an extended quasi-homogeneous NORM contamination. The dose rate estimations obtained by applying the dose coefficients to the radionuclide concentrations

We consider a few different preconditioners for the linear systems arising from the discretization of 3-D convection-diffusion problems with the finite volume element method. Their theoretical and computational convergence rates are compared and discussed.

A method for producing a stable ceramic composition having a surface with a low friction coefficient and high wear resistance at high operating temperatures. A first deposition of a thin film of a metal ion is made upon the surface of the ceramic composition and then a first ion implantation of at least a portion of the metal ion is made into the near surface region of the composition. The implantation mixes the metal ion and the ceramic composition to form a near surface composite. The near surface composite is then oxidized sufficiently at high oxidizing temperatures to form an oxide gradient layer in the surface of the ceramic composition.

The differential formalism and the Generalized Perturbation Theory (GPT) are applied to the sensitivity analysis of thermal-hydraulics problems related to pressurized water reactor cores. The equations describing the thermal-hydraulic behavior of these reactor cores, used in the COBRA-IV-I code, are conveniently written. The importance function related to the response of interest and the sensitivity coefficients of this response with respect to various selected parameters are obtained using Differential and Generalized Perturbation Theory. The comparison between the results obtained with these perturbative methods and those obtained directly with the model developed in the COBRA-IV-I code shows very good agreement. (author)

Full Text Available Heat transfer enhanced by passive methods is employed in various high-effectiveness heat exchangers. The objective of the work presented was to evaluate the state of research in the field of enhanced heat transfer in annular spaces through the use of helical turbulence-promoting elements as passive techniques. The review focused on the use of twisted tapes and wire coils in spiral form, the correlation equations obtained for the enhanced heat transfer coefficient and the friction coefficient, and the evaluations of this process carried out by different authors. The critical analysis of the published results allowed integrated assessments to be made and recommendations on the topics that could be analysed in the future in this area. Keywords: heat transfer enhancement, twisted tapes, helical springs, annular spaces, passive methods.

The paper presents a method for finding the coefficient of rolling friction using an evolvent pendulum. The pendulum consists of a fixed cylindrical body and a mobile body presenting a plane surface in contact with a cylindrical surface. The mobile body is placed on the fixed one in an equilibrium state; after a small impulse is applied, the mobile body oscillates. The motion of the body is video recorded, and the movie is then analyzed frame by frame to find the decrease with time of the angular amplitude of the pendulum. The equation of motion is established for the oscillations of the mobile body; this nonlinear differential equation is integrated by the Runge-Kutta method. Imposing on the model's solution the same damping as is observed experimentally, the value of the coefficient of rolling friction is obtained. The last part of the paper presents results for actual pairs of materials. The main advantage of the method is that the contact regions are small, of the order of a few millimeters, which substantially reduces the possibility of variation in the mechanical characteristics of the two surfaces.
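The identification principle above — integrate the nonlinear equation of motion by Runge-Kutta with a friction term and match the amplitude decay — can be sketched on a generic pendulum-like oscillator. The equation θ'' = −(g/L)·sin θ − c·sign(θ') with a constant-magnitude resisting term is an illustrative stand-in for the paper's evolvent pendulum model, and all parameter values are hypothetical.

```python
import math

# theta'' = -(g/L) * sin(theta) - c * sign(theta')
# The constant-magnitude term c models a rolling-friction-type torque;
# the values below are illustrative, not those of the paper.
G_OVER_L = 9.81 / 0.25
C_FRICTION = 0.8

def deriv(state):
    th, om = state
    sgn = (om > 0) - (om < 0)
    return (om, -G_OVER_L * math.sin(th) - C_FRICTION * sgn)

def rk4_step(state, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = deriv(state)
    k2 = deriv((state[0] + 0.5*dt*k1[0], state[1] + 0.5*dt*k1[1]))
    k3 = deriv((state[0] + 0.5*dt*k2[0], state[1] + 0.5*dt*k2[1]))
    k4 = deriv((state[0] + dt*k3[0], state[1] + dt*k3[1]))
    return (state[0] + dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            state[1] + dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

state = (0.3, 0.0)     # small initial amplitude, released from rest
dt = 1e-3
peaks = []
prev_om = 0.0
for _ in range(int(5.0 / dt)):
    state = rk4_step(state, dt)
    # record the amplitude at each turning point (velocity sign change)
    if prev_om * state[1] < 0:
        peaks.append(abs(state[0]))
    prev_om = state[1]
print(peaks[:4])   # amplitudes shrink from one swing to the next
```

In the identification step, `C_FRICTION` would be adjusted until the simulated peak sequence matches the amplitudes read from the video frames; the friction coefficient then follows from the fitted torque.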

Full Text Available Many types of techniques for process tomography have been proposed and developed during the past 20 years. This paper reviews the techniques and the current state of knowledge and experience on the subject, aimed at highlighting the problems associated with non-finite-element methods, such as ill-posedness and ill-conditioning, which relate to the accuracy and sensitivity of the measurements. Considerations for the choice of sensors and their applications are outlined, and descriptions of non-finite-element tomography systems are presented. Finite element method tomography systems from recent works, suitable for process control and measurement, are also presented.

The spectral/hp element method combines the geometric flexibility of the classical h-type finite element technique with the desirable numerical properties of spectral methods, employing high-degree piecewise polynomial basis functions on coarse finite-element-type meshes. The spatial approximation is based upon orthogonal polynomials, such as Legendre or Chebyshev polynomials, modified to accommodate a C⁰-continuous expansion. Computationally and theoretically, by increasing the polynomial order p, high-precision solutions and fast convergence can be obtained and, in particular, under certain regularity assumptions an exponential reduction in the approximation error between numerical and exact solutions can be achieved. This method has now been applied in many simulation studies of both fundamental and practical engineering flows. This paper briefly describes the formulation of the spectral/hp element method and provides an overview of its application to computational fluid dynamics, focusing in particular on its use in transitional flows and ocean engineering. Finally, some of the major challenges to be overcome in order to use the spectral/hp element method in more complex science and engineering applications are discussed.

In this talk I will provide a survey of recent research efforts on the application of quasi-Monte Carlo (QMC) methods to PDEs with random coefficients. Such PDE problems occur in the area of uncertainty quantification. In recent years many papers have been written on this topic using a variety of methods. QMC methods are relatively new to this application area. I will consider different models for the randomness (uniform versus lognormal) and contrast different QMC algorithms (single-level versus multilevel, first order versus higher order, deterministic versus randomized). I will give a summary of the QMC error analysis and proof techniques in a unified view, and provide a practical guide to the software for constructing QMC points tailored to the PDE problems.
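The advantage QMC offers over plain Monte Carlo — replacing pseudorandom points with a low-discrepancy point set so that sample averages converge faster — can be illustrated with a hand-rolled 2-D Halton sequence. This is a minimal sketch of the QMC idea on a toy integral, not the lattice or Sobol constructions discussed in the talk.

```python
import numpy as np

def halton(n, base):
    """Van der Corput sequence in the given base (one Halton dimension)."""
    seq = np.zeros(n)
    for i in range(n):
        f, x, k = 1.0, 0.0, i + 1
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        seq[i] = x
    return seq

n = 4096
# 2-D low-discrepancy point set: coprime bases 2 and 3
pts = np.column_stack([halton(n, 2), halton(n, 3)])

# Estimate the integral of f(x, y) = x * y over the unit square
# (exact value 1/4) by the QMC sample average ...
qmc_est = (pts[:, 0] * pts[:, 1]).mean()

# ... and by plain Monte Carlo with the same number of points
rng = np.random.default_rng(2)
mc = rng.random((n, 2))
mc_est = (mc[:, 0] * mc[:, 1]).mean()

print(abs(qmc_est - 0.25), abs(mc_est - 0.25))  # QMC error is typically smaller
```

For PDEs with random coefficients the integrand is far more expensive (each evaluation is a PDE solve over one realization of the coefficient field), but the structure is the same: a sample average over a deterministic, well-distributed point set in the parameter cube.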

Experimental tests were conducted to demonstrate the ability of the influence coefficient method to achieve precise balance of flexible rotors of virtually any design for operation through virtually any speed range. Various practical aspects of flexible-rotor balancing were investigated. Tests were made on a laboratory quality machine having a 122 cm (48 in.) long rotor weighing 50 kg (110 lb) and covering a speed range up to 18000 rpm. The balancing method was in every instance effective, practical, and economical and permitted safe rotor operation over the full speed range covering four rotor bending critical speeds. Improved correction weight removal methods for rotor balancing were investigated. Material removal from a rotating disk was demonstrated through application of a commercially available laser.

In this paper, a FEM-based (finite element method) mesh-free method with a probabilistic node generation technique is presented. In the proposed method, all computational procedures, from the mesh generation to the solution of a system of equations, can be performed fluently in parallel in terms of nodes. A local finite element mesh is generated robustly around each node, even for harsh boundary shapes such as cracks. The algorithm and the data structure of the finite element calculation are based on nodes, and parallel computing is realized by dividing the system of equations by the rows of the global coefficient matrix. In addition, the node-based finite element method is accompanied by a probabilistic node generation technique, which generates well-distributed points to serve as nodes of the finite element mesh. Furthermore, the probabilistic node generation technique can be performed in parallel environments. As a numerical example of the proposed method, we perform a compressible flow simulation containing strong shocks. Numerical simulations with frequent mesh refinement, which are required for this kind of analysis, can effectively be performed on parallel processors by using the proposed method. (authors)

The Applied Element Method (AEM) is a displacement-based method of structural analysis. Some of its features are similar to those of the Finite Element Method (FEM). In AEM, the structure is analysed by dividing it into several elements, similar to FEM. But in AEM, elements are connected by springs instead of nodes as in the case of FEM. In this paper, the background to AEM is discussed and the necessary equations are derived. To illustrate the application of AEM, it has been used to analyse a plain concrete beam with fixed supports. The analysis is limited to 2-dimensional structures. It was found that the number of springs has little influence on the results. AEM could predict deflections and reactions with a reasonable degree of accuracy.

We describe in this paper the fundamentals of the Linear Finite Element Method (LFEM) applied to one-speed diffusion problems in slab geometry. We present the mathematical formulation to solve eigenvalue and fixed source problems. First, we discretize the computational domain using a finite set of elements. At this point, we obtain the spatial balance equations for the zeroth-order and first-order spatial moments inside each element. Then, we introduce the linear auxiliary equations to approximate the neutron flux and current inside each element and construct a numerical scheme to obtain the solution. We offer numerical results for typical fixed source model problems to illustrate the method's accuracy for coarse-mesh calculations in homogeneous and heterogeneous domains. We also compare the accuracy and computational performance of the LFEM formulation with the conventional Finite Difference Method (FDM). (author)

The rainflow method is used for counting fatigue cycles from a stress response time history, where the fatigue cycles are stress-reversals. The rainflow method allows the application of Palmgren-Miner's rule in order to assess the fatigue life of a structure subject to complex loading. The fatigue damage may also be calculated from a stress response power spectral density (PSD) using the semi-empirical Dirlik, Single Moment, Zhao-Baker and other spectral methods. These methods effectively assume that the PSD has a corresponding time history which is stationary with a normal distribution. This paper shows how the probability density function for rainflow stress cycles can be extracted from each of the spectral methods. This extraction allows for the application of the MIL-HDBK-5J fatigue coefficients in the cumulative damage summation. A numerical example is given in this paper for the stress response of a beam undergoing random base excitation, where the excitation is applied separately by a time history and by its corresponding PSD. The fatigue calculation is performed in the time domain, as well as in the frequency domain via the modified spectral methods. The result comparison shows that the modified spectral methods give comparable results to the time domain rainflow counting method.
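The Palmgren-Miner summation underlying both the time-domain and spectral routes above can be sketched as follows; the Basquin-type S-N constants and the rainflow cycle counts are hypothetical illustrative values, not MIL-HDBK-5J coefficients.

```python
import numpy as np

# Hypothetical Basquin-type S-N curve: N(S) = C * S**(-m) cycles to failure
C, m = 1.0e12, 3.0

def miner_damage(amplitudes, counts):
    """Palmgren-Miner cumulative damage: D = sum_i n_i / N(S_i)."""
    S = np.asarray(amplitudes, dtype=float)
    n = np.asarray(counts, dtype=float)
    N_fail = C * S**(-m)          # cycles to failure at each amplitude
    return float(np.sum(n / N_fail))

# Rainflow output as (stress amplitude [MPa], cycle count) pairs -- illustrative
D = miner_damage([100.0, 150.0, 200.0], [5000.0, 1000.0, 100.0])
print(D)  # failure is predicted when D reaches 1
```

In the spectral route, the counts would instead come from integrating the rainflow-cycle probability density extracted from the Dirlik or other PSD-based formulas.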

Earth's core contains approximately 10 percent light elements that are likely a combination of S, C, Si, and O, with Si possibly being the most abundant. Si dissolved into Fe liquids can have a large effect on the magnitude of the activity coefficient of siderophile elements (SE) in Fe liquids, and thus the partitioning behavior of those elements between core and mantle. The effect of Si can be small such as for Ni and Co, or large such as for Mo, Ge, Sb, As. The effect of Si on many siderophile elements is unknown yet could be an important, and as yet unquantified, influence on the core-mantle partitioning of SE. Here we report new experiments designed to quantify the effect of Si on the partitioning of P, Au, Pd, and many other SE between metal and silicate melt. The results will be applied to Earth, for which we have excellent constraints on the mantle siderophile element concentrations.

This paper presents an extension of the matrix element method to next-to-leading order in perturbation theory. To accomplish this we have developed a method to calculate next-to-leading order weights on an event-by-event basis. This allows for the definition of next-to-leading order likelihoods in exactly the same fashion as at leading order, thus extending the matrix element method to next-to-leading order. A welcome by-product of the method is the straightforward and efficient generation of...

Numerical experiments are presented on the finite element method of Pletzer-Dewar for matching data of an ordinary differential equation with regular singular points, using a model equation. Matching data play an important role in nonideal MHD stability analysis of a magnetically confined plasma. In the Pletzer-Dewar method, the Frobenius series for the 'big solution', the fundamental solution which is not square-integrable at the regular singular point, is prescribed. The experiments include studies of the convergence rate of the matching data obtained by the finite element method and of the effect on the results of truncating the Frobenius series at finitely many terms. The present study shows that the finite element method is an effective method for obtaining the matching data with high accuracy. (author)

The main objective of this work is to simulate electromagnetic fields using the Finite Element Method. Even in the easiest case of electrostatic and magnetostatic numerical simulation, some problems appear when the nodal finite element is used. It is difficult to model vector fields with scalar functions, mainly in non-homogeneous materials. With the aim of solving these problems, two types of techniques are tried: adaptive remeshing using nodal elements, and the edge finite element, which ensures the continuity of tangential components. Numerical analyses of simple electromagnetic problems with homogeneous and non-homogeneous materials are performed using, first, adaptive remeshing based on various error indicators and, second, the numerical solution of waveguides using edge finite elements. (author)

We consider conjugate gradient type methods for the solution of large sparse linear systems Ax = b with complex symmetric coefficient matrices A = A^T. Such linear systems arise in important applications, such as the numerical solution of the complex Helmholtz equation. Furthermore, most complex non-Hermitian linear systems which occur in practice are actually complex symmetric. We investigate conjugate gradient type iterations which are based on a variant of the nonsymmetric Lanczos algorithm for complex symmetric matrices. We propose a new approach with iterates defined by a quasi-minimal residual property. The resulting algorithm presents several advantages over the standard biconjugate gradient method. We also include some remarks on the obvious approach to general complex linear systems of solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.
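The "obvious approach" via equivalent real systems mentioned above can be written down directly. The sketch below uses a dense direct solve purely to exhibit the 2n × 2n real block structure; a Krylov method such as the proposed QMR variant would be used for large sparse systems in practice.

```python
import numpy as np

def solve_via_real_form(A, b):
    """Solve complex A x = b through the equivalent real system
    [[Re A, -Im A], [Im A, Re A]] [Re x; Im x] = [Re b; Im b]."""
    Ar, Ai = A.real, A.imag
    M = np.block([[Ar, -Ai], [Ai, Ar]])
    rhs = np.concatenate([b.real, b.imag])
    y = np.linalg.solve(M, rhs)
    n = len(b)
    return y[:n] + 1j * y[n:]

# Small complex symmetric test matrix (A = A.T but not Hermitian)
A = np.array([[2.0 + 1.0j, 1.0j], [1.0j, 3.0 - 1.0j]])
b = np.array([1.0 + 0.0j, 2.0j])
x = solve_via_real_form(A, b)
print(np.allclose(A @ x, b))  # True
```

Note that the real form doubles the dimension and, as the abstract hints, discards the complex symmetry that the specialized Lanczos-based iterations exploit.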

We consider an iterative method for solving a quadratic matrix equation with special coefficient matrices which arises in the quasi-birth-death problem. In this paper, we show that the elementwise minimal positive solvents of such quadratic matrix equations can be obtained using Newton's method. We also prove that the convergence rate of the Newton iteration is quadratic if the Fréchet derivative at the elementwise minimal positive solvent is nonsingular. However, if the Fréchet derivative is singular, the convergence rate is at least linear. Numerical experiments on the convergence rate are given. (This summarizes a paper which is to appear in the Honam Mathematical Journal.)
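A minimal sketch of the Newton iteration for a quadratic matrix equation Q(X) = AX² + BX + C = 0: each step solves a Sylvester equation obtained from the Fréchet derivative Q'(X)[H] = (AX + B)H + AHX. The diagonal test matrices below are illustrative, not a quasi-birth-death example, and A is assumed invertible.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def newton_quadratic(A, B, C, X0, tol=1e-12, maxit=50):
    """Newton iteration for Q(X) = A X^2 + B X + C = 0.
    Each step solves the Sylvester equation
    (X + A^{-1}B) H + H X = -A^{-1} Q(X) for the update H."""
    X = X0.copy()
    Ainv = np.linalg.inv(A)
    for _ in range(maxit):
        Q = A @ X @ X + B @ X + C
        if np.linalg.norm(Q) < tol:
            break
        H = solve_sylvester(X + Ainv @ B, X, -Ainv @ Q)
        X = X + H
    return X

# Scalar-like example: X^2 - 3X + 2I = 0 has solvents I and 2I;
# starting from X0 = 0, Newton converges to the minimal solvent I.
I = np.eye(2)
X = newton_quadratic(I, -3 * I, 2 * I, np.zeros((2, 2)))
print(np.allclose(X, I))  # True
```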

A new multigrid method based on high-order vector finite elements is proposed in this paper. Low-level discretizations in this method are obtained by using low-order vector finite elements on the same mesh. The Gauss-Seidel method is used as a smoother, and the linear equation on the lowest level is solved by the ICCG method. However, it is often found that the multigrid solutions do not converge to the ICCG solutions. An algorithm for eliminating the constant term using a null space of the coefficient matrix is also described. For a three-dimensional magnetostatic field analysis, the convergence time and number of iterations of this multigrid method are compared with those of the conventional ICCG method.

The complex finite element method (ZFEM) has been extended to perform sensitivity analysis for mechanical and structural systems undergoing creep deformation. ZFEM uses a complex finite element formulation to provide shape, material, and loading derivatives of the system response, providing insight into the essential factors which control the behavior of the system as a function of time. A complex variable-based quadrilateral user element (UEL) subroutine implementing the power law creep constitutive formulation was incorporated within the Abaqus commercial finite element software. The results of the complex finite element computations were verified by comparing them to the reference solution for the steady-state creep problem of a thick-walled cylinder in the power law creep range. A practical application of the ZFEM implementation to creep deformation analysis is the calculation of the skeletal point of a notched bar test from a single ZFEM run. In contrast, the standard finite element procedure requires multiple runs. The value of the skeletal point is that it identifies the location where the stress state is accurate, regardless of the certainty of the creep material properties. - Highlights: • A novel finite element sensitivity method (ZFEM) for creep was introduced. • ZFEM has the capability to calculate accurate partial derivatives. • ZFEM can be used for identification of the skeletal point of creep structures. • ZFEM can be easily implemented in a commercial software, e.g. Abaqus. • ZFEM results were shown to be in excellent agreement with analytical solutions.
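The complex-variable differentiation at the heart of ZFEM can be illustrated with the classical complex-step derivative, which obtains f'(x) from a single perturbed evaluation with no subtractive cancellation; the function and step size below are illustrative and not taken from the paper.

```python
def complex_step_derivative(f, x, h=1e-30):
    """Complex-step derivative: f'(x) ~ Im(f(x + i*h)) / h.
    Because no difference of nearby values is taken, h can be made
    tiny and the derivative is accurate to machine precision."""
    return f(complex(x, h)).imag / h

# Power-law-type response f(s) = s**3 (creep exponent n = 3, illustrative)
d = complex_step_derivative(lambda s: s**3, 2.0)
print(d)  # 12.0
```

In ZFEM this same idea is applied to the full finite element response, so one complex-perturbed run yields the sensitivity that would otherwise require multiple real-valued runs.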

Purpose: To enable easy elimination of claddings deposited on the surface of fuel elements. Method: An operator manipulates a pole from above a platform, engages the longitudinal flange of the cover with the opening at the upper end of a channel box, and starts up a suction pump. The suction of the pump is set such that water flows within the channel box at a greater flow rate than the operational flow rate in the channel box of the fuel element clusters during reactor operation. This enables crud deposited on the surface of individual fuel elements to be removed easily and rapidly without detaching the channel box. (Moriyama, K.)

Validated results are presented for the new 3D body-of-revolution finite element boundary integral code. A Fourier series expansion of the vector electric and magnetic fields is employed to reduce the dimensionality of the system, and the exact boundary condition is employed to terminate the finite element mesh. The mesh termination boundary is chosen such that it leads to convolutional boundary operators of low O(n) memory demand. Improvements of this code are discussed along with the proposed formulation for a full 3D implementation of the finite element boundary integral method in conjunction with a conjugate gradient fast Fourier transform (CGFFT) solution.

work for a scalar heterogeneous FMM algorithm, we develop a new FMM-based vortex method capable of simulating general flows including turbulence on heterogeneous architectures, which distributes the work between multi-core CPUs and GPUs to best utilize

This monograph presents numerical methods for solving transient wave equations (i.e. in the time domain). More precisely, it provides an overview of continuous and discontinuous finite element methods for these equations, including their implementation in physical models, an extensive description of 2D and 3D elements with different shapes, such as prisms or pyramids, an analysis of the accuracy of the methods, and the study of Maxwell's system and the important problem of its spurious-free approximations. After recalling the classical models, i.e. acoustics, linear elastodynamics and electromagnetism, and their variational formulations, the authors present a wide variety of finite elements of different shapes useful for the numerical resolution of wave equations. Then they focus on the construction of efficient continuous and discontinuous Galerkin methods and study their accuracy by plane wave techniques and a priori error estimates. A chapter is devoted to Maxwell's system and the important problem ...

Piezoelectric materials are extensively used in smart structures as sensors and actuators. In this paper, static analysis of three piezoelectric solids is done using the general-purpose finite element software Abaqus. The simulation results from Abaqus are compared with the results obtained using numerical methods like the Boundary Element Method (BEM) and the meshless point collocation method (PCM). The BEM and PCM are cumbersome for complex shapes and complicated boundary conditions. This paper shows that Abaqus can be used to solve the governing equations of piezoelectric solids in a much simpler and faster way than the BEM and PCM.

A computer program to solve the Navier-Stokes equations using the Finite Element Method is implemented. The solution variables investigated are stream-function/vorticity in the steady case and velocity/pressure in the steady-state and transient cases. For steady-state flow the equations are solved simultaneously by the Newton-Raphson method. For the time-dependent formulation, a fractional step method is employed to discretize in time, and artificial viscosity is used to preclude spurious oscillations in the solution. The element used is the three-node triangle. Some numerical examples are presented and comparisons are made with existing applications. (Author)

The extraction of hadron matrix elements in lattice QCD using the standard two- and three-point correlator functions demands careful attention to systematic uncertainties. One of the most commonly studied sources of systematic error is contamination from excited states. We apply the variational method to calculate the axial vector current g_A, the scalar current g_S and the quark momentum fraction ⟨x⟩ of the nucleon, and we compare the results to the more commonly used summation and two-exponential fit methods. The results demonstrate that the variational approach offers a more efficient and robust method for the determination of nucleon matrix elements.

The distribution coefficients (K_d) of rhenium and tungsten were determined in order to know the K_d values of the two elements. The K_d determination is applied to the separation of rhenium-188 from a tungsten-188 target for the purification of radioisotopes, so as to meet radionuclide and radiochemical purity requirements. The K_d values were determined using solvent extraction with methyl ethyl ketone (MEK). Prior to the determination of K_d values, the optimum conditions of the extraction process were determined based on the effects of agitation time, the volume of MEK, and the pH of the solution. The results of the extraction were confirmed using a UV-Vis spectrophotometer with KSCN as a complexing agent under acidic conditions and SnCl2 as a reductant. The results showed that the optimum conditions of the extraction process, for a feed of 10 ppm of each element, are agitation for 10 minutes, a MEK volume of 20 ml, and a pH below 5. The maximum recovery of rhenium drawn into the organic phase was 9.545 ppm. However, the conditions of the extraction process did not affect the migration of tungsten to the organic phase. The maximum K_d value obtained for rhenium was 2.7566, and the maximum K_d for tungsten was 0.0873. The optimum conditions of the extraction process can be further tested on radioactive rhenium and tungsten as an alternative for the separation of radioisotopes. (author)

There are a number of methods for observing and estimating the transverse dispersion coefficient in an analysis of solute transport in open channel flow. It may be difficult to select an optimal method to calculate dispersion coefficients from tracer data among these numerous methodologies. A flowchart was proposed in this study to select an appropriate method under transport situations of either time-variant or steady conditions. In constructing the flowchart, the strengths and limitations of the methods were evaluated based on their derivation procedures, which were conducted under specific assumptions. Additionally, application examples of these methods on experimental data were illustrated using previous works. Furthermore, the dispersion coefficients observed in a laboratory channel were validated by using numerical transport modeling, and the simulation results were compared with the experimental results from tracer tests. This flowchart may assist in choosing better methods for determining the transverse dispersion coefficient in various river mixing situations.
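One common estimator that such a flowchart would cover is the method of moments, in which the transverse dispersion coefficient follows from the streamwise growth of the plume variance, D_t = (u/2) dσ²/dx. A minimal sketch with synthetic data (all numbers illustrative):

```python
import numpy as np

def transverse_dispersion(x, var, u):
    """Method-of-moments estimate D_t = (u / 2) * d(sigma^2)/dx,
    where var[i] is the transverse variance of the tracer plume
    at downstream section x[i] and u is the mean velocity."""
    slope = np.polyfit(x, var, 1)[0]   # linear growth rate of variance
    return 0.5 * u * slope

# Synthetic sections: variance growing linearly downstream
x = np.array([5.0, 10.0, 15.0, 20.0])   # distance downstream [m]
var = 0.01 + 0.004 * x                  # transverse variance [m^2]
u = 0.5                                 # mean velocity [m/s]
Dt = transverse_dispersion(x, var, u)
print(Dt)  # ≈ 0.001 m^2/s
```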

The applications of conventional culture-dependent assays to quantify bacterial populations are limited by their dependence on the inconsistent success of the different culture steps involved. In addition, some bacteria can be pathogenic or a source of endotoxins and pose a health risk to the researchers. Bacterial quantification based on the real-time PCR method can overcome the above-mentioned problems. However, the quantification of bacteria using this approach is commonly expressed as absolute quantities even though the composition of samples (like those of digesta) can vary widely; thus, the final results may be affected if the samples are not properly homogenized, especially when multiple samples are to be pooled together before DNA extraction. The objective of this study was to determine the correlation coefficients between four different methods of expressing the output data of real-time PCR-based bacterial quantification. The four methods were: (i) the common absolute method, expressed as the cell number of specific bacteria per gram of digesta; (ii) the Livak and Schmittgen ΔΔCt method; (iii) the Pfaffl equation; and (iv) a simple relative method based on the ratio of the cell number of specific bacteria to the total bacterial cells. Because of the effect of the total bacteria population on the results obtained using ΔCt-based methods (ΔΔCt and Pfaffl), these methods lack the acceptable consistency to be used as valid and reliable methods in real-time PCR-based bacterial quantification studies. On the other hand, because of the variable compositions of digesta samples, a simple ratio of the cell number of specific bacteria to the corresponding total bacterial cells of the same sample can be a more accurate method to quantify the population.
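A minimal sketch of the ratio-based expression of qPCR output (method iv), assuming perfect and equal amplification efficiency (E = 2) for both the specific and total-bacteria assays; the Ct values are illustrative.

```python
def specific_to_total_ratio(ct_specific, ct_total, efficiency=2.0):
    """Ratio of specific-bacteria copies to total-bacteria copies.
    A later Ct means fewer starting copies (N ~ E**(-Ct)), so the
    ratio is E**(Ct_total - Ct_specific)."""
    return efficiency ** (ct_total - ct_specific)

# Specific group crosses threshold 3 cycles after the total-bacteria assay
ratio = specific_to_total_ratio(ct_specific=23.0, ct_total=20.0)
print(ratio)  # 0.125
```

Because the same sample supplies both Ct values, variations in digesta composition and DNA recovery largely cancel out of the ratio, which is the advantage the abstract describes.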

Mathematical aspects of finite element methods are surveyed for incompressible viscous flows, concentrating on the steady primitive variable formulation. The discretization of a weak formulation of the Navier-Stokes equations is addressed, then the stability condition is considered, the satisfaction of which ensures the stability of the approximation. Specific choices of finite element spaces for the velocity and pressure are then discussed. Finally, the connection between different weak formulations and a variety of boundary conditions is explored.

The method is aimed at detecting fuel element leaks during reactor operation. It is based on neutron flux measurements at many points in the core, using at least two detectors at a time. The detectors must be arranged in the direction of the coolant flow. Values obtained from periodic measurements are compared with threshold values. The location of fuel element leaks is determined from those values exceeding the threshold of individual detectors

Hydrological process evaluation is temporally dependent. Hydrological time series that include dependence components do not meet the data consistency assumption for hydrological computation. Both of these factors cause great difficulty for water research. Given the existence of hydrological dependence variability, we propose a correlation-coefficient-based method for significance evaluation of hydrological dependence based on an auto-regression model. By calculating the correlation coefficient between the original series and its dependence component and selecting reasonable thresholds of the correlation coefficient, this method divides the significance degree of dependence into no variability, weak variability, mid variability, strong variability, and drastic variability. By deducing the relationship between the correlation coefficient and the auto-correlation coefficients at each order of the series, we found that the correlation coefficient is mainly determined by the magnitudes of the auto-correlation coefficients from first order to p-th order, which clarifies the theoretical basis of this method. With the first-order and second-order auto-regression models as examples, the reasonability of the deduced formula was verified through Monte Carlo experiments classifying the relationship between the correlation coefficient and the auto-correlation coefficients. This method was used to analyze three observed hydrological time series. The results indicated the coexistence of stochastic and dependence characteristics in the hydrological process.
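The central relationship can be checked numerically: for a synthetic AR(1) series, the correlation between the series and its dependence component should match the lag-1 auto-correlation coefficient. A minimal sketch (parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic AR(1) series x_t = phi * x_{t-1} + e_t
phi, n = 0.6, 5000
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

# Fitted AR(1) coefficient (sample lag-1 autocorrelation)
phi_hat = np.corrcoef(x[:-1], x[1:])[0, 1]

# Dependence component of x_t and its correlation with the original series
dep = phi_hat * x[:-1]
r = np.corrcoef(x[1:], dep)[0, 1]
print(round(r, 2))
```

For an AR(1) model the dependence component is a scalar multiple of the lagged series, so r reduces exactly to the lag-1 auto-correlation, consistent with the deduced relationship described above.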

In this paper, after some introductory remarks on numerical methods for the integration of initial value problems, the applicability of the finite element method for transient diffusion analysis as well as dynamic and inelastic analysis is discussed, and some examples are presented. (RW)

In this presentation a vision is given on the future of the finite element method (FEM) for geotechnical engineering and design. In the past 20 years the FEM has proven to be a powerful method for estimating deformation, stability and groundwater flow in geotechnical structures. Much has been

In this paper a survey is given of the important steps in the development of discontinuous Galerkin finite element methods for hyperbolic partial differential equations. Special attention is paid to the application of the discontinuous Galerkin method to the solution of the Euler equations of gas

For ten different brands of flour, the contents of selected (heavy) elements were determined by means of ICP, GF-AAS, PIXE and ASV/CSV methods. The general performance of the participating laboratories as well as the pros and cons of the different analytical methods were compared and discussed. (author). 6 refs, 6 figs, 7 tabs

Three-month-old Thunbergia alata were exposed for 13 days to 10 μM selenite to determine the biotransformation of selenite in their roots. Selenium in formic acid extracts (80 ± 3%) was present as selenopeptides with Se-S bonds and selenium-PC complexes (selenocysteinyl-2-3-dihydroxypropionyl-glutathione, seleno-phytochelatin2, seleno-di-glutathione). An analytical method using HPLC-ICPMS to detect and quantify elemental selenium in roots of T. alata plants, using sodium sulfite to quantitatively transform elemental selenium to selenosulfate, was also developed. Elemental selenium was determined to be 18 ± 4% of the total selenium in the roots, which was equivalent to the selenium not extracted by formic acid extraction. The results are in agreement with the XAS measurements of the exposed roots, which showed no occurrence of selenite or selenate but a mixture of selenocysteine and elemental selenium.

The dielectric property of dispersive media is written as a rational polynomial function, and the relation between D and E is derived in the time domain. This is named the shift operator FDTD (SO-FDTD) method. The high accuracy and efficiency of this method are confirmed by computing the reflection coefficients of electromagnetic waves from a collisional plasma slab. The reflection coefficients between a plasma and the atmosphere or vacuum can be calculated using the SO-FDTD method. The result is that the reflection coefficients are affected by the plasma thickness, the electron number density, the distribution of electron density, and the incident wave frequency. (authors)
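As a closed-form cross-check on such computations (not the SO-FDTD slab calculation itself), the normal-incidence reflectance at a vacuum/plasma half-space follows directly from the Drude permittivity; the sign convention and parameter values below are assumptions of this sketch.

```python
import numpy as np

def plasma_reflectance(omega, omega_p, nu):
    """Power reflectance |r|^2 at normal incidence on a semi-infinite
    collisional plasma. Drude permittivity (sign convention assumed):
    eps = 1 - wp^2 / (w * (w + i*nu)); Fresnel r = (1 - n)/(1 + n)."""
    eps = 1.0 - omega_p**2 / (omega * (omega + 1j * nu))
    n = np.sqrt(eps)
    r = (1.0 - n) / (1.0 + n)
    return abs(r) ** 2

# Below the plasma frequency a collisionless plasma is fully reflecting
R_low = plasma_reflectance(omega=1.0e9, omega_p=5.0e9, nu=0.0)
print(R_low)  # ~1.0
# Well above the plasma frequency the wave passes almost unreflected
R_high = plasma_reflectance(omega=5.0e10, omega_p=5.0e9, nu=0.0)
print(R_high)
```

A finite collision frequency nu makes eps complex and moves the reflectance off these ideal limits, which is the regime the SO-FDTD slab computations address.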

This is a study of the analysis of trace elements in medium-thick solid samples by the modified transmission-emission method, using the energy dispersive X-ray fluorescence (EDXRF) technique. Absorption and enhancement effects are the main disadvantages of the EDXRF technique for the quantitative analysis of major and trace elements in solid samples. The implementation of this method and its application to a variety of samples were carried out using an infinitely thick multi-element white sample, from which the correction factors for the absorption of all the analytes in the sample are calculated. The discontinuities in the mass absorption coefficient versus energy relation for each element, for medium-thick and homogeneous samples, are analyzed and corrected. A thorough analysis of the different theoretical and experimental variables is proven on real samples, including certified material of known concentration. The simplicity of the calculation method and the results obtained show the method's good precision, with possibilities for the non-destructive routine analysis of different solid samples using the EDXRF technique. (author)

This paper is devoted to the formulation and general principles of approximation of the multipoint boundary problem of static analysis of a deep beam with the combined application of the finite element method (FEM) and the discrete-continual finite element method (DCFEM). The field of application of DCFEM comprises structures with regular physical and geometrical parameters in some dimension (the “basic” dimension). DCFEM presupposes finite element approximation in the non-basic dimension, while in the basic dimension the problem remains continual. DCFEM is based on analytical solutions of the resulting multipoint boundary problems for systems of ordinary differential equations with piecewise-constant coefficients.

In this paper, we propose an analytical method for estimating the thermal expansion coefficient (TEC) of metals in high-temperature ranges. Although the conventional method based on the quasiharmonic approximation (QHA) shows good results at low temperatures, anharmonic effects caused by large-amplitude thermal vibrations reduce its accuracy at high temperatures. Molecular dynamics (MD) naturally includes the anharmonic effect. However, since the computational cost of MD is relatively high, an analytical method is essential in order to make an interatomic potential capable of reproducing the TEC. In our method, an analytical formulation of the radial distribution function (RDF) at finite temperature enables the estimation of the TEC. Each peak of the RDF is approximated by a Gaussian distribution. The mean and variance of the Gaussian distribution are formulated by decomposing the fluctuation of the interatomic distance into independent elastic waves. We incorporated two significant anharmonic effects into the method. One is the increase in the averaged interatomic distance caused by large-amplitude vibration. The second is the variation in the frequency of elastic waves. As a result, the TECs of fcc and bcc crystals estimated by our method show good agreement with those of MD. Our method enables us to make an interatomic potential that reproduces the TEC at high temperature. We developed the GEAM potential for nickel. The TEC of the fitted potential showed good agreement with experimental data from room temperature to 1000 K. Compared with the original potential, it was found that the third derivative of the wide-range curve was modified, while the zeroth, first and second derivatives were unchanged. This result supports the conventional theory of solid state physics. We believe our analytical method and developed interatomic potential will contribute to future high-temperature material development. (paper)
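Once a lattice-parameter-versus-temperature curve a(T) is available (from the analytical RDF model, MD, or experiment), the linear TEC follows as α(T) = (1/a) da/dT. A minimal sketch with hypothetical data:

```python
import numpy as np

# Hypothetical lattice parameter a(T) [angstrom] versus temperature [K]
T = np.array([300.0, 500.0, 700.0, 900.0])
a = np.array([3.5200, 3.5295, 3.5398, 3.5510])

# Linear TEC alpha = (1/a) * da/dT; central differences in the interior
alpha = np.gradient(a, T) / a
print(alpha)
```

The resulting values (on the order of 1e-5 per kelvin for these illustrative numbers) increase with temperature, the trend the anharmonic corrections are meant to capture.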

The joint roughness coefficient (JRC) of rock joints exhibits a scale effect. JRC measured on small-size exposed rock joints should be corrected for the JRC scale effect in order to obtain the JRC of actual-scale rock joints, since field rock joints are rarely fully exposed or well preserved. Based on a validity analysis of the JRC scale effect, the concepts of the rate of JRC scale effect and the effective length of JRC scale effect were proposed. Then, a graphic method for determining the effective length of JRC scale effect was established. The study results show that the JRC of actual-scale rock joints can be obtained through a fractal model of the JRC scale effect, according to the statistically measured JRC of small-size partially exposed rock joints, by selecting the fractal dimension of the JRC scale effect and determining the effective length of the JRC scale effect.
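A fractal-type scale-effect extrapolation of the general form JRC(L) = JRC0 * (L/L0)**(-d) can be sketched as follows; the exponent d and all input values are hypothetical illustrations, not parameters from the study.

```python
def jrc_at_scale(jrc0, L0, L, d):
    """Fractal-type JRC scale-effect model: JRC(L) = JRC0 * (L/L0)**(-d).
    jrc0 is measured at lab scale L0; d is the scale-effect exponent."""
    return jrc0 * (L / L0) ** (-d)

# Lab-scale JRC of 12 at 0.1 m extrapolated to a 1 m joint (d = 0.15 assumed)
jrc_field = jrc_at_scale(12.0, 0.1, 1.0, 0.15)
print(round(jrc_field, 2))
```

With d > 0 the model reproduces the usual observation that roughness coefficients decrease as the sampled joint length grows; extrapolation would be limited to the effective length defined above.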

The static coefficient of friction (µ_static) plays an important role in dexterous object manipulation. The minimal normal force (i.e., grip force) needed to avoid dropping an object is determined by the tangential force at the fingertip-object contact and the frictional properties of the skin-object contact. Although frequently assumed to be constant for all levels of normal force (NF, the force normal to the contact), µ_static actually varies nonlinearly with NF and increases at low NF levels. No method is currently available to measure the relationship between µ_static and NF easily. Therefore, we propose a new method allowing the simple and reliable measurement of the fingertip µ_static at different NF levels, as well as an algorithm for determining µ_static from measured forces and torques. Our method is based on active, back-and-forth movements of a subject's finger on the surface of a fixed six-axis force and torque sensor. µ_static is computed as the ratio of the tangential to the normal force at slip onset. A negative power law captures the relationship between µ_static and NF. Our method allows the continuous estimation of µ_static as a function of NF during dexterous manipulation, based on the relationship between µ_static and NF measured before manipulation.
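
The two key computational steps, reading µ_static off the force ratio at slip onset and fitting the negative power law µ_static = k * NF**(-a), can be sketched as follows. This is a simplified illustration; slip-onset detection itself is assumed to be done elsewhere, and the fitting in log-log space is our choice, not necessarily the paper's:

```python
import numpy as np

def mu_static_at_slip(tangential, normal, slip_index):
    """Static friction coefficient: tangential / normal force at slip onset."""
    return tangential[slip_index] / normal[slip_index]

def fit_negative_power_law(nf, mu):
    """Fit mu_static = k * NF**(-a) by linear least squares in log-log space.
    Returns (k, a); a > 0 for data following a negative power law."""
    slope, intercept = np.polyfit(np.log(nf), np.log(mu), 1)
    return float(np.exp(intercept)), float(-slope)
```

Repeating `mu_static_at_slip` over many strokes at different grip forces yields the (NF, µ_static) pairs that `fit_negative_power_law` summarizes.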

The proposed method is based on a finite element model of the soil and the structure and a time-history calculation. It has been developed for plane and axisymmetric geometries. The principle of the method is presented, followed by applications: first a linear calculation, whose results are compared with those obtained by standard methods, and then results for a nonlinear behavior.

We present a stochastic method for the calculation of baryon three-point functions that is more versatile compared to the typically used sequential method. We analyze the scaling of the error of the stochastically evaluated three-point function with the lattice volume and find a favorable signal-to-noise ratio suggesting that our stochastic method can be used efficiently at large volumes to compute hadronic matrix elements.

The Finite Element Method (FEM) is employed for the numerical solution of fluid flow problems with combined heat transfer mechanisms. The Boussinesq approximation is used in the solution of the governing equations. The application of the FEM leads to a set of simultaneous nonlinear equations. The development of the method for the solution of two-dimensional and axisymmetric problems is presented. Examples of fluid flow in pipes, including natural and forced convection, are solved with the proposed method and discussed in the paper. (Author) [pt

Full text: Various nuclear analytical methods have been developed and applied to determine the elemental composition of calcified tissues (teeth and bones). Fluorine was determined by prompt gamma activation analysis through the 19F(p,αγ)16O reaction. Carbon was measured by activation analysis with He-3 ions, and the technique of Proton-Induced X-ray Emission (PIXE) was applied to simultaneously determine Ca, P, and trace elements in well-documented teeth. The dental hard tissues (enamel, dentine, cement), their junctions, and different parts of the same tissue were examined separately. Furthermore, using a proton microprobe, we measured the surface distribution of F and other elements on and around carious lesions in the enamel. The depth profiles of F and other elements were also measured right up to the amelodentinal junction.

A new method for estimating the diffuse attenuation coefficient for photosynthetically active radiation (KdPAR) from paired temperature sensors was derived. We show that, when the attenuation of penetrating shortwave solar radiation is the dominant source of temperature changes, time series measurements of water temperature at two depths (z1 and z2) are related to one another by a linear scaling factor a. KdPAR can then be estimated by the simple equation KdPAR = ln(a)/(z2 − z1). A suggested workflow outlines the procedure for calculating KdPAR according to this paired temperature sensor (PTS) method. The method is best suited to conditions in which radiative temperature gains are large relative to physical noise. Such conditions occur frequently on water bodies with low wind and/or high KdPAR, but the method can also be used on other types of lakes during periods of low wind and/or where spatially redundant measurements of temperature are available. The optimal vertical placement of the temperature sensors, given a priori knowledge of KdPAR, is also described. This information can be used to inform the design of future sensor deployments using the PTS method, or of campaigns where characterizing sub-daily changes in temperature is important. The PTS method characterizes light attenuation in aquatic ecosystems without expensive radiometric equipment or the user subjectivity inherent in Secchi depth measurements. It also enables the estimation of KdPAR at higher frequencies than many manual monitoring programs allow.
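
Under the stated assumption that radiative heating dominates, the temperature *increments* at the two depths scale as a = dT(z1)/dT(z2) = exp(Kd * (z2 − z1)), so Kd = ln(a)/(z2 − z1). A hedged Python sketch, where the scaling factor is taken as the least-squares slope of shallow increments against deep increments (the regression choice is ours, not necessarily the paper's workflow):

```python
import numpy as np

def kd_par_pts(temp_shallow, temp_deep, z1, z2):
    """Estimate KdPAR from paired temperature time series (PTS method sketch).

    Assumes radiative heating dominates, so temperature increments at the
    two depths satisfy dT(z1) = a * dT(z2) with a = exp(Kd * (z2 - z1)).
    """
    d1 = np.diff(np.asarray(temp_shallow, dtype=float))
    d2 = np.diff(np.asarray(temp_deep, dtype=float))
    a = float(np.polyfit(d2, d1, 1)[0])  # slope: shallow gain per deep gain
    return float(np.log(a) / (z2 - z1))
```

In practice the increments would first be screened so that only periods dominated by solar heating (low wind, daytime) enter the regression.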

This work deals with CERMET fuels, chosen for their good behaviour under irradiation and their high thermal conductivity. Particular attention is paid to the kinetic coefficients. Comparisons have been made with solutions using other composite fuels, in particular solid solutions and the ROX solution. Since core control requires a heterogeneous assembly, we propose an assembly whose characteristics are compared with those of the APA reference. (O.M.)

Technical discussions at the various sites visited centered on the application of boundary integral methods to environmental modeling, seismic analysis, and computational fracture mechanics in composite and "smart" materials. The traveler also attended the International Association for Boundary Element Methods Conference in Rome, Italy. While many aspects of boundary element theory and applications were discussed in the papers, the dominant topic was the analysis and application of hypersingular equations. This has been the focus of recent work by the author, and thus the conference was highly relevant to research at ORNL.

Although the Trefftz finite element method (FEM) has become a powerful computational tool in the analysis of plane elasticity, thin and thick plate bending, Poisson's equation, heat conduction, and piezoelectric materials, there are few books that offer a comprehensive computer programming treatment of the subject. Collecting results scattered in the literature, MATLAB® and C Programming for Trefftz Finite Element Methods provides the detailed MATLAB® and C programming processes in applications of the Trefftz FEM to potential and elastic problems. The book begins with an introduction to th

We calculate the logarithmic contributions to the massive Wilson coefficients for deep-inelastic scattering in the asymptotic region Q² ≫ m² to 3-loop order in the fixed flavor number scheme and present the corresponding expressions for the massive operator matrix elements needed in the variable flavor number scheme. Explicit expressions are given in Mellin N-space. (orig.)

The previously developed numerical inverse method was applied to determine the composition-dependent interdiffusion coefficients in single-phase finite diffusion couples. The numerical inverse method was first validated in a fictitious binary finite diffusion couple by pre-assuming four standard sets of interdiffusion coefficients. After that, the numerical inverse method was adopted in a ternary Al-Cu-Ni finite diffusion couple. Based on the measured composition profiles, the ternary interdiffusion coefficients along the entire diffusion path of the target ternary diffusion couple were obtained by using the numerical inverse approach. The comprehensive comparisons between the computations and the experiments indicate that the numerical inverse method is also applicable to high-throughput determination of the composition-dependent interdiffusion coefficients in finite diffusion couples.

Modern tools such as the Finite Element Method can be used to study the behavior of elastomeric isolation systems. The simulation results obtained in this way provide a large body of data about the behavior of elastomeric isolation bearings under different types of loads and help in making the right decisions regarding the geometrical optimizations needed to improve such devices.

As is well known, the formulation of a multipoint boundary problem involves three main components: a description of the domain occupied by the structure and of the corresponding subdomains; a description of the conditions inside the domain and inside the corresponding subdomains; and a description of the conditions on the boundary of the domain and on the boundaries between subdomains. This paper is a continuation of an earlier work, in which the formulation and general principles of the approximation of the multipoint boundary problem of the static analysis of a deep beam, based on the joint application of the finite element method and the discrete-continual finite element method, were considered. The approximation within the fragments of a domain that have regular physical-geometric parameters along one of the directions is expedient to carry out on the basis of the discrete-continual finite element method (DCFEM), while for the approximation of all other fragments the standard finite element method (FEM) should be used. In the present publication, formulas are given for computing displacements, partial derivatives of displacements, strains, and stresses within the finite element model, both within a finite element and at the corresponding nodes (with the use of averaging). Boundary conditions between subdomains (i.e., between discrete models and discrete-continual models) and typical conditions such as "hinged support", "free edge", and "perfect contact" (twelve basic variants are available) are considered as well. Governing formulas for computing the elements of the corresponding matrices of coefficients and vectors of the right-hand sides are given for each variant. All formulas are fully adapted for algorithmic implementation.

The piezoelectric coefficients (d33, −d31, d15, g33, −g31, g15) of soft and hard lead zirconate titanate ceramics were measured by the quasi-static and resonance methods, at temperatures from 20 to 300 °C. The results showed that the piezoelectric coefficients d33, −d31 and d15 obtained by these two methods increased with increasing temperature for both hard and soft PZT ceramics, while the piezoelectric coefficients g33, −g31 and g15 decreased with increasing temperature for both hard and soft PZT ceramics. The observed results are also discussed in terms of intrinsic and extrinsic contributions to the piezoelectric response.

Anomalous diffusion is a phenomenon that cannot be modeled accurately by second-order diffusion equations, but is better described by fractional diffusion models. The nonlocal nature of the fractional diffusion operators makes the mathematical analysis of these models and the establishment of suitable numerical schemes substantially more difficult. This paper proposes and analyzes the first finite difference method for solving variable-coefficient fractional differential equations, with two-sided fractional derivatives, in one-dimensional space. The proposed scheme combines first-order forward and backward Euler methods for approximating the left-sided fractional derivative, while the right-sided fractional derivative is approximated by two consecutive applications of the first-order backward Euler method. Our finite difference scheme reduces to the standard second-order central difference scheme in the absence of fractional derivatives. The existence and uniqueness of the solution for the proposed scheme are proved, and truncation errors of order $h$ are demonstrated, where $h$ denotes the maximum space step size. The numerical tests illustrate the global $O(h)$ accuracy of our scheme, except for nonsmooth cases which, as expected, have deteriorated convergence rates.
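
The reported global O(h) accuracy can be checked numerically in the standard way: compute errors on a sequence of refined grids and estimate the observed order from successive error ratios, p = log(e_k/e_{k+1}) / log(h_k/h_{k+1}). A small, scheme-independent helper (our own illustration):

```python
import numpy as np

def observed_order(h, err):
    """Observed convergence order from errors on successively refined grids."""
    h = np.asarray(h, dtype=float)
    err = np.asarray(err, dtype=float)
    return np.log(err[:-1] / err[1:]) / np.log(h[:-1] / h[1:])
```

For a first-order scheme the entries returned should approach 1 as h decreases; deteriorated rates for nonsmooth solutions show up directly as smaller values.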

The particle finite element method (PFEM) appears to be a convenient technique for machining simulations, since the geometry and topology of the problem can undergo severe changes. In this work, a short outline of the PFEM algorithm is given, followed by a detailed description of the operations involved. The α-shape method, which is used to track the topology, is explained and tested with a simple example. The kinematics and a suitable finite element formulation are also introduced. To validate the method, simple settings without topological changes are considered and compared to the standard finite element method for large deformations. To examine the performance of the method when dealing with separating material, a tensile load is applied to a notched plate. This investigation includes a numerical analysis of the different meshing parameters, and the numerical convergence is studied. With regard to the cutting simulation, it is found that only a sufficiently large number of particles (and thus a rather fine finite element discretisation) leads to converged results for process parameters such as the cutting force.

The article is devoted to the problem of quality assurance of medicinal products, namely the determination of elemental impurity concentrations relative to permitted daily exposures, and the correct choice of analytical methods adequate to the formulated tasks. The paper's goal is to compare the characteristics of four analytical methods recommended by the Pharmacopoeias of various countries for controlling the content of elemental impurities in medicines, including medicinal plant raw materials and herbal medicines. Both advantages and disadvantages are described for atomic absorption spectroscopy with various atomisation techniques, as well as for atomic emission spectroscopy and mass spectrometry with inductively coupled plasma. The choice of the most rational analysis method depends on the research task and is reasoned from the viewpoint of analytical objectives, possible complications, performance attributes, and economic considerations. The ICP-MS and GFAAS methods were shown to provide the greatest potential for determining low and ultra-low concentrations of chemical elements in medicinal plants and herbal medicinal products. The other two methods, FAAS and ICP-AES, are limited to the analysis of the main essential elements and the largest impurities. ICP-MS is the most efficient method for determining ultra-low concentrations. However, the interference of mass peaks is typical for ICP-MS; it is formed not only by impurities but also by polyatomic ions involving argon, as well as atoms of gases from the air (C, N and O) or matrices (O, N, H, P, S and Cl). Therefore, correct sample preparation, which guarantees minimisation of impurity contamination and loss of analytes, becomes the most crucial stage of analytical applications of ICP-MS. The detection limits for some chemical elements whose content is regulated in modern Pharmacopoeias were estimated for each method and analysis condition of medicinal plant raw

This book presents theories and the main useful techniques of the Finite Element Method (FEM), with an introduction to FEM and many case studies of its use in engineering practice. It supports engineers and students in solving primarily linear problems in mechanical engineering, with a main focus on static and dynamic structural problems. Readers of this text are encouraged to discover the proper relationship between theory and practice within the finite element method: practice without theory is blind, but theory without practice is sterile. Beginning with basic concepts of elasticity and the classical theories of stressed materials, the work goes on to apply the relationship between forces, displacements, stresses and strains to the process of modeling, simulating and designing engineered technical systems. Chapters discuss the finite element equations for static and eigenvalue analysis, as well as transient analyses. Students and practitioners using commercial FEM software will find this book very helpful. It us...

Batik is one of the traditional arts that has been recognized by UNESCO as Indonesian cultural heritage. Batik has many varieties and motifs; each motif has its own uniqueness yet appears similar to others, which makes identification difficult. This study aims to develop an application that can identify typical Balinese batik with ethnomathematics elements in it. Ethnomathematics is a field that studies the relation between culture and mathematical concepts; in Balinese batik it mostly concerns geometrical concepts in line with the strong Balinese cultural element. The identification process uses the backpropagation method. The steps of the backpropagation method begin with image processing (including image scaling and thresholding); the processed image is then fed to an artificial neural network. This study achieved accurate identification of Balinese batik containing ethnomathematics elements.

The Applied Element Method (AEM) is a versatile tool for structural analysis. Analysis is done by discretising the structure, as in the case of the Finite Element Method (FEM). In AEM, however, elements are connected by a set of normal and shear springs instead of nodes. AEM is extensively used for the analysis of brittle materials. A brick masonry wall can be effectively analyzed within the framework of AEM: the composite nature of the masonry can be easily modelled using springs, with the brick springs and mortar springs assumed to be connected in series. The brick masonry wall is analyzed and the failure load is determined for different loading cases. The results were used to find the aspect ratio of brick that best strengthens the masonry wall.
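
The series coupling of brick and mortar springs follows the usual rule for springs in series, 1/k_eq = 1/k_brick + 1/k_mortar. A minimal sketch; the per-spring stiffness formula k = E*A/L below is a generic tributary-area assumption for illustration, not necessarily the paper's exact AEM spring definition:

```python
def axial_stiffness(E, A, L):
    """Generic spring stiffness from modulus E, tributary area A, length L
    (illustrative assumption, not the paper's exact AEM definition)."""
    return E * A / L

def series_spring(k_brick, k_mortar):
    """Equivalent stiffness of a brick spring and a mortar spring in series:
    1/k_eq = 1/k_brick + 1/k_mortar."""
    return 1.0 / (1.0 / k_brick + 1.0 / k_mortar)
```

Because the softer mortar spring dominates the series combination, the equivalent stiffness is always below the smaller of the two, which is why mortar properties control much of the wall response.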

1 - Nature of physical problem solved: Approximates one- and two-dimensional functions using different forms of the approximating function, such as polynomials, rational functions, splines and (or) the finite element method. Different kinds of transformations of the dependent and (or) independent variables can easily be made via data cards using a FORTRAN-like language. 2 - Method of solution: Approximations by polynomials, splines and (or) the finite element method are made in the L2 norm using the least squares method, by which the answer is given directly. For rational functions in one dimension, the result, given in the L∞ norm, is achieved by iteratively moving the zero points of the error curve. For rational functions in two dimensions, the norm is L2 and the result is achieved by iteratively changing the coefficients of the denominator and then solving for the coefficients of the numerator by the least squares method. The transformation of the dependent and (or) independent variables is made by compiling the given transform data card(s) into an array of integers from which the transformation can be made.
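
The L2 polynomial approximation described in item 2 amounts to a linear least-squares solve of a Vandermonde system. A generic Python illustration of that step (not the code's FORTRAN implementation):

```python
import numpy as np

def l2_poly_fit(x, y, degree):
    """Least-squares polynomial approximation in the discrete L2 norm:
    minimize ||V c - y||_2 over the coefficients c (lowest order first)."""
    V = np.vander(np.asarray(x, dtype=float), degree + 1, increasing=True)
    coeffs, *_ = np.linalg.lstsq(V, np.asarray(y, dtype=float), rcond=None)
    return coeffs
```

As the abstract notes, the least-squares answer is obtained directly, whereas the L∞ rational approximations require iteration.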

A new Hermitian Mindlin plate wavelet element is proposed. The two-dimensional Hermitian cubic spline interpolation wavelet is substituted into the finite element functions to construct the frequency response function (FRF). The approach uses a system's FRF and response spectra to calculate load spectra and then derives the loads in the time domain via the inverse fast Fourier transform. By simulating different excitation cases, Hermitian cubic spline wavelets on the interval (HCSWI) finite elements are used for inverse load identification in the Mindlin plate. The singular value decomposition (SVD) method is adopted to solve the ill-posed inverse problem. Compared with ANSYS results, the HCSWI Mindlin plate element can accurately identify the applied load. Numerical results show that the algorithm of the HCSWI Mindlin plate element is effective. The accuracy of HCSWI can be verified by comparing the FRFs of the HCSWI and ANSYS elements with the experimental data. The experiment proves that load identification with the HCSWI Mindlin plate is effective and precise, using the FRF and response spectra to calculate the loads.
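
Solving the ill-posed inverse problem by SVD typically means a truncated pseudo-inverse: singular values of the FRF matrix below a threshold are discarded before inversion so that measurement noise is not amplified. A hedged sketch of that regularization step (the matrix H, the threshold, and the variable names are illustrative, not taken from the paper):

```python
import numpy as np

def tsvd_solve(H, x, tol=1e-8):
    """Solve the ill-posed system H f = x by truncated SVD:
    singular values below tol * s_max are dropped before pseudo-inversion."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    keep = s > tol * s[0]
    return Vt[keep].T @ ((U[:, keep].T @ x) / s[keep])
```

With `H` the FRF matrix and `x` the measured response spectrum at one frequency, the returned vector plays the role of the identified load spectrum at that frequency.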

With the Discrete Element Method it is possible to model materials that consist of individual particles, where a particle may roll or slide on other particles. This is interesting because most of the deformation in granular materials is due to rolling or sliding rather than compression of the gra...

The paper describes the modification of piezoelectric accelerometers using a Finite Element (FE) method. Brüel & Kjær Accelerometer Type 8325 is chosen as an example to illustrate the advanced accelerometer development procedure. The deviation between the measurement and FE simulation results...

A particle simulation code is being developed with the aim of treating the motion of charged particles in electromagnetic devices such as the Lasertron. The paper describes the use of mixed finite element methods in computing the field components, without deriving them from scalar or vector potentials. Graphical results are shown.

This work investigates the possibilities offered by the particle finite element method (PFEM) in the simulation of forming problems involving large deformations, multiple contacts, and the generation of new boundaries. A description of the most distinguishing aspects of the PFEM and its application to the simulation of representative forming processes illustrate the proposed methodology.

The fabrication of block fuel elements for gas-cooled high temperature reactors can be improved by adding 0.2 to 2 wt.% of a hydrocarbon compound to the lubricating mixture prior to pressing. Hexanol and octanol are named as suitable substances. The dimensional accuracy of the block is thus improved. Two examples illustrate the method. (RW) [de

In [6,7,13,14] h-p spectral element methods for solving elliptic boundary value problems on polygonal ... Let M denote the number of corner layers and W denote the number of degrees of .... β is given by Theorem 2.2 of [3] which can be stated.

A computer code HERTPIA was developed for the calculation of electromagnetic wake fields excited by charged particles travelling through arbitrarily shaped accelerating cavities. This code solves transient wave problems for a Hertz vector. The numerical analysis is based on the boundary element method. This program is validated by comparing its results with analytical solutions in a pill-box cavity.

We investigate the influence of the value of deflation vectors at interfaces on the rate of convergence of preconditioned conjugate gradient methods applied to a Finite Element discretization for an elliptic equation. Our set-up is a Poisson problem in two dimensions with continuous or discontinuous

Topology optimization provides great convenience to designers during the design stage in many industrial applications. With this method, designers can obtain a rough model of any part at the beginning of the design stage by defining the loading and boundary conditions. At the same time, the optimization can be used for the modification of a product already in use. A lengthy solution time is a disadvantage of this method, which has kept it from becoming widespread. In order to eliminate this disadvantage, an element removal algorithm has been developed for topology optimization. In this study, the element removal algorithm is applied to 3-dimensional parts, and the results are compared with those available in the related literature. In addition, the effects of the method on solution times are investigated.
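
A single hard-kill pass of an element removal step can be sketched as follows. This is a generic illustration of the idea only; the removal criterion (lowest element stress), the removal fraction, and the density/stress arrays are our assumptions, not the paper's algorithm:

```python
import numpy as np

def remove_elements(density, stress, fraction):
    """One hard-kill pass: void the given fraction of still-active elements
    that carry the lowest stress (density 1 = solid, 0 = removed)."""
    density = density.copy()
    active = np.flatnonzero(density > 0)
    n_remove = int(fraction * active.size)
    if n_remove:
        worst = active[np.argsort(stress[active])[:n_remove]]
        density[worst] = 0.0
    return density
```

In a full optimization loop this pass alternates with a finite element analysis that recomputes the element stresses on the reduced model; removing elements outright (rather than penalizing them) is what shortens the solution time.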

The specific requirements that arise in addressable fire detection and alarm systems and the shortcomings of the existing addressing methods are discussed. A new method of addressing detectors is proposed, and the basic principles of addressing and of the response of a called element are stated. The extinguishing module is a specific subsystem in classic fire detection and alarm systems. The appearance of addressable fire detection and alarm systems did not essentially change the concept of the extinguishing module, because of the long calling period of such systems. An addressable fire security system based on the counting addressing method achieves high calling rates and enables the integration of the extinguishing module into the addressable system. Solutions for a command addressable element and an integrated extinguishing module are given in this paper. The counting addressing method was developed for the specific requirements of fire detection and alarm systems, yet its speed and reliability justify its use in the acquisition of data on slowly varying parameters under industrial conditions.

Background: The localization of proteins to specific subcellular structures in eukaryotic cells provides important information with respect to their function. Fluorescence microscopy approaches to determine localization distribution have proved to be an essential tool in the characterization of unknown proteins, and are now particularly pertinent as a result of the wide availability of fluorescently-tagged constructs and antibodies. However, there are currently very few image analysis options able to effectively discriminate proteins with apparently similar distributions in cells, despite this information being important for protein characterization. Findings: We have developed a novel method for combining two existing image analysis approaches, which results in highly efficient and accurate discrimination of proteins with seemingly similar distributions. We have combined image texture-based analysis with quantitative co-localization coefficients, a method that has traditionally only been used to study the spatial overlap between two populations of molecules. Here we describe and present a novel application for quantitative co-localization, as applied to the study of Rab family small GTP binding proteins localizing to the endomembrane system of cultured cells. Conclusions: We show how quantitative co-localization can be used alongside texture feature analysis, resulting in improved clustering of microscopy images. The use of co-localization as an additional clustering parameter is non-biased and highly applicable to high-throughput image data sets.
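
Quantitative co-localization coefficients of the kind used here as clustering features are typically Pearson's coefficient and Manders' overlap fractions. A minimal sketch of two such measures; the zero threshold in M1 is a simplifying assumption (real pipelines use background-corrected thresholds):

```python
import numpy as np

def pearson_colocalization(ch1, ch2):
    """Pearson's co-localization coefficient between two intensity images."""
    a = np.asarray(ch1, dtype=float).ravel()
    b = np.asarray(ch2, dtype=float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

def manders_m1(ch1, ch2, thresh2=0.0):
    """Manders' M1: fraction of channel-1 intensity in pixels where
    channel 2 is above threshold (threshold choice is an assumption here)."""
    a = np.asarray(ch1, dtype=float).ravel()
    b = np.asarray(ch2, dtype=float).ravel()
    return float(a[b > thresh2].sum() / a.sum())
```

Appending these scalar coefficients to a texture feature vector is all that is needed to use co-localization as an additional clustering parameter.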

The most important primary interaction cross section of gamma radiation which is of interest in radiation dosimetry and health physics is the energy absorption coefficient μ_en of the medium under study. Direct measurement of μ_en is, however, difficult, and recourse is usually taken to theoretical computations for its estimation. In this study a new, simple and direct method for the determination of μ_en is reported. The method is based on paraxial sphere transmission using a proportional-response gamma detector. The bremsstrahlung originating from photoelectrons in the absorbing medium and the fluorescence radiation from shielding etc. have been suppressed by using suitable filters. The effects of nonparaxiality and of finite sample thickness have been accounted for using extrapolation procedures. The deviation from proportionality and other corrections have been shown to be small. The measured value of μ_en for paraffin has been determined as (3.3 ± 0.2) × 10⁻³ m²/kg. This compares favourably with the theoretically computed value of 3.35 × 10⁻³ m²/kg given by Hubbell et al. [pt
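
The finite-thickness correction by extrapolation can be illustrated simply: for each areal density t, compute an apparent mass attenuation coefficient ln(I0/I)/t from the measured transmission I/I0, then extrapolate linearly to t → 0. The linear extrapolation model below is our simplification of the paper's procedure, for illustration only:

```python
import numpy as np

def mass_energy_absorption(areal_density, transmission):
    """Sketch of a transmission evaluation with zero-thickness extrapolation:
    mu(t) = -ln(I/I0) / t for each areal density t (kg/m^2), then a linear
    extrapolation to t -> 0 corrects for finite-thickness effects."""
    t = np.asarray(areal_density, dtype=float)
    mu_t = -np.log(np.asarray(transmission, dtype=float)) / t
    slope, intercept = np.polyfit(t, mu_t, 1)
    return float(intercept)  # apparent coefficient at zero thickness, m^2/kg
```

The intercept removes the part of the apparent attenuation that grows with sample thickness (buildup, multiple scattering), leaving the thin-sample limit.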

Clothes play an important role in dermal exposure to indoor semivolatile organic compounds (SVOCs). The diffusion coefficient of SVOCs in clothing material (D_m) is essential for estimating SVOC sorption by clothing material and the subsequent dermal exposure to SVOCs. However, few studies have reported measured D_m values for clothing materials. In this paper, we present the solid-phase microextraction (SPME) based C_a-history method. To the best of our knowledge, this is the first attempt to measure D_m with a known relative standard deviation (RSD). A thin sealed chamber is formed by a circular ring and two pieces of flat SVOC source material that are tightly covered by the targeted clothing materials. D_m is obtained by applying an SVOC mass transfer model in the chamber to the history of gas-phase SVOC concentrations (C_a) in the chamber measured by SPME. The D_m values of three SVOCs, di-iso-butyl phthalate (DiBP), di-n-butyl phthalate (DnBP), and tris(1-chloro-2-propyl) phosphate (TCPP), in a cotton T-shirt were obtained within 16 days, with RSDs of less than 3%. This study should prove useful for measuring SVOC D_m in various sink materials. Further studies are expected to facilitate application of this method and to investigate the effects of temperature, relative humidity, and clothing material on D_m.

The goal of this study is to develop a practical and fast simulation tool for soil-tire interaction analysis, in which the finite element method (FEM) and the discrete element method (DEM) are coupled together, and which can be run on a desktop PC. We have extended our previously proposed dynamic FE-DE method (FE-DEM) to include practical soil-tire system interaction, where not only the vertical sinkage of a tire but also the travel of a driven tire is considered. Numerical simulation by FE-DEM is stable, and the relationships between variables, such as load-sinkage and sinkage-travel distance, and the gross tractive effort and running resistance characteristics, are obtained. Moreover, the simulation result is accurate enough to predict the maximum drawbar pull for a given tire, once the appropriate parameter values are provided. Therefore, the developed FE-DEM program can be applied with sufficient accuracy to interaction problems in soil-tire systems.

Tunnels buried deep within the earth constitute an important class of geomechanics problems. Two numerical techniques used for the analysis of geomechanics problems, the finite element method and the boundary element method, have complementary characteristics for applications to problems of this type. The usefulness of combining these two methods for use as a geomechanics analysis tool has been recognized for some time, and a number of coupling techniques have been proposed. However, not all of them lend themselves to efficient computational implementations for large-scale problems. This report examines a coupling technique that can form the basis for an efficient analysis tool for large-scale geomechanics problems through the use of an iterative equation solver.

The authors calculate the distribution coefficient Γ_i between the liquid and solid phases of an element i in the presence of other elements j in a solvent M (Γ_i = x′_i/x_i, where x′_i and x_i are the atomic fractions of i in the solid and liquid phases, respectively) from the thermodynamic properties of binary systems of the type (i, M), (j, M), and (i, j). They show that the interaction of all the elements present may, under certain conditions, strongly affect the value of the coefficient Γ_i. This effect is pronounced if the following condition is fulfilled: γ^∞_i(M), γ^∞_j(M) > γ^∞_ij, where γ^∞_i(M), γ^∞_j(M), and γ^∞_ij are the limiting activity coefficients of the constituents i and j in the (i, M), (j, M), and (i, j) liquid-state systems. From this condition it is a simple matter to deduce an application to the purification of metals by the zone-melting method; the condition enables one to choose an element j that is added deliberately to a metal in order to facilitate the elimination of an element i (subsequent elimination of the element j being also, of course, a simple matter). For example, the authors were able to confirm that the addition of aluminium to beryllium improves the elimination of iron during purification of the beryllium by the zone-melting technique, the aluminium acting as a carrier. (author)
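In modern notation, the distribution coefficient and the carrier condition stated in this abstract read:

```latex
\Gamma_i = \frac{x'_i}{x_i},
\qquad
\gamma^{\infty}_{i(M)} > \gamma^{\infty}_{ij}
\quad \text{and} \quad
\gamma^{\infty}_{j(M)} > \gamma^{\infty}_{ij},
```

i.e., both limiting activity coefficients in the solvent must exceed the limiting activity coefficient of the (i, j) pair for the carrier effect to be pronounced.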

The ELEFIB computer code, written in Fortran, uses the finite element method to calculate the temperature distribution in one-dimensional and two-dimensional problems, in the steady-state regime or in the transient phase of heat transfer. The formulation of the equations uses the Galerkin method. Some examples are shown and the results are compared with other papers; the comparative evaluation shows that the code gives good values. (M.C.K.)

Crustal faults and sharp material transitions in the crust are usually represented as triangulated surfaces in structural geological models. The complex range of volumes separating such surfaces is typically meshed in three dimensions in order to solve equations that describe crustal deformation with the finite-difference (FD) or finite-element (FEM) methods. We show here how the Boundary Element Method, combined with the Multipole approach, can revolutionise the calculation of stress and strain, solving the problem of computational scalability from reservoir to basin scales. The Fast Multipole Boundary Element Method (Fast BEM) tackles the difficulty of handling the intricate volume meshes and high resolution of crustal data that has put classical finite 3D approaches in a performance crisis. The two main performance enhancements of this method, a reduction of the required mesh elements from cubic to quadratic growth with linear size and a linear-logarithmic runtime, lower the memory and runtime requirements and allow the treatment of a new scale of geodynamic models. This approach was recently tested and applied in a series of papers by [1, 2, 3] for regional and global geodynamics, using KD-trees for fast identification of near- and far-field interacting elements and MPI-parallelised code on distributed-memory architectures, and is now in active development for crustal dynamics. As the method is based on a free-surface representation, it allows easy data transfer to geological visualisation tools, where only changes in boundaries and material properties are required as input parameters. In addition, easy volume mesh sampling of physical quantities enables direct integration with existing FD/FEM codes.

A method of numerically estimating dynamic Green's functions using the finite element method is proposed. These Green's functions are accurate in a limited frequency range dependent on the mesh size used to generate them. This range can often match or exceed the frequency sensitivity of traditional acoustic emission sensors. An algorithm is also developed to characterize an acoustic emission source by obtaining information about its strength and temporal dependence. This information can then be used to reproduce the source in a finite element model for further analysis. Numerical examples are presented that demonstrate the ability of the band-limited Green's functions approach to determine the moment tensor coefficients of several reference signals to within seven percent, as well as accurately reproduce the source-time function.

Diffusion is often the rate-determining step in many biological processes. Currently, the two main computational methods for studying diffusion are stochastic methods, such as Brownian dynamics, and continuum methods, such as the finite element method. A previous study introduced a new hybrid diffusion method that couples the strengths of each of these two methods, but it was limited by the lack of interactions among the particles; the force on each particle had to come from an external field. This study further develops the method to allow charged particles. The method is derived for a general multidimensional system and is presented using a basic test case for a one-dimensional linear system with one charged species and a radially symmetric system with three charged species.

Document available in extended abstract form only. The main mechanisms by which gas will be generated in deep geological repositories are: anaerobic corrosion of metals in wastes and packaging; radiolysis of water and organic materials in the packages; and microbial degradation of various organic wastes. Corrosion and radiolysis yield mainly hydrogen, while microbial degradation leads to methane and carbon dioxide. The gas generated in the near field of a geological repository in clay will dissolve in the ground water and be transported away from the repository by diffusion as dissolved species. However, if the gas generation rate is larger than the diffusive flux, the pore water will become over-saturated and a free gas phase will form. This will lead to a gas pressure build-up and finally to an advective gas flux. The latter might influence the performance of the repository. It is therefore important to assess whether or not gas production rates can exceed the capacity of the near field to store and dissipate these gases by dissolution and diffusion alone. The currently available gas diffusion parameters for hydrogen in Boom Clay, obtained from the MEGAS project, suffer from an uncertainty of 1 to 2 orders of magnitude. Sensitivity calculations performed by Weetjens et al. (2006) for the disposal of vitrified high-level waste showed that, with this uncertainty on the diffusion coefficient, the formation of a free gas phase cannot be excluded. Furthermore, recent re-evaluations of the MEGAS experiments by Krooss (2008) and Aertsens (2008) showed that the applied technique does not allow precise determination of the diffusion coefficient. Therefore a new method was developed to determine more precisely the gas diffusion coefficient for dissolved gases (especially dissolved hydrogen) in Boom Clay. This should allow for a more realistic assessment of the gas flux evolution of a repository as a function of the estimated gas generation rates. The basic idea is to perform a

Highlights: • The extended finite element method is used for modeling steam generator tube rupture. • Crack propagation is modeled along an arbitrary, solution-dependent path. • The FE model is used for estimating the rupture pressure of steam generator tubes. • Crack coalescence modeling is also demonstrated. • The method can be used for crack modeling of tubes under severe accident conditions. - Abstract: A steam generator (SG) is an important component of any pressurized water reactor. Steam generator tubes represent a primary pressure boundary whose integrity is vital to the safe operation of the reactor. SG tubes may rupture due to propagation of a crack created by mechanisms such as stress corrosion cracking, fatigue, etc. It is thus important to estimate the rupture pressures of cracked tubes for structural integrity evaluation of SGs. The objective of the present paper is to demonstrate the use of the extended finite element method capability of the commercially available ABAQUS software to model SG tubes with preexisting flaws and to estimate their rupture pressures. For this purpose, elastic–plastic finite element models were developed for different SG tubes made from Alloy 600 material. The simulation results were compared with experimental results available from the steam generator tube integrity program (SGTIP) sponsored by the United States Nuclear Regulatory Commission (NRC) and conducted at Argonne National Laboratory (ANL). A reasonable correlation was found between the extended finite element model results and the experimental results.

The applicability of ion microprobe (IMP) for quantitative analysis of minor elements (Sr, Y, Zr, La, Sm, and Yb) in the major phases present in natural Ca-, Al-rich inclusions (CAIs) was investigated by comparing IMP results with those of an electron microprobe (EMP). Results on three trace-element-doped glasses indicated that it is not possible to obtain precise quantitative analysis by using IMP if there are large differences in SiO2 content between the standards used to derive the ion yields and the unknowns.

An approach to the construction of an iterative method for solving systems of linear algebraic equations arising from nonconforming finite element discretizations with nonmatching grids for second-order elliptic boundary value problems with anisotropic coefficients is considered. The technique suggested is based on decomposition of the original domain into nonoverlapping subdomains. The elliptic problem is presented in macro-hybrid form with Lagrange multipliers at the interfaces between subdomains. A block diagonal preconditioner is proposed which is spectrally equivalent to the original saddle point matrix and has the optimal order of arithmetical complexity. The preconditioner includes blocks for preconditioning the subdomain and interface problems. It is shown that the constants of spectral equivalence are independent of the values of the coefficients and the mesh step size.

A boundary element method (BEM) is obtained for solving a boundary value problem of homogeneous anisotropic media governed by the diffusion-convection equation. The application of the BEM is shown for two particular pollutant transport problems of the Tello river and Unhas lake in Makassar, Indonesia. For the two problems, a variety of diffusion coefficients and velocity components are taken. The results show that the solutions vary as the parameters change, which suggests that one has to be careful in measuring or determining the values of the parameters.

Proton recoil spectra were calculated for various spherical proportional counters using Monte Carlo simulation combined with the finite element method. Electric field lines and strengths were calculated by defining an appropriate mesh and solving the Laplace equation with the associated boundary conditions, taking into account the geometry of every counter. Thus, different regions were defined in the counter with various coefficients for the energy deposition in the Monte Carlo transport code MCNPX. Results from the calculations are in good agreement with measurements for three different gas pressures at various neutron energies.

This article deals with the experimental determination of heat transfer coefficients. The calculation of heat transfer coefficients constitutes a crucial issue in the design and sizing of heat exchangers. The Wilson plot method and its modifications, based on the utilization of measured experimental data, provide an appropriate tool for the analysis of convection heat transfer processes and the determination of convection coefficients in complex cases. A modification of the Wilson plot method for shell-and-tube condensers is proposed. The original Wilson plot method considers a constant value of thermal resistance on the condensation side. The heat transfer coefficient on the cooling side is determined based on the change in thermal resistance for different conditions (fluid velocity and temperature). The modification is based on validation of the Nusselt theory for calculating the heat transfer coefficient on the condensation side. A change of thermal resistance on the condensation side is expected, and its value is part of the calculation. The proposed modification makes it possible to improve the accuracy of the criterion equation for calculating the heat transfer coefficient. The criterion equation proposed by this modification for the tested shell-and-tube condenser achieves good agreement with the experimental results and also with commonly used theoretical methods.
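The Wilson plot idea can be sketched in a few lines: assuming the cooling-side coefficient scales as h = C * v**0.8 (a common Reynolds-exponent assumption) and the remaining thermal resistance is constant, 1/U is linear in v**-0.8, so a straight-line fit recovers both parameters. All data below are synthetic:

```python
import numpy as np

# Synthetic overall heat transfer coefficients U at several coolant velocities v.
# Assumed model (hypothetical values): 1/U = 1/(C * v**0.8) + R_const, where the
# second term lumps the condensation-side and wall resistances, taken constant.
C_true, R_const = 2000.0, 4.0e-4
v = np.array([0.5, 1.0, 1.5, 2.0, 3.0])        # m/s
U = 1.0 / (1.0 / (C_true * v ** 0.8) + R_const)

# Wilson plot: 1/U is linear in v**-0.8; the fit recovers both parameters.
slope, intercept = np.polyfit(v ** -0.8, 1.0 / U, 1)
print(1.0 / slope, intercept)                  # ~2000.0 and ~4.0e-4
```

The modification described in the abstract relaxes the constant-resistance assumption on the condensation side; this sketch shows only the classical form.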

The main thrust of the effort has been towards the development, analysis, and implementation of the least-squares finite element method (LSFEM) for fluid dynamics and electromagnetics applications. In the past year, there were four major accomplishments: 1) special treatments in computational fluid dynamics and computational electromagnetics, such as upwinding, numerical dissipation, staggered grids, non-equal-order elements, operator splitting and preconditioning, edge elements, and the vector potential, are unnecessary; 2) the analysis of the LSFEM for most partial differential equations can be based on the bounded inverse theorem; 3) the finite difference and finite volume algorithms solve only two Maxwell equations and ignore the divergence equations; and 4) the first numerical simulation of three-dimensional Marangoni-Benard convection was performed using the LSFEM.

In this paper, the trial function method is extended to study the generalized nonlinear Schrödinger equation with time-dependent coefficients. On the basis of a generalized traveling-wave transformation and a trial function, we investigate the exact envelope traveling-wave solutions of the generalized nonlinear Schrödinger equation with time-dependent coefficients. Taking advantage of the solutions of the trial function, we successfully obtain exact solutions for the generalized nonlinear Schrödinger equation with time-dependent coefficients under constraint conditions. (general)

A method for modeling the discrete fracture of two-dimensional linear elastic structures with a distribution of small cracks subject to dynamic conditions has been developed. The foundation for this numerical model is a plane element formulated from the Hu-Washizu energy principle. The distribution of small cracks is incorporated into the numerical model by including a small crack at each element interface. The additional strain field in an element adjacent to this crack is treated as an externally applied strain field in the Hu-Washizu energy principle. The resulting stiffness matrix is that of a standard plane element. The resulting load vector is that of a standard plane element with an additional term that includes the externally applied strain field. Except for the crack strain field equations, all terms of the stiffness matrix and load vector are integrated symbolically in Maple V so that fully integrated plane stress and plane strain elements are constructed. The crack strain field equations are integrated numerically. The modeling of the dynamic behavior of simple structures was demonstrated within acceptable engineering accuracy. In the models of axial and transverse vibration of a beam and the breathing mode of vibration of a thin ring, the dynamic characteristics were shown to be within expected limits. The models dominated by tensile forces (the axially loaded beam and the pressurized ring) were within 0.5% of the theoretical values, while the shear-dominated model (the transversely loaded beam) was within 5% of the calculated theoretical value. The constant strain field of the tensile problems can be modeled exactly by the numerical model; the numerical results should therefore be exact. The discrepancies can be accounted for by errors in the calculation of frequency from the numerical results. The linear strain field of the transverse model must be modeled by a series of constant strain elements. This is an approximation to the true strain field, so some

The finite cover method (FCM) is extended to elastoplasticity problems. The FCM, which was originally developed under the name of manifold method, has recently been recognized as one of the generalized versions of finite element methods (FEM). Since the mesh for the FCM can be regular and square regardless of the geometry of the structures to be analyzed, structural analysts are released from the burdensome task of generating meshes conforming to physical boundaries. Numerical experiments are carried out to assess the performance of the FCM with such discretization in elastoplasticity problems. In particular, to achieve this accurately, the so-called mortar elements are introduced to impose displacement boundary conditions on the essential boundaries, and displacement compatibility conditions on material interfaces of two-phase materials or on joint surfaces between mutually incompatible meshes. The validity of the mortar approximation is also demonstrated in the elastic-plastic FCM.

According to our previous study, it is confirmed that the Petrov-Galerkin Natural Element Method (PG-NEM) completely resolves the numerical integration inaccuracy in the conventional Bubnov-Galerkin Natural Element Method (BG-NEM). This paper is an extension of the PG-NEM to two-dimensional nonlinear dynamic problems. For the analysis, a constant average acceleration method and a linearized total Lagrangian formulation are introduced with the PG-NEM. At every time step, the grid points are updated and the shape functions are reproduced from the relocated nodal distribution. This process enables the PG-NEM to provide more accurate and robust approximations. Representative numerical experiments were performed with a test Fortran program, and the numerical results confirmed that the PG-NEM effectively and accurately approximates nonlinear dynamic problems.

To date, batch sorption and dynamic column experiments have been performed for many elements as part of site characterization programs. These experiments were often conducted with samples having relatively high liquid/solid ratios (in some cases the solid volume was much smaller than the solution volume). The development of methods for measuring sorption parameters at low liquid/solid ratios was undertaken to attempt to judge whether or not results of saturated experiments are valid for use in performance assessments of sites located in unsaturated rocks. The amount of hydrologic saturation can affect the ionic strength, pH, and redox potential which can in turn affect sorption. In addition, the presence of the gas phase may affect the amount of wetting occurring on the solid's surface. This paper describes experimental procedures which were developed to evaluate the sorption of uranium by silica sand at predetermined levels of unsaturation

A quench detection system for the KSTAR Poloidal Field (PF) coils is indispensable for stable operation because the normal zone overheats during a quench. Recently, a new voltage quench detection method, a combination of Central Difference Averaging (CDA) and Mutual Inductance Compensation (MIK), which compensates mutual inductive voltage more effectively than the conventional voltage detection method, has been suggested and studied. For better cancellation of the mutual induction from adjacent coils by the CDA+MIK method in the KSTAR coil system, the balance coefficients of CDA must first be estimated and adjusted. In this paper, the balance coefficients of CDA for the KSTAR PF coils were numerically estimated. The estimated result was adopted and tested by simulation. The CDA method adopting the balance coefficients effectively eliminated mutual inductive voltage, and it is also expected to improve the performance of the CDA+MIK method for quench detection of the KSTAR PF coils.
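A minimal sketch of the balance-coefficient idea, with entirely hypothetical mutual couplings and current ramps (not actual KSTAR values): choose (a, b) so that the compensated signal V_c - a*V_prev - b*V_next carries as little inductive pickup as possible.

```python
import numpy as np

# Hypothetical mutual couplings (arbitrary units) of three adjacent PF coils
# to four current sources; real values would come from the coil system model.
m_prev = np.array([1.2, 0.6, 0.3, 0.1])   # coil below the monitored one
m_c    = np.array([1.0, 0.8, 0.4, 0.2])   # monitored coil
m_next = np.array([0.7, 0.9, 0.5, 0.3])   # coil above the monitored one

# Balance coefficients (a, b): least-squares fit of m_c by a*m_prev + b*m_next,
# so the CDA signal V_c - a*V_prev - b*V_next cancels inductive pickup.
A = np.column_stack([m_prev, m_next])
(a, b), *_ = np.linalg.lstsq(A, m_c, rcond=None)

dI = np.array([5.0e3, -2.0e3, 1.0e3, 4.0e3])   # source current ramps (A/s)
raw = m_c @ dI                                  # uncompensated inductive voltage
compensated = raw - a * (m_prev @ dI) - b * (m_next @ dI)
print(a, b, raw, compensated)
```

A resistive quench voltage on the monitored coil is untouched by this subtraction, which is what makes the compensated signal a usable quench detector.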

The areas of the habitat and bamboo forest, and the size of the wild giant panda population, have greatly increased, while habitat fragmentation and local population isolation have also intensified in recent years. Accurate evaluation of the ecosystem status of the giant panda distribution area is important for giant panda conservation. The ecosystems of the distribution area and its six mountain ranges were subdivided into habitat and population subsystems based on hierarchical system theory. Using the panda distribution area as the study area and the three national surveys as time nodes, the evolution of the ecosystems was studied using the entropy method, the coefficient of variation, and correlation analysis. We found that, along with continuous improvement, some differences existed in the evolution and present situation of the ecosystems, and the six mountain ranges could be divided into three groups. Ecosystems classified into the same group showed many commonalities, and the difference between groups was considerable. Problems of habitat fragmentation and local population isolation became more serious, resulting in ecosystem degradation. Individualized ecological protection measures should be formulated and implemented in accordance with the conditions in each mountain system to achieve the best results.

Sustainable development is widely accepted in the world. How to reflect the sustainable development capacity of a region is an important issue for enacting policies and plans. An index system for capacity assessment is established by employing the Entropy Weight Coefficient method. The results indicate that the sustainable development capacity of Shandong Province is improving in terms of its economy subsystem, resource subsystem, and society subsystem, whilst degrading in its environment subsystem. Shandong Province has shown a general trend towards sustainable development. However, the sustainable development capacity can be constrained by resources such as energy, land, and water, as well as by environmental protection. These issues are induced by the economic development model, the security of energy supply, the level of new energy development, the end-of-pipe control of pollution, and the level of science and technology commercialization. Efforts are required to accelerate the development of the tertiary industry, the commercialization of high technology, the development of new energy and renewable energy, and the structural optimization of the energy mix. Long-term measures need to be established for ecosystem and environment protection.

This paper describes a new approach for treating the energy variable of the neutron transport equation in the resolved resonance energy range. The aim is to avoid recourse to a case-specific, spatially dependent self-shielding calculation when considering a broad group structure. The method consists of a discontinuous Galerkin discretization of the energy using wavelet-based elements. A Σ_t-orthogonalization of the element basis is presented in order to make the approach tractable for spatially dependent problems. First numerical tests of this method are carried out in a limited framework under the Livolant-Jeanpierre hypotheses in an infinite homogeneous medium. They are mainly focused on the way to construct the wavelet-based element basis. Indeed, the prior selection of these wavelet functions by a thresholding strategy applied to the discrete wavelet transform of a given quantity is a key issue for the convergence rate of the method. The Canuto thresholding approach applied to an approximate flux is found to yield nearly optimal convergence in many cases. In these tests, the capability of such a finite element discretization to represent the flux depression in a resonant region is demonstrated; a relative accuracy of 10^-3 on the flux (in the L2 norm) is reached with fewer than 100 wavelet coefficients per group. (authors)
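The thresholding step can be illustrated with an orthonormal Haar transform (a simple stand-in for the wavelet basis actually used in the paper): coefficients below a fraction of the largest one are discarded, and a flux-like profile with a localized resonance dip typically keeps only a small subset. The signal below is synthetic:

```python
import numpy as np

def haar_dwt(x):
    """Multilevel orthonormal Haar transform of a length-2^k signal."""
    a, details = np.asarray(x, dtype=float), []
    while a.size > 1:
        details.append((a[0::2] - a[1::2]) / np.sqrt(2.0))  # detail coefficients
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)              # approximation
    details.append(a)                                       # final coarse average
    return details

# Synthetic 'flux' with a narrow resonance dip (hypothetical shape).
u = np.linspace(0.0, 1.0, 256)
flux = 1.0 - 0.9 * np.exp(-((u - 0.5) / 0.01) ** 2)

coeffs = np.concatenate(haar_dwt(flux))
kept = int(np.sum(np.abs(coeffs) >= 0.01 * np.abs(coeffs).max()))
print(kept, "of", coeffs.size, "coefficients survive a 1% threshold")
```

Because the transform is orthonormal, it preserves the L2 norm, so the discarded small coefficients carry correspondingly little of the flux.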

Many science and engineering problems exhibit scale disparity and high contrast. The small-scale features cannot be omitted from the physical models because they can affect the macroscopic behavior of the problems. However, resolving all the scales in these problems can be prohibitively expensive. As a consequence, some type of model reduction technique is required to design efficient solution algorithms. For practical purposes, we are interested in mixed finite element problems, as they produce solutions with certain conservative properties. Existing multiscale methods for such problems include the mixed multiscale finite element methods. We show that for complicated problems, the mixed multiscale finite element methods may not be able to produce reliable approximations. This motivates the need for enrichment of coarse spaces. Two enrichment approaches are proposed: one is based on the generalized multiscale finite element methods (GMsFEM), while the other is based on spectral element-based algebraic multigrid (rAMGe). The former, called mixed GMsFEM, is developed for both Darcy's flow and linear elasticity. Application of the algorithm in two-phase flow simulations is demonstrated. For linear elasticity, the algorithm is subtly modified due to the symmetry requirement of the stress tensor. The latter enrichment approach is based on rAMGe. The algorithm differs from GMsFEM in that both the velocity and pressure spaces are coarsened. Due to the multigrid nature of the algorithm, recursive application is available, which results in an efficient multilevel construction of the coarse spaces. Stability and convergence analyses, and exhaustive numerical experiments, are carried out to validate the proposed enrichment approaches.

In this article, we develop a fully discrete finite element method for the nonlinear Schrödinger equation (NLS) with time- and space-fractional derivatives. The time-fractional derivative is described in Caputo's sense and the space-fractional derivative in Riesz's sense. Its stability is rigorously derived, and the convergence estimate is discussed by means of an orthogonal operator. We also extend the method to the two-dimensional time-space-fractional NLS; to avoid iterative solvers at each time step, a linearized scheme is further constructed. Several numerical examples are implemented finally, which confirm the theoretical results as well as illustrate the accuracy of our methods.

The analysis of transient eddy currents in conductors by the "Finite Element Circuit Method" is developed. This method can easily be applied to various geometrical shapes of thin conductors. The eddy currents on the vacuum vessel and the upper and lower support plates of the JT-60 machine (which is now being constructed by the Japan Atomic Energy Research Institute) are calculated by this method. The magnetic field induced by the eddy current is estimated in the domain occupied by the plasma, and the force exerted on the vacuum vessel is also estimated.

The extraction of hadron form factors in lattice QCD using the standard two- and three-point correlator functions has its limitations. One of the most commonly studied sources of systematic error is excited-state contamination, which occurs when correlators are contaminated with results from higher-energy excitations. We apply the variational method to calculate the axial vector current g_A and compare the results to the more commonly used summation and two-exponential fit methods. The results demonstrate that the variational approach offers a more efficient and robust method for the determination of nucleon matrix elements.

Powder metallurgy (PM) has been widely used in several industries, especially the automotive and aerospace industries, and the output of powder metallurgy products grows every year. The mechanical properties of the final product obtained by cold compaction and sintering in powder metallurgy are closely related to the final relative density of the process. The distribution of the relative density in the die is affected by parameters such as compaction velocity, friction coefficient, and temperature. Moreover, most numerical studies utilizing finite element approaches treat the examined environment as a continuous medium with uniformly homogeneous porosity, whereas the Multi-Particle Finite Element Method (MPFEM) treats every particle as an individual body. In MPFEM, each of the particles can be defined as an elastic-plastic deformable body, so the interactions of the particles with each other and with the die wall can be investigated. In this study, each particle was modeled and analyzed as an individual deformable body with 3D tetrahedral elements using the MPFEM approach. This study, therefore, was performed to investigate the effects of different temperatures and compaction velocities on the stress distribution and deformation of copper powders of 200 µm diameter in the compaction process. Furthermore, the 3D MPFEM model utilized the von Mises material model and a constant coefficient of friction of μ = 0.05. In addition to the MPFEM approach, a continuum modelling approach was also performed for comparison purposes.

This study compared three methods for the determination of the slow crack growth susceptibility coefficient (n) of two veneering ceramics (VM7 and d.Sign), two glass-ceramics (Empress and Empress 2) and a glass-infiltrated alumina composite (In-Ceram Alumina). Discs (n = 10) were prepared according to manufacturers' recommendations and polished. The constant stress-rate test was performed at five constant stress rates to calculate n(d). For the indentation fracture test to determine n(IF), Vickers indentations were performed and the crack lengths were measured under an optical microscope. For the constant stress test (performed only for d.Sign for the determination of n(s)), four constant stresses were applied and held until specimen fracture, and the time to failure was recorded. All tests were performed in artificial saliva at 37°C. The lowest n(d) value, 17.2, was obtained for Empress 2, followed by d.Sign (20.5), VM7 (26.5), Empress (30.2), and In-Ceram Alumina (31.1). In-Ceram Alumina and Empress 2 showed the highest n(IF) values, 66.0 and 40.2, respectively. The n(IF) values determined for Empress (25.2), d.Sign (25.6), and VM7 (20.1) were similar. The n(s) value determined for d.Sign was 31.4. It can be concluded that the n values determined for the dental ceramics evaluated were significantly influenced by the test method used. © 2011 Wiley Periodicals, Inc.
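In the constant stress-rate (dynamic fatigue) test described above, n follows from the standard relation σ_f = A·(stress rate)^(1/(n+1)), i.e. from the slope of log failure stress versus log stress rate. A sketch with noise-free synthetic data (A, n and the rates below are illustrative, not values from the study):

```python
import numpy as np

# Dynamic fatigue relation: sigma_f = A * rate**(1/(n+1)),
# so n is recovered from the slope of log(sigma_f) vs log(rate).
n_true, A = 20.0, 120.0                            # illustrative (MPa units assumed)
rates = np.array([0.01, 0.1, 1.0, 10.0, 100.0])    # stress rates, MPa/s
sigma_f = A * rates ** (1.0 / (n_true + 1.0))      # synthetic failure stresses

slope, intercept = np.polyfit(np.log10(rates), np.log10(sigma_f), 1)
n_est = 1.0 / slope - 1.0                          # invert slope = 1/(n+1)
print(round(n_est, 2))
```

With real (scattered) data the same regression is used, and the uncertainty of n comes from the uncertainty of the fitted slope.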

Knowledge of X-ray tube spectral distribution is necessary in theoretical methods of matrix correction, i.e. in both fundamental parameter (FP) methods and theoretical influence coefficient algorithms. Thus, the influence of X-ray tube distribution on the accuracy of the analysis of thin films and bulk samples is presented. The calculations are performed using experimental X-ray tube spectra taken from the literature and theoretical X-ray tube spectra evaluated by three different algorithms proposed by Pella et al. (X-Ray Spectrom. 14 (1985) 125-135), Ebel (X-Ray Spectrom. 28 (1999) 255-266), and Finkelshtein and Pavlova (X-Ray Spectrom. 28 (1999) 27-32). In this study, the Fe-Cr-Ni system is selected as an example and the calculations are performed for X-ray tubes commonly applied in X-ray fluorescence analysis (XRF), i.e., Cr, Mo, Rh and W. The influence of X-ray tube spectra on FP analysis is evaluated when quantification is performed using various types of calibration samples. FP analysis of bulk samples is performed using pure-element bulk standards and multielement bulk standards similar to the analyzed material, whereas for FP analysis of thin films, the bulk and thin pure-element standards are used. For the evaluation of the influence of X-ray tube spectra on XRF analysis performed by theoretical influence coefficient methods, two algorithms for bulk samples are selected, i.e. the Claisse-Quintin (Can. Spectrosc. 12 (1967) 129-134) and COLA algorithms (G.R. Lachance, Paper Presented at the International Conference on Industrial Inorganic Elemental Analysis, Metz, France, June 3, 1981), and two algorithms (constant and linear coefficients) for thin films recently proposed by Sitko (X-Ray Spectrom. 37 (2008) 265-272).

Full Text Available An overview of the development of the hybrid fundamental solution based finite element method (HFS-FEM) and its application to engineering problems is presented in this paper. The framework and formulations of HFS-FEM for the potential problem, plane elasticity, three-dimensional elasticity, thermoelasticity, anisotropic elasticity, and plane piezoelectricity are presented. In this method, two independent assumed fields (intraelement field and auxiliary frame field) are employed. The formulations for all cases are derived from the modified variational functionals and the fundamental solutions to a given problem. Generation of elemental stiffness equations from the modified variational principle is also described. Typical numerical examples are given to demonstrate the validity and performance of the HFS-FEM. Finally, a brief summary of the approach is provided and future trends in this field are identified.

This textbook offers theoretical and practical knowledge of the finite element method. The book equips readers with the skills required to analyze engineering problems using ANSYS®, a commercially available FEA program. Revised and updated, this new edition presents the most current ANSYS® commands and ANSYS® screen shots, as well as modeling steps for each example problem. This self-contained, introductory text minimizes the need for additional reference material by covering both the fundamental topics in finite element methods and advanced topics concerning modeling and analysis. It focuses on the use of ANSYS® through both the Graphics User Interface (GUI) and the ANSYS® Parametric Design Language (APDL). Extensive examples from a range of engineering disciplines are presented in a straightforward, step-by-step fashion. Key topics include: • An introduction to FEM • Fundamentals and analysis capabilities of ANSYS® • Fundamentals of discretization and approximation functions • Modeling techniq...

Recently, graphics processing units (GPUs) have had great success in accelerating numerical computations. We present their application to computations on unstructured meshes such as those in finite element methods. Multiple approaches in assembling and solving sparse linear systems with NVIDIA GPUs and the Compute Unified Device Architecture (CUDA) are presented and discussed. Multiple strategies for efficient use of global, shared, and local memory, methods to achieve memory coalescing, and optimal choice of parameters are introduced. We find that with appropriate preprocessing and arrangement of support data, the GPU coprocessor achieves speedups of 30x or more in comparison to a well optimized serial implementation on the CPU. We also find that the optimal assembly strategy depends on the order of polynomials used in the finite-element discretization.
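The assembly strategies discussed above all reduce to a gather/scatter of local element matrices into a global sparse matrix. A serial sketch of that pattern for a 1D Poisson stiffness matrix, using COO duplicate summation — the property that lets each element contribute independently, which is exactly what a GPU kernel parallelizes:

```python
import numpy as np
import scipy.sparse as sp

# Element-by-element assembly of a 1D Poisson stiffness matrix in COO form.
n_el = 4
nodes = np.linspace(0.0, 1.0, n_el + 1)
h = np.diff(nodes)

rows, cols, vals = [], [], []
for e in range(n_el):
    ke = (1.0 / h[e]) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # local stiffness
    dofs = [e, e + 1]                                          # global DOFs of element e
    for a in range(2):
        for b in range(2):
            rows.append(dofs[a]); cols.append(dofs[b]); vals.append(ke[a, b])

# Duplicate (row, col) entries are summed on conversion -- each element can
# write its contributions independently before a single reduction step.
K = sp.coo_matrix((vals, (rows, cols)), shape=(n_el + 1, n_el + 1)).tocsr()
print(K.toarray())
```

On a GPU, the per-element loop becomes one thread (or thread block) per element, and the duplicate-summing reduction is where coalescing and shared-memory strategies matter.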

For thin shells with general loading, sixteen degrees of freedom have been used for a previous finite element solution procedure using a collocation method instead of the usual variational based procedures. Although the number of elements required was relatively small, the final matrix for the simultaneous solution of all unknowns could nevertheless become large for a complex compound structure. The purpose of the present paper is to demonstrate a method of reducing the final matrix size, so allowing solution for large structures with comparatively small computer storage requirements while retaining the accuracy given by high-order displacement functions. Among the collocation points, a number represent equilibrium conditions which must be satisfied independently of the overall compatibility of forces and deflections for the complete structure. (Auth.)

The use of cellular and composite materials has in recent years become more and more common in all kinds of structural components, and accurate knowledge of the effective properties is therefore essential. In this work the effective properties are determined using the real material microstructure and the finite element method. The material microstructure of the heterogeneous material is non-destructively determined using X-ray microtomography. A software program has been generated which uses the X-ray tomographic data as an input for the mesh generation of the material microstructure. To obtain a proper ... which are used for the determination of the effective properties of the heterogeneous material. Generally, the properties determined using the finite element method coupled with X-ray microtomography are in good agreement with both experimentally determined properties and properties determined using ...

This paper extends previous results on nonlinear Schwarz preconditioning ([4]) to unstructured finite element elliptic problems, now exploiting nonlocal (but small) subspaces. The nonlocal finite element subspaces are associated with subdomains obtained from a non-overlapping element partitioning of the original set of elements and are coarse outside the prescribed element subdomain. The coarsening is based on a modification of the agglomeration-based AMGe method proposed in [8]. Then, the algebraic construction from [9] of the corresponding nonlinear finite element subproblems is applied to generate the subspace-based nonlinear preconditioner. The overall nonlinearly preconditioned problem is solved by an inexact Newton method. Numerical illustration is also provided.

The finite element method (FEM) is a mathematical technique using modern computer technology for stress analysis, and has gradually been used to simulate human body structures in the biomechanical field, especially in research on thoracolumbar spine traumatology. This paper reviews the establishment of the thoracolumbar spine FEM, the verification of the FEM, and the status of thoracolumbar spine FEM research in different fields, and discusses its prospects and value in forensic thoracolumbar traumatology.

This paper presents a numerical method which describes fast dynamic problems in flow transient situations, as in nuclear plants. A finite element formulation has been chosen; it is described by a preprocessor of the CASTEM system, the GIBI code. For these typical flow problems, an A.L.E. formulation of the physical equations is used. Some applications are presented: the well-known shock tube problem, the same problem in the 2D case, and a final application to hydrogen detonation.

The effect of the kinetic properties of a series of extraction systems on the separation of certain elements by liquid chromatography with a free stationary phase is considered. The chromatographic behaviour of europium(III) and iron(III) ions is investigated for systems based on di-2-ethylhexylphosphoric acid and tetraphenylmethylenediphosphine dioxide. The kinetic properties of the extraction systems used are studied by the diffusion cell method with mixing, and the europium(III) and iron(III) mass transfer coefficients are determined.

The effective Nd segregation coefficient in Nd:YAG (Nd-doped Y3Al5O12) crystal growth by the pulling method was determined precisely over the 0-1.3 atom% Nd concentration range at a growth rate of 0.6 mm/hr. Two Nd:YAG crystals (~20 g) were grown from a large melt (~1 kg). Neodymium concentrations in the crystals and residual melts were estimated by fluorescent X-ray analysis, and a value of 0.21 was obtained as the effective segregation coefficient. Next, the optical absorption coefficient of the Nd:YAG crystal at the 5889 Å absorption peak was measured in order to analyze small specimens for Nd by optical absorption measurements. An optical absorption coefficient of 0.97 mm^-1 per atom% was determined in this way. The Nd concentrations calculated from the segregation coefficient agreed well with those obtained by optical absorption measurements at 5889 Å for six successively grown Nd:YAG crystals. Therefore, the obtained segregation coefficient, 0.21, was confirmed as a reliable value for Nd:YAG crystal growth by the pulling method. (auth.)
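With the two calibrations reported above (segregation coefficient 0.21 and absorption coefficient 0.97 mm^-1 per atom%), converting between melt concentration, crystal concentration and measured absorption is a pair of one-line formulas; the helper names below are our own:

```python
# Calibrations taken from the abstract above.
K_ABS = 0.97      # optical absorption at 5889 Angstrom, mm^-1 per atom% Nd
K_SEG = 0.21      # effective segregation coefficient

def nd_from_absorption(alpha_mm):
    """Nd concentration (atom%) from a measured absorption coefficient (mm^-1)."""
    return alpha_mm / K_ABS

def crystal_from_melt(c_melt):
    """Nd concentration in the crystal given the melt concentration (atom%)."""
    return K_SEG * c_melt

print(round(nd_from_absorption(0.97), 3))   # 1 atom% by construction
print(round(crystal_from_melt(1.0), 3))
```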

High order methods are frequently used in computational simulation for their high accuracy. An efficient way to avoid unnecessary computation in smooth regions of the solution is to use adaptive meshes which employ fine grids only in areas where they are needed. Nonconforming spectral elements allow the grid to be flexibly adjusted to satisfy the computational accuracy requirements. The method is suitable for computational simulations of unsteady problems with very disparate length scales or unsteady moving features, such as heat transfer, fluid dynamics or flame combustion. In this work, we select the Mortar Element Method (MEM) to handle the non-conforming interfaces between elements. A new technique is introduced to efficiently implement MEM in 3-D nonconforming meshes. By introducing an "intermediate mortar", the proposed method decomposes the projection between 3-D elements and mortars into two steps. In each step, projection matrices derived in 2-D are used. The two-step method avoids explicitly forming/deriving large projection matrices for 3-D meshes, and also helps to simplify the implementation. This new technique can be used for both h- and p-type adaptation. This method is applied to an unsteady 3-D moving heat source problem. With our new MEM implementation, mesh adaptation is able to efficiently refine the grid near the heat source and coarsen the grid once the heat source passes. The savings in computational work resulting from dynamic mesh adaptation are demonstrated by the reduction of the number of elements used and the CPU time spent. MEM and mesh adaptation, respectively, bring irregularity and dynamics to the computer memory access pattern. Hence, they provide a good way to gauge the performance of computer systems when running scientific applications whose memory access patterns are irregular and unpredictable. We select a 3-D moving heat source problem as the Unstructured Adaptive (UA) grid benchmark, a new component of the NAS Parallel Benchmarks.

1) The concept of an effective removal cross section has been developed in order to compute reactor shielding thicknesses more easily. We built an experimental facility for the purpose of measuring effective removal cross sections, values of which had not been published at that time. The first part of this paper describes the facility used, the computation method applied, and the results obtained. 2) Starting from this concept, we endeavoured to define a removal cross section as a function of energy. This enabled us to use the method for computations bearing on the attenuation of fast neutrons of any spectrum. An experimental verification was carried out for the case of fission neutrons filtered by a substantial thickness of graphite. 3) Finally, we outline a computation method enabling us to determine the sources of capture gamma rays by the age theory, and we give an example of its application in a composite shield. (author)

The aim of this work is to study methods for solving the diffusion equation, based on a primal or mixed-dual finite element discretization and well suited for use on multiprocessor computers; domain decomposition methods are the subject of the main part of this study, the linear systems being solved by the block-Jacobi method. The origin of the diffusion equation is briefly explained, and various variational formulations are recalled. A survey of iterative methods is given. The elimination of the flux or current is treated in the case of a mixed method. Numerical tests are performed on two example reactors in order to compare mixed elements and Lagrange elements. A theoretical study of domain decomposition is carried out for Lagrange finite elements, and convergence conditions for the block-Jacobi method are derived; the dissection decomposition is first the subject of a dedicated numerical analysis. In the case of mixed-dual finite elements, a study is carried out on examples and confirmed by numerical tests performed for the dissection decomposition; furthermore, after being justified, decompositions along axes of symmetry are numerically tested. In the case of a decomposition into two subdomains, the dissection decomposition and the decomposition with an integrated interface are compared. Alternating direction methods are defined; the convergence of those relative to Lagrange elements is shown; in the case of mixed elements, convergence conditions are found. (author)
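The block-Jacobi solver used above can be sketched in a few lines: each subdomain (diagonal block) is solved exactly, with the coupling terms taken from the previous iterate. The 4x4 SPD system and two-block split below are illustrative, not the reactor problem itself:

```python
import numpy as np

# A small SPD system split into two subdomains of two unknowns each.
A = np.array([[ 4.0, -1.0, -1.0,  0.0],
              [-1.0,  4.0,  0.0, -1.0],
              [-1.0,  0.0,  4.0, -1.0],
              [ 0.0, -1.0, -1.0,  4.0]])
b = np.array([1.0, 2.0, 2.0, 1.0])
blocks = [slice(0, 2), slice(2, 4)]

x = np.zeros(4)
for _ in range(100):
    x_new = x.copy()
    for s in blocks:
        # Residual seen by this block with off-block values frozen at the
        # previous iterate (the Jacobi part), then an exact block solve.
        r = b[s] - A[s, :] @ x + A[s, s] @ x[s]
        x_new[s] = np.linalg.solve(A[s, s], r)
    x = x_new

print(np.round(x, 6))
```

Because the block updates are independent within a sweep, each subdomain can be assigned to a different processor, which is the multiprocessor motivation in the abstract.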

The convection heat transfer coefficient is one of the evaluation indexes of brake disc performance. Because results calculated from empirical formulas differ widely, this paper calculates the convection heat transfer coefficient by a fluid-solid coupled simulation method. A model including a brake disc, a car body, a bogie and the flow field was built, meshed and simulated in the software FLUENT. The calculation models were the standard k-epsilon turbulence model and the energy model, and the working condition of the brake disc was considered. The coefficient of the various parts can be obtained through this method. The simulation results show that, at a speed of 160 km/h, the radiating ribs have the maximum convection heat transfer coefficient, 129.6 W/(m2·K), while the average coefficient of the whole disc is 100.4 W/(m2·K); the windward side of the ribs is a positive-pressure area and the leeward side a negative-pressure area, with a maximum pressure of 2663.53 Pa.
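The coefficient the simulation reports is the one defined by Newton's law of cooling, h = q / (A · (T_wall − T_fluid)). A trivial sketch with invented numbers (not values from the paper):

```python
# Convection heat transfer coefficient from Newton's law of cooling.
def h_coeff(q_watts, area_m2, t_wall, t_fluid):
    """h in W/(m^2*K) from heat flow, wetted area and temperature difference."""
    return q_watts / (area_m2 * (t_wall - t_fluid))

# e.g. 500 W removed from a 0.05 m^2 rib surface at 120 C into 20 C air:
print(round(h_coeff(500.0, 0.05, 120.0, 20.0), 1))
```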

A fractional-step splitting scheme breaks the full Navier-Stokes equations into explicit and implicit portions amenable to the calculus of variations. Beginning with the functional forms of the Poisson and Helmholtz equations, we substitute finite expansion series for the dependent variables and derive the matrix equations for the unknown expansion coefficients. This method employs a new splitting scheme which differs from conventional three-step (nonlinear, pressure, viscous) schemes. The nonlinear step appears in the conventional, explicit manner; the difference occurs in the pressure step. Instead of solving for the pressure gradient using the nonlinear velocity, we add the viscous portion of the Navier-Stokes equation from the previous time step to the velocity before solving for the pressure gradient. By combining this 'predicted' pressure gradient with the nonlinear velocity in an explicit term, and using the Crank-Nicolson method for the viscous terms, we develop a Helmholtz equation for the final velocity.

The R-function method is applied to the multidimensional steady-state neutron diffusion equation. Using a variational principle the nested element approximation is formulated. Trial functions taking into account the geometrical shape of material regions are constructed. The influence of both the surrounding regions and the corner singularities at the external boundary is incorporated into the approximate solution. Benchmark calculations show that such an approximation can yield satisfactory results. Moreover, in the case of complex geometry, the presented approach would result in a significant reduction of the number of unknowns compared to other methods

The blade element momentum (BEM) theory is widely used in aerodynamic performance calculations and optimization applications for wind turbines. The fixed-point iterative method is the most commonly utilized technique to solve the BEM equations. However, this method sometimes does not converge... The convergence problems are addressed through both theoretical analysis and numerical tests. A term in the BEM equations that equals zero at a critical inflow angle is the source of the convergence problems. When the initial inflow angle is set larger than the critical inflow angle and the relaxation methodology is adopted...
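The relaxation methodology mentioned above can be illustrated on a toy fixed-point problem: when the map's derivative exceeds 1 in magnitude at the fixed point, the plain iteration x ← g(x) diverges, but the under-relaxed update x ← (1 − ω)x + ω·g(x) contracts. The map g below is illustrative only, not the BEM inflow-angle equations themselves:

```python
# Under-relaxed fixed-point iteration (the stabilization idea from the abstract).
def g(x):
    return -2.0 * x + 3.0          # fixed point x* = 1, but g'(x*) = -2 (unstable)

def relaxed_fixed_point(g, x0, w, n_iter=100):
    x = x0
    for _ in range(n_iter):
        x = (1.0 - w) * x + w * g(x)   # relaxed update; w = 1 is plain iteration
    return x

# Effective multiplier is (1 - w) + w * g'(x*) = 0.7 - 0.6 = 0.1, so it converges.
x_star = relaxed_fixed_point(g, x0=0.0, w=0.3)
print(round(x_star, 8))
```

In the BEM setting, the same idea is applied to the inflow-angle update, with the relaxation factor chosen so the iteration contracts near the critical angle.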

An iterative method for solving the system of nonlinear equations of the drift-diffusion representation for the simulation of semiconductor devices is worked out. The Petrov-Galerkin method is used for the discretization of these equations with bilinear finite elements. It is shown that the numerical scheme is monotone and that there are no oscillations of the solutions in the region of the p-n junction. Numerical simulation results for one semiconductor device are presented. 13 refs.; 3 figs.

The code DELFIN has been implemented for the solution of the neutron diffusion equations in two dimensions, obtained by applying the multigroup energy approximation. The code works with any number of groups and regions, and can be applied to thermal reactors as well as fast reactors. Given the diffusion coefficients, the effective cross sections and the fission spectrum, it computes the system's multiplication constant and the fluxes of each group. The code was implemented using the finite element method, which solves the variational formulation of the equations by the Ritz-Galerkin method with piecewise continuous polynomial functions, here of the Lagrange type on rectangular geometry and up to third degree. The results obtained, and their comparison with results in the literature, lead to the conclusion that it is convenient to use rectangular elements in all cases where the geometry permits, and also demonstrate that the finite element method is better than the finite difference method. (author)

INTRODUCTION: The finite element method (FEM) is an engineering resource applied to calculate the stress and deformation of complex structures, and has been widely used in orthodontic research. As a non-invasive and accurate method that provides quantitative and detailed data on the physiological reactions that may occur in tissues, the FEM allows these tissue responses to be visualized through the observation of areas of stress created by applied orthodontic mechanics. OBJECTIVE: This article aims to review and discuss the stages of finite element method application and its applicability in Orthodontics. RESULTS: The FEM is able to evaluate the stress distribution at the interface between the periodontal ligament and alveolar bone, and the shifting trend in various types of tooth movement when using different types of orthodontic devices. Familiarity with specific software is necessary for this purpose. CONCLUSIONS: The FEM is an important experimental method to answer questions about tooth movement, overcoming the disadvantages of other experimental methods. PMID:25992996

The International Conference on Boundary Element Methods in Engineering was started in 1978 with the following objectives: i) To act as a focus for BE research at a time when the technique was just emerging as a powerful tool for engineering analysis. ii) To attract new as well as established researchers on Boundary Elements, in order to maintain its vitality and originality. iii) To try to relate the Boundary Element Method to other engineering techniques in an effort to help unify the field of engineering analysis, rather than to contribute to its fragmentation. These objectives were achieved during the last 7 conferences, and this meeting - the eighth - has continued to be as innovative and dynamic as any of the previous conferences. Another important aim of the conference is to encourage the participation of researchers from as many different countries as possible, and in this regard it is a policy of the organizers to hold the conference in different locations. It is easy to forget when working on scientific ...

The report describes a test which was conducted to determine the variation in thermal expansion coefficients of specimens from several material heats of Type 304 stainless steel. The purpose of this document is to identify the procedures, equipment, and analysis used in performing this test. From a review of the data used in establishing the values given for the mean coefficient of thermal expansion in the 1968 ASME Boiler and Pressure Vessel Code, Section III, a ±3.3 percent maximum variation was determined for Type 304 CRES in the temperature range of interest. The results of the test reduced this variation to ±0.53 percent, based on a 95/99 percent tolerance interval for the material tested. The testing equipment, procedure, and analysis are not complicated, and this type of test is recommended for applications in which the variation in thermal expansion coefficients for a limited number of material heats is desired.
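The mean coefficient of thermal expansion between two temperatures is α = (L(T) − L(T0)) / (L(T0)·(T − T0)), and the heat-to-heat variation is the percent spread of α across heats. A sketch with synthetic length data (all numbers invented, chosen near the roughly 18e-6/°C typical of Type 304 over this range):

```python
import numpy as np

# Mean coefficient of thermal expansion from gauge lengths at T0 and T.
def mean_cte(L0, L, T0, T):
    return (L - L0) / (L0 * (T - T0))

T0, T = 20.0, 400.0          # deg C, illustrative range
L0 = 100.0                   # mm gauge length at T0
# Lengths at T for three hypothetical material heats (mm):
lengths = np.array([100.684, 100.690, 100.687])

alphas = np.array([mean_cte(L0, L, T0, T) for L in lengths])
spread_pct = 100.0 * (alphas.max() - alphas.min()) / alphas.mean()
print(np.round(alphas * 1e6, 3), round(spread_pct, 2))
```

The report's ±0.53 percent figure is a tolerance-interval statement rather than a raw spread, but the underlying per-heat coefficients are computed this way.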

The diffusion coefficient is one of the parameters necessary for obtaining the extraction exponential coefficients contained in the expression for calculating the H.T.U. (height of a transfer unit) when operating with a continuous organic phase. The organic phase used was tri-n-butyl phosphate (TBP) and varsol in proportions of 35% and 65%, respectively. After each experiment, the uranium content present in each compartment was determined spectrophotometrically, and the quantities contained in the aqueous phases were determined by volumetric titration. It was found that the diffusion coefficient of the uranyl ion is two and a half times smaller in the organic phase, which is attributed to the stronger interactions of the uranyl ions in the organic medium than in the aqueous one.

Plants and animals may be exposed to ionizing radiation from radionuclides in the environment. This paper describes the underlying data and assumptions used to assess doses to biota due to internal and external exposure, for a wide range of masses and shapes living in various habitats. A dosimetric module is implemented which provides a user-friendly and flexible way to assess dose conversion coefficients for aquatic and terrestrial biota. The dose conversion coefficients have been derived for internal and various external exposure scenarios. The dosimetric model is linked to a radionuclide decay and emission database compatible with ICRP Publication 38, thus providing the capability to compute dose conversion coefficients for any nuclide from the database and its daughter nuclides. The dosimetric module has been integrated into the ERICA Tool, but it can also be used as a stand-alone version.

This research looked for a method to determine the binary diffusion coefficient D of salts in liquids (especially in drilling fluids) not only accurately, but in a reasonable time. We chose to use the Taylor Dispersion Method. This technique has been used for measuring binary diffusion coefficients in gaseous, liquid and supercritical fluids, due to its simplicity and accuracy. In the method, the diffusion coefficient is determined by the analysis of the dispersion of a pulse of soluble material in a solvent flowing laminarly through a tube. This work describes the theoretical basis and the experimental requirements for the application of the Taylor Dispersion Method, emphasizing the description of our experiment. A mathematical formulation for both Newtonian and non-Newtonian fluids is presented. The relevant sources of errors are discussed. The experimental procedure and associated analysis are validated by applying the method in well known systems, such as NaCl in water. (author)
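In the long-time Taylor limit, the diffusion coefficient follows from the first two temporal moments of the eluted peak: D = r² t_R / (24 σ_t²), where r is the tube radius, t_R the retention time and σ_t the temporal standard deviation of the peak. A sketch with illustrative numbers of roughly the right order for NaCl in water (the peak width below is assumed, not a measurement from this work):

```python
# Taylor-dispersion estimate of a binary diffusion coefficient from the
# retention time and temporal variance of the dispersed pulse.
def taylor_diffusion(r_tube, t_retention, sigma_t):
    """D in m^2/s; valid in the long-time (Taylor) limit of laminar tube flow."""
    return r_tube**2 * t_retention / (24.0 * sigma_t**2)

r = 0.4e-3        # tube radius, m          (assumed)
t_r = 3000.0      # retention time, s       (assumed)
sig = 115.0       # temporal std. dev. of the peak, s  (assumed)
D = taylor_diffusion(r, t_r, sig)
print(D)
```

The formula comes from σ_x² = 2Kt with the Taylor dispersion coefficient K = r²U²/(48D) and σ_t = σ_x/U; molecular diffusion along the axis is neglected, which holds when K >> D.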

The authors pointed out in a previous report that the DEM (Distinct Element Method) seems to be a very helpful numerical method for examining the stability of fissured rock slopes, in which toppling failure can occur during earthquakes. In this report, the applicability of DEM to such rock slopes is examined through the following comparisons between theoretical results and DEM results, referring to Voegele's work (1982): (1) stability of one block on a slope; (2) failure of a rock block column composed of 10 same-size rectangular blocks; (3) cable force required to make a slope stable. Through these 3 comparisons, DEM appears to give reasonable results. Considering that these problems cannot readily be treated by other numerical methods such as the FEM, DEM seems to be a very useful method for fissured rock slope analysis. (author)

We analyse the nonconforming Virtual Element Method (VEM) for the approximation of elliptic eigenvalue problems. The nonconforming VEM allows the two- and three-dimensional cases to be treated in the same formulation. We present two possible formulations of the discrete problem, derived respectively from the nonstabilized and stabilized approximation of the L2 inner product, and we study the convergence properties of the corresponding discrete eigenvalue problems. The proposed schemes provide a correct approximation of the spectrum, and we prove optimal-order error estimates for the eigenfunctions and the usual double order of convergence for the eigenvalues. Finally, we show a large set of numerical tests supporting the theoretical results, including a comparison with the conforming Virtual Element choice.

The boundary element method (BEM) is a numerical technique used for modeling infinite domains, as is the case for galvanic corrosion analysis. The use of the boundary element analysis system (BEASY) has allowed cathodic protection (CP) interference to be assessed in terms of the normal current density, which is directly proportional to the corrosion rate. This paper presents an analysis of the galvanic corrosion between aluminium and carbon steel in natural sea water. The experimental results were validated against computer simulation with the BEASY program. Finally, it can be concluded that the BEASY software is a very helpful tool for planning before installing any structure, since it gives the possible CP interference on any nearby unprotected metallic structure. (Author)

Distributing integral error uniformly over variable subdomains, or finite elements, is an attractive criterion by which to subdivide a domain for the Galerkin/finite element method when localized steep gradients and high curvatures are to be resolved. Examples are fluid interfaces, shock fronts and other internal layers, as well as fluid mechanical and other boundary layers, e.g. thin-film states at solid walls. The uniform distribution criterion is developed into an adaptive technique for one-dimensional problems. Nodal positions can be updated simultaneously with nodal values during Newton iteration, but it is usually better to adopt nearly optimal nodal positions during Newton iteration upon nodal values. Three illustrative problems are solved: steady convection with diffusion, gradient theory of fluid wetting on a solid surface, and Buckley-Leverett theory of two-phase Darcy flow in porous media.
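
The uniform-distribution idea can be sketched in one dimension by equidistributing a monitor function (e.g. arc length) through its cumulative integral. This is our illustration of the general principle, not the paper's Newton-coupled scheme; the function and problem below are hypothetical.

```python
import numpy as np

def equidistribute(x, w, n_nodes):
    """Place n_nodes so that each subinterval carries an equal share of the
    monitor function w(x) >= 0 (e.g. arc length or a curvature measure),
    by inverting the cumulative integral of w."""
    W = np.concatenate([[0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))])
    targets = np.linspace(0.0, W[-1], n_nodes)
    return np.interp(targets, W, x)

# Resolve a steep tanh front at x = 0.5: nodes crowd where the slope is large.
x = np.linspace(0.0, 1.0, 2001)
u = np.tanh(50.0 * (x - 0.5))
monitor = np.sqrt(1.0 + np.gradient(u, x) ** 2)   # arc-length monitor
nodes = equidistribute(x, monitor, 21)
# Most of the 21 nodes land within 0.1 of the front at x = 0.5.
print(int(np.sum(np.abs(nodes - 0.5) < 0.1)))
```

In an adaptive solver the nodal positions produced this way would then be fed back into the Galerkin discretization, as the abstract describes for the Newton iteration.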

This work reports an alternative methodology for the linear attenuation coefficient (μ) determination of samples of irregular shape, such that it is not necessary to consider the sample thickness. With this methodology, indigenous archaeological ceramic fragments from the region of Londrina, in the north of Paraná, were studied. These ceramic fragments belong to the Kaingang and Tupiguarani traditions. The equation for the determination of μ by the two-media method was obtained and used: μ is determined from the gamma-ray beam attenuation with the ceramics immersed, in turn, in two different media of known linear attenuation coefficient. In addition, the theoretical value of μ was determined with the XCOM computer code, which takes the chemical composition of the ceramics as input and provides a table of mass attenuation coefficient versus energy. In order to validate the two-media method, five ceramic samples of thickness 1.15 cm and 1.87 cm were prepared from homogeneous clay. Using these ceramics, μ was determined by both the conventional attenuation method and the two-media method, and the results and their respective deviations were compared for the two methods. From the results obtained, it was concluded that the two-media method is suitable for the linear attenuation coefficient determination of materials of irregular shape, which is convenient, especially, for archaeometric studies. (author)

The stable finite element discretization of the Stokes problem produces a symmetric indefinite system of linear algebraic equations. A variety of iterative solvers have been proposed for such systems in an attempt to construct efficient, fast, and robust solution techniques. This paper investigates one such iterative solver, the geometric multigrid solver, to find the approximate solution of the indefinite systems. The main ingredient of the multigrid method is the choice of an appropriate smoothing strategy. This study considers the application of different smoothers and compares their effects on the overall performance of the multigrid solver. We study the multigrid method with the following smoothers: distributed Gauss-Seidel, inexact Uzawa, preconditioned MINRES, and Braess-Sarazin type smoothers. A comparative study of the smoothers shows that the Braess-Sarazin smoothers give good performance of the multigrid method. We study the problem in a two-dimensional domain using the stable Hood-Taylor Q2-Q1 pair of rectangular finite elements. We also give the main theoretical convergence results. We present numerical results to demonstrate the efficiency and robustness of the multigrid method and confirm the theoretical results. PMID:25945361
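
The structure of a multigrid cycle and the role of the smoother can be illustrated on a scalar model problem. The sketch below is our illustration, not the paper's Stokes solver (the indefinite Stokes system is precisely why the special smoothers above are needed): one two-grid cycle for the 1-D Poisson equation with weighted-Jacobi smoothing, full-weighting restriction, linear interpolation, and an exact coarse solve.

```python
import numpy as np

def poisson_matrix(n):
    """Standard 3-point finite-difference matrix for -u'' on n interior points."""
    h = 1.0 / (n + 1)
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def two_grid(f, u, nu=3, omega=2.0 / 3.0):
    """One two-grid cycle for -u'' = f on (0,1) with homogeneous Dirichlet
    BCs and n interior points (n odd so the coarse grid nests)."""
    n = len(u); h = 1.0 / (n + 1)
    A = poisson_matrix(n)
    for _ in range(nu):                      # pre-smoothing (weighted Jacobi)
        u = u + omega * (h**2 / 2.0) * (f - A @ u)
    r = f - A @ u
    rc = 0.25 * r[0:-2:2] + 0.5 * r[1:-1:2] + 0.25 * r[2::2]  # full weighting
    ec = np.linalg.solve(poisson_matrix((n - 1) // 2), rc)    # coarse solve
    ecb = np.concatenate([[0.0], ec, [0.0]])
    e = np.zeros(n)
    e[1::2] = ec                             # coarse-grid points
    e[0::2] = 0.5 * (ecb[:-1] + ecb[1:])     # linear interpolation
    u = u + e
    for _ in range(nu):                      # post-smoothing
        u = u + omega * (h**2 / 2.0) * (f - A @ u)
    return u

# Model problem with exact solution sin(pi x): a few cycles suffice.
n = 31
x = np.linspace(0.0, 1.0, n + 2)[1:-1]
f = np.pi**2 * np.sin(np.pi * x)
u = np.zeros(n)
for _ in range(5):
    u = two_grid(f, u)
print(np.max(np.abs(u - np.sin(np.pi * x))) < 1e-2)  # True
```

The smoother damps the oscillatory error components and the coarse correction removes the smooth ones; for Stokes, the smoothers named in the abstract play the same role on the coupled velocity-pressure system.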

The piezoelectric coefficients (d33, −d31, d15, g33, −g31, g15) of soft and hard lead zirconate titanate ceramics were measured by the quasi-static and resonance methods, at temperatures from 20 to 300 °C. The results showed that the piezoelectric coefficients d33, −d31 and d15 obtained by these two methods increased with increasing temperature for both hard and soft PZT ceramics, while the piezoelectric coefficients g33, −g31 and g15 decreased with increasing temperature for both hard and soft PZT ceramics. In this paper, the observed results were also discussed in terms of intrinsic and extrinsic contributions to the piezoelectric response.

Quantum walks are roughly analogous to classical random walks, and like classical walks they have been used to find new (quantum) algorithms. When studying the behavior of large graphs or combinations of graphs, it is useful to find the response of a subgraph to signals of different frequencies. In doing so, we can replace an entire subgraph with a single vertex with variable scattering coefficients. In this paper, a simple technique for quickly finding the scattering coefficients of any discrete-time quantum graph is presented. These scattering coefficients can be expressed entirely in terms of the characteristic polynomial of the graph's time step operator. This is a marked improvement over previous techniques, which have traditionally required finding eigenstates for a given eigenvalue, a far more computationally costly task. With the scattering coefficients we can easily derive the "impulse response", which is the key to predicting the response of a graph to any signal. This gives us a powerful set of tools for rapidly understanding the behavior of graphs or for reducing a large graph into its constituent subgraphs regardless of how they are connected.

We calculate convergent 3-loop Feynman diagrams containing a single massive loop equipped with twist τ=2 local operator insertions corresponding to spin N. They contribute to the massive operator matrix elements in QCD describing the massive Wilson coefficients for deep-inelastic scattering at large virtualities. Diagrams of this kind can be computed using an extended version of the method of hyperlogarithms, originally designed for massless Feynman diagrams without operators. The method is applied to Benz- and V-type graphs, belonging to the genuine 3-loop topologies. In the case of the V-type graphs with five massive propagators, new types of nested sums and iterated integrals emerge. The sums are given in terms of finite binomially and inverse-binomially weighted generalized cyclotomic sums, while the 1-dimensionally iterated integrals are based on a set of about 30 square-root valued letters. We also derive the asymptotic representations of the nested sums and present the solution for N ∈ ℂ. Integrals with a power-like divergence ∝ a^N in N-space, a ∈ ℝ, a > 1, for large values of N emerge. They still possess a representation in x-space, which is given in terms of root-valued iterated integrals in the present case. The method of hyperlogarithms is also used to calculate higher moments for crossed box graphs with different operator insertions.

Compared to other fields of engineering, in mechanical engineering the Discrete Element Method (DEM) is not yet a well-known method. Nevertheless, there is a variety of simulation problems where the method has obvious advantages due to its meshless nature. For problems where several free bodies can collide and break after having been largely deformed, the DEM is the method of choice. Neighborhood search and collision detection between bodies, as well as the separation of large solids into smaller particles, are naturally incorporated in the method. The main DEM algorithm consists of a relatively simple loop that basically contains the three substeps contact detection, force computation and integration. However, there exists a large variety of different algorithms for these substeps from which to compose the optimal method for a given problem. In this contribution, we describe the dynamics of particle systems together with appropriate numerical integration schemes and give an overview of the different types of particle interactions that can be composed to adapt the method to a given simulation problem. Surface triangulations are used to model complicated, non-convex bodies in contact with particle systems. The capabilities of the method are finally demonstrated by means of application examples.

A recent study considers the tribological characteristics of the sintered bushings used in the connecting nodes of the brake lever system of railway cars. Particular attention is paid to sleeves with a low content of alloying elements. The bushings were prepared by a powder metallurgy route using low-alloyed powders of the Fe-Cu-C system. Porosity after sintering was about 20%. Conventionally, before use the material was impregnated with industrial mineral oil in order to improve friction conditions. In the present study we use new lubricating compositions for impregnating the sintered bodies. These compositions consist of a basic mineral oil with the addition of 4 wt.% of ultrasonically dispersed layered tungsten dichalcogenide (WS2 and WSe2) nanoparticles. The tungsten disulphide nanoparticles have a spherical shape with a diameter of 30-50 nm, and the diselenide nanoparticles have a flat shape with mean dimensions of 5x70 nm. Tribological testing of the product was performed. Sintered bushings impregnated with commercial oil and with the nanoparticle suspension were tested under spinning friction conditions in a couple with bearing steel at a load of 210 N and a spinning rate of 200 rpm. The friction test in the couple with steel exhibited a friction moment about 2 times lower than that with commercial oil. The addition of tungsten disulphide nanoparticles also significantly decreases oscillations of the friction torque.

This paper is a summary of the work carried out during the last two years on fuel burnup measurements at RECH-1 for different enrichments, cooling times and burnup rates. The measurements were made in two gamma-spectrometric facilities: one installed in a hot cell and the other inside the secondary pool of the RECH-1, where the element is under 2 meters of water. The hot cell measurements need at least 100 cooling days because of the problems generated by the transport of highly active fuel elements from the reactor to the cell. This was the main reason for using the in-pool facility, with its capability to measure the burnup of fuel elements without having to wait so long, that is, with only 5 cooling days. The accumulated experience in measurements achieved in both facilities and the encouraging results show that this measuring method is reliable. The results agreed well with those obtained using the reactor physics codes, which was the way they were obtained previously. (Cw)

A spectral element method (SEM) is developed to solve polarized radiative transfer in a multidimensional participating medium. The angular discretization is based on the discrete-ordinates approach, and the spatial discretization is conducted by the spectral element approach. Chebyshev polynomials are used to build the basis functions on each element. Four test problems are taken as examples to verify the performance of the SEM, and its effectiveness is demonstrated. The h- and p-convergence characteristics of the SEM are studied. The convergence rate of p-refinement follows an exponential decay trend and is superior to that of h-refinement. The accuracy and efficiency of the higher-order approximation in the SEM are well demonstrated for the solution of the vector radiative transfer equation (VRTE). The angular distributions of brightness temperature and Stokes vector predicted by the SEM agree very well with the benchmark solutions in the references. Numerical results show that the SEM is accurate, flexible and effective for solving multidimensional polarized radiative transfer problems.
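
The exponential p-convergence claimed above can be reproduced in a toy setting. The snippet below is our illustration, not the paper's VRTE solver: interpolating a smooth function at Chebyshev nodes with increasing polynomial degree p makes the maximum error decay exponentially, the hallmark of p-refinement in spectral (element) methods.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Interpolate an analytic function at Chebyshev nodes of increasing degree p
# and record the maximum error on a fine grid.
f = lambda x: 1.0 / (1.0 + 4.0 * x**2)
xx = np.linspace(-1.0, 1.0, 2001)
errors = {}
for p in (4, 8, 16):
    # Chebyshev (Gauss) nodes of the first kind, p + 1 of them.
    nodes = np.cos(np.pi * (2.0 * np.arange(p + 1) + 1.0) / (2.0 * (p + 1)))
    coeffs = C.chebfit(nodes, f(nodes), p)          # degree-p interpolant
    errors[p] = float(np.max(np.abs(C.chebval(xx, coeffs) - f(xx))))
print(errors[4] > errors[8] > errors[16])  # True: error drops with p
```

Doubling p multiplies the number of correct digits rather than merely halving the error, which is why p-refinement outperforms h-refinement for smooth solutions.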

The existing technologies concerning amorphous thin-film semiconductor elements cover the formation of either a thin-film transistor or an amorphous Si solar cell on a substrate. In order to drive a thin-film transistor for electronic equipment control with the output power of an amorphous Si solar cell, it has so far been necessary to drive the transistor with an amorphous solar cell formed on a substrate different from that of the transistor. Accordingly, additional space for the amorphous solar cell, formed on the different substrate, was needed beside the substrate for the thin-film transistor. In order to solve this problem, this invention proposes an operating method for an amorphous thin-film semiconductor element in which, after forming an amorphous Si solar cell by lamination on the insulating coating film which covers the thin-film transistor formed on the substrate, the thin-film transistor is driven by the output power of this solar cell. The invention eliminates the superfluous space and reduces the size of the amorphous thin-film semiconductor element including the power source. (3 figs)

This work focuses on a novel approach for finite element simulations of multi-phase flows which involve an evolving interface with phase change. Modeling problems such as cavitation requires addressing multiple challenges, including the compressibility of the vapor phase and interface physics driven by mass, momentum and energy fluxes. We have developed a mathematically consistent and robust computational approach to address these problems. We use stabilized finite element methods on unstructured meshes to solve the compressible Navier-Stokes equations. An arbitrary Lagrangian-Eulerian formulation is used to handle the interface motion. Our method uses a mesh adaptation strategy to preserve the quality of the volumetric mesh while the interface mesh moves along with the interface. The interface jump conditions are accurately represented using a discontinuous Galerkin method on the conservation laws. Condensation and evaporation rates at the interface are modeled thermodynamically to determine the interface velocity. We will present initial results on bubble cavitation and the behavior of an attached cavitation zone in a separated boundary layer. We acknowledge the support from the Army Research Office (ARO) under ARO Grant W911NF-14-1-0301.

In designing a ship's structure, one should refer to the rules of the applicable classification standards. In this case, in designing a ladder (staircase) on a ferry ship, the design must be reviewed based on the loads during ship operations, whether while sailing or during port operations. The classification rules in ship design refer to the calculation of the structural components described in the classification calculation method, and the structure can be analysed using the finite element method. The classification regulations used in the design of the ferry ship are those of BKI (Bureau of Classification Indonesia); accordingly, the rules for the material composition and the mechanical properties of the material should follow the classification of the vessel in question. The structural analysis used program packages based on the finite element method. The structural analysis of the ladder yielded a strength simulation showing that the structure can withstand a load of 140 kg in static, dynamic, and impact conditions. The analysis also provided safety factor values showing that the structure is kept safe without its strength being excessive.

We combine multi-element polynomial chaos with analysis of variance (ANOVA) functional decomposition to enhance the convergence rate of polynomial chaos in high dimensions and in problems with low stochastic regularity. Specifically, we employ the multi-element probabilistic collocation method (MEPCM), and so we refer to the new method as MEPCM-A. We investigate the dependence of the convergence of MEPCM-A on two decomposition parameters, the polynomial order μ and the effective dimension ν, with ν ≪ N, where N is the nominal dimension. Numerical tests for multi-dimensional integration and for stochastic elliptic problems suggest that ν ≥ μ is required for monotonic convergence of the method. We also employ MEPCM-A to obtain error bars for the piezometric head at the Hanford nuclear waste site under stochastic hydraulic conductivity conditions. Finally, we compare the cost of MEPCM-A against Monte Carlo in several hundred dimensions, and we find MEPCM-A to be more efficient for up to 600 dimensions for a specific multi-dimensional integration problem involving a discontinuous function.

The application of the semi-Lagrangian particle finite element method (SL-PFEM) to the seakeeping simulation of the wave adaptive modular vehicle under spray-generating conditions is presented. The time integration of the Lagrangian advection is done using the explicit integration of the velocity and acceleration along the streamlines (X-IVAS). Despite the suitability of the SL-PFEM for the considered seakeeping application, small time steps were needed in the X-IVAS scheme to control the solution accuracy. A preliminary proposal to overcome this limitation of the X-IVAS scheme for seakeeping simulations is presented.

The proposal concerns an additional protection against leakage of an FE transport container for interim storage of spent fuel elements. The gastight container has a second cover placed at a short distance from the first cover. The intermediate hollow space can be connected to a measuring system which indicates whether part of the trace gas (mostly helium) added as an indicator has escaped from the container due to leakage. The description explains the method and the assembly of the required lines, measuring points, etc. (UWI)

In this contribution, modeling and simulation of a surface acoustic wave (SAW) sensor using the finite element method are presented. The SAW sensor is made from a piezoelectric GaN layer and a SiC substrate. Two different analysis types are investigated, modal and transient, both in 2D only. The goal of the modal analysis is to determine the eigenfrequency of the SAW, which is then used in the transient analysis. In the transient analysis, wave propagation in the SAW sensor is investigated. Both analyses were performed using the FEM code ANSYS.

This report outlines a treatment scheme for separating and concentrating the transuranic (TRU) elements present in aqueous waste solutions stored at Argonne National Laboratory (ANL). The treatment method selected is carrier precipitation. Potential carriers will be evaluated in future laboratory work, beginning with ferric hydroxide and magnetite. The process will result in a supernatant with alpha activity low enough that it can be treated in the existing evaporator/concentrator at ANL. The separated TRU waste will be packaged for shipment to the Waste Isolation Pilot Plant

The blood echo signal obtained by medical ultrasound Doppler devices always includes a vascular wall pulsation signal. The traditional method to remove the wall signal is a high-pass filter, which also removes the low-frequency part of the blood flow signal. Some scholars have put forward a method based on region-selective reduction, which first estimates the wall pulsation signal and then removes it from the mixed signal. Apparently, this method uses the correlation between wavelet coefficients to distinguish the blood signal from the wall signal, but in fact it is a kind of wavelet threshold de-noising method, whose effect is not ideal. In order to obtain a better result, this paper proposes an improved method based on wavelet coefficient correlation to separate the blood signal and the wall signal, and verifies the validity of the algorithm by computer simulation.

Octanol/water partition coefficient (logP) and aqueous solubility (logS) are two important parameters in pharmacology and toxicology studies, and experimental measurements are usually time-consuming and expensive. In the present research, novel methods are presented for the estim...

A ballistic calculation of a full quantum mechanical system is presented to study 2D nanoscale devices. The simulation uses the nonequilibrium Green's function (NEGF) approach to calculate the transport properties of the devices. While most available software uses the finite-difference discretization technique, our work opts to formulate the NEGF calculation using the finite element method (FEM). In calculating a ballistic device, the FEM gives some advantages: the floating boundary condition for ballistic devices is satisfied naturally. This paper gives a detailed finite element formulation of the NEGF calculation applied to a double-gate MOSFET device with a channel length of 10 nm and a body thickness of 3 nm. The potential, electron density, Fermi functions integrated over the transverse energy, local density of states and transmission coefficient of the device have been studied. We found that the transmission coefficient is significantly affected by the top of the barrier between the source and the channel, which in turn depends on the gate control. This supports the claim that ballistic devices can be modelled by the transport properties at the top of the barrier. Hence, the full quantum mechanical calculation presented here confirms the theory of ballistic transport in nanoscale devices.

The analytical method for predicting the dynamic responses of a ship in a collision scenario features speed and accuracy, and the external dynamics constitute an important part. A 3D simplified analytical method is implemented in MATLAB and used to calculate the energy dissipation of ship-ship collisions. The results obtained by the proposed method are then compared with those of a 2D simplified analytical method. The total dissipated energy can be obtained through the proposed analytical method, and the influence of the collision heights, angles and locations on the dissipated energy is discussed on that basis. Furthermore, the effects of restitution on the conservative coefficients and the effects of the conservative coefficients on the energy dissipation are discussed. It is concluded that the proposed 3D analysis yields a smaller energy dissipation than the 2D analysis, and that the collision height has a significant influence on the dissipated energy. When using the proposed simplified method, it is not safe to take the conservative coefficient as zero when the collision angle is greater than 90 degrees. In future research, a good way to obtain a more accurate energy dissipation is to adopt the 3D simplified analytical method instead of the 2D method.
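
The role of restitution in the dissipated energy has a textbook closed form for a central 1-D impact, which may help fix intuition (our illustration; the paper's 3D external-dynamics model is far more general): E_loss = ½·μ·(1 − e²)·v_rel², with μ the reduced mass and e the restitution coefficient. The masses and speeds below are invented.

```python
def collision_energy_loss(m1, m2, v1, v2, e):
    """Kinetic energy dissipated in a central 1-D two-body impact with
    restitution coefficient e (e = 0 fully plastic, e = 1 elastic):
        E_loss = 0.5 * mu * (1 - e**2) * v_rel**2,  mu = m1*m2/(m1 + m2)."""
    mu = m1 * m2 / (m1 + m2)
    return 0.5 * mu * (1.0 - e**2) * (v1 - v2) ** 2

# Two 5000-tonne masses closing at 4 m/s: a fully plastic impact (e = 0)
# dissipates 0.5 * mu * v_rel**2 = 20 MJ, an elastic one (e = 1) nothing.
print(collision_energy_loss(5e6, 5e6, 4.0, 0.0, 0.0))  # 20000000.0
print(collision_energy_loss(5e6, 5e6, 4.0, 0.0, 1.0))  # 0.0
```

The quadratic dependence on (1 − e²) is why treating the restitution-related coefficient as zero can substantially misestimate the dissipated energy, echoing the abstract's caution for oblique collisions.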

Further analysis of droplet migration in a temperature gradient field indicates that different terms can be used to evaluate the solute diffusion coefficient in the liquid (D_L) and that there exists a characteristic curve that can describe the motion of all the droplets for a given composition and temperature gradient. Critical experiments are subsequently conducted in succinonitrile (SCN)-salol and SCN-camphor transparent alloys in order to observe the dynamic migration processes of a number of droplets. The diffusion coefficients derived from the different terms are the same within experimental error. For SCN-salol alloys, D_L = (0.69 ± 0.05) × 10⁻³ mm²/s, and for SCN-camphor alloys, D_L = (0.24 ± 0.02) × 10⁻³ mm²/s.

We expand the application of the enhanced multistage homotopy perturbation method (EMHPM) to solve delay differential equations (DDEs) with constant and variable coefficients. The EMHPM is based on a sequence of subintervals that provide approximate solutions requiring less CPU time than those computed with the dde23 MATLAB numerical integration algorithm. To address the accuracy of our proposed approach, we examine the solutions of several DDEs with constant and variable coefficients, finding predictions in good agreement with the corresponding numerical integration solutions.
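
The subinterval idea underlying such multistage schemes has a classical baseline, the method of steps, which the sketch below implements (our illustration, not the EMHPM itself) for the constant-coefficient DDE y'(t) = a·y(t) + b·y(t − τ) with constant history; the function name and test values are hypothetical.

```python
import numpy as np

def dde_steps(a, b, tau, history, t_end, h=1e-3):
    """Integrate y'(t) = a*y(t) + b*y(t - tau), with y(t) = history for
    t <= 0, by the classical method of steps: the delayed value is read
    off the already-computed grid (tau = m*h aligns with the grid), and
    each step is a trapezoidal (Heun) update."""
    m = int(round(tau / h))
    n = int(round(t_end / h))
    y = np.empty(n + 1)
    y[0] = history
    past = lambda k: history if k < 0 else y[k]
    for k in range(n):
        f1 = a * y[k] + b * past(k - m)
        ypred = y[k] + h * f1                    # predictor
        f2 = a * ypred + b * past(k + 1 - m)
        y[k + 1] = y[k] + 0.5 * h * (f1 + f2)    # trapezoidal corrector
    return y[-1]

# Classic test case y'(t) = -y(t-1), y = 1 for t <= 0: the exact solution
# is piecewise polynomial, with y(1) = 0 and y(2) = -1/2.
print(abs(dde_steps(0.0, -1.0, 1.0, 1.0, 2.0) + 0.5) < 1e-5)  # True
```

The EMHPM replaces the per-subinterval numerical update with a homotopy-perturbation series on each subinterval, which is where its reported CPU-time advantage comes from.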

The aerosol extinction coefficient profile is an essential parameter for atmospheric radiation models. It is difficult to obtain a high signal-to-noise ratio (SNR) with backscattering lidar from the ground to the tropopause, especially at near range. This SNR problem can be solved by combining side-scattering and backscattering lidar. Using Raman-scattering lidar, the aerosol extinction-to-backscatter ratio (lidar ratio) can be obtained. Based on a combined side-scattering, backscattering and Raman-scattering lidar system, the aerosol extinction coefficient is retrieved precisely from the earth's surface to the tropopause. Case studies show this method is reasonable and feasible.

We present an approach to the impulsive response method for solving linear constant-coefficient ordinary differential equations of any order based on the factorization of the differential operator. The approach is elementary; we only assume a basic knowledge of calculus and linear algebra. In particular, we avoid the use of distribution theory, as well as of the other more advanced approaches: Laplace transform, linear systems, the general theory of linear equations with variable coefficients, and variation of parameters. The approach presented here can be used in a first course on differential equations for science and engineering majors.
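
A concrete instance of the factorization approach may be useful (our example, not taken from the paper): for p(D) = D² − 3D + 2 = (D − 1)(D − 2), the impulse response is h(t) = e^{2t} − e^{t}, and convolving h with the forcing yields the particular solution with zero initial data. The snippet checks the closed form numerically.

```python
import numpy as np

# p(D) = D^2 - 3D + 2 = (D - 1)(D - 2): with distinct roots r_k, the
# factorization gives the impulse response h(t) = sum_k exp(r_k t)/p'(r_k),
# here h(t) = exp(2t) - exp(t), satisfying h(0) = 0, h'(0) = 1.
h = lambda t: np.exp(2.0 * t) - np.exp(t)

# Particular solution with zero initial data: y(t) = int_0^t h(t-s) f(s) ds.
# For the forcing f(t) = exp(3t), doing the convolution integral by hand
# gives y(t) = exp(3t)/2 - exp(2t) + exp(t)/2.
t = 1.0
s = np.linspace(0.0, t, 20001)
g = h(t - s) * np.exp(3.0 * s)
y_num = float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(s)))   # trapezoid rule
y_exact = 0.5 * np.exp(3.0 * t) - np.exp(2.0 * t) + 0.5 * np.exp(t)
print(abs(y_num - y_exact) < 1e-6)  # True
```

The same recipe works for any order once the operator is factored, which is the pedagogical point of the method: no Laplace transform or distribution theory is needed.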

A least squares principle is described which uses a penalty function treatment of boundary and interface conditions. Appropriate choices of the trial functions and vectors employed in a dual representation of an approximate solution establish complementary principles for the diffusion equation. A geometrical interpretation of the principles provides weighted residual methods for diffusion theory, thus establishing a unification of least squares, variational and weighted residual methods. The complementary principles are used with either a trial function for the flux or a trial vector for the current to establish, for regular meshes, a connection between finite element, finite difference and nodal methods, which can be exact if the mesh pitches are chosen appropriately. Whereas the coefficients in the usual nodal equations have to be determined iteratively, those derived via the complementary principles are given explicitly in terms of the data. For the further development of the connection between finite element, finite difference and nodal methods, some hybrid variational methods are described which employ both a trial function and a trial vector. (author)

The HAL QCD method is a method to construct a potential (the HAL QCD potential) that reproduces the NN scattering phase shift faithfully to QCD. The HAL QCD potential is obtained from QCD by eliminating the degrees of freedom of quarks and gluons and keeping only two particular hadrons. Therefore, in the effective quantum mechanics of two nucleons defined by the HAL QCD potential, the conserved current consists not only of the nucleon current but also of an extra current originating from the potential (the two-body current). Though the form of the two-body current is closely related to the potential, it is not straightforward to extract the former from the latter. In this work, we derive the current matrix element formula in the quantum mechanics defined by the HAL QCD potential. As a first step, we focus on the non-relativistic case. To give an explicit example, we consider a second-quantized non-relativistic two-channel coupling model, which we refer to as the original model. From the original model, the HAL QCD potential for the open channel is constructed by eliminating the closed channel in the elastic two-particle scattering region. The current matrix element formula is derived by demanding that the effective quantum mechanics defined by the HAL QCD potential respond to the external field in the same way as the original two-channel coupling model.

This work uses a discontinuous-Galerkin spectral-element method (DGSEM) to solve the compressible Navier-Stokes equations [1-3]. The inviscid flux is computed using the approximate Riemann solver of Roe [4]. The viscous fluxes are computed using the second form of Bassi and Rebay (BR2) [5] in a manner consistent with the spectral-element approximation. The method of lines with the classical 4th-order explicit Runge-Kutta scheme is used for time integration. Results for polynomial orders up to p = 15 (16th order) are presented. The code is parallelized using the Message Passing Interface (MPI). The computations presented in this work were performed on the Sandy Bridge nodes of the NASA Pleiades supercomputer at NASA Ames Research Center. Each Sandy Bridge node consists of 2 eight-core Intel Xeon E5-2670 processors with a clock speed of 2.6 GHz and 2 GB of memory per core. On a Sandy Bridge node the Tau Benchmark [6] runs in a time of 7.6 s.

After outlining the space and time discretization methods used in the N3S thermal hydraulic code developed at EDF/NHL, we describe the capabilities of the peripheral version, the Adaptive Mesh, which comprises two separate parts: the error indicator computation and a module for subdividing elements, usable by the solid dynamics code ASTER and the electromagnetism code TRIFOU, also developed by R and DD. The error indicators implemented in N3S are described. They consist of a projection indicator quantifying the space error in laminar or turbulent flow calculations and a Navier-Stokes residue indicator calculated on each element. The method for subdivision of triangles into four sub-triangles and of tetrahedra into eight sub-tetrahedra is then presented with its advantages and drawbacks. It is illustrated by examples showing the efficiency of the module. The last example concerns the 2D case of flow behind a backward-facing step. (authors). 9 refs., 5 figs., 1 tab.

Heat conduction problems are very often found in science and engineering fields. It is of crucial importance to obtain quantitative descriptions of this important physical phenomenon. This paper discusses the development and application of a numerical formulation and computation that can be used to analyze heat conduction problems. The mathematical equation which governs the physical behaviour of heat conduction is a second-order partial differential equation. The numerical resolution used in this paper is performed using the finite element method combined with Fourier series, which is known as the semi-analytical finite element method. The numerical solution results in simultaneous algebraic equations, which are solved using Gauss elimination. The computer implementation is carried out in FORTRAN. In the final part of the paper, a heat conduction problem in a rectangular plate domain with isothermal boundary conditions on its edges is solved to show the application of the computer program developed, and a comparison with the analytical solution is discussed to assess the accuracy of the numerical solution obtained.
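
The finite-element-plus-Gauss-elimination pipeline described above can be sketched in one dimension (our illustration in Python rather than the paper's FORTRAN, and for a 1-D rod rather than the rectangular plate): assemble linear-element stiffness and load, solve the interior system by Gaussian elimination, and compare with the analytical solution.

```python
import numpy as np

def fem_heat_1d(k, q, L, n_el):
    """Steady 1-D heat conduction -k u'' = q on (0, L) with u(0) = u(L) = 0,
    discretized with n_el linear finite elements. Element stiffness
    (k/h)*[[1,-1],[-1,1]] and consistent load (q*h/2)*[1,1] are assembled,
    and the interior system is solved by Gaussian elimination."""
    h = L / n_el
    n = n_el + 1
    K = np.zeros((n, n)); F = np.zeros(n)
    for e in range(n_el):
        K[e:e + 2, e:e + 2] += (k / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
        F[e:e + 2] += q * h / 2.0
    u = np.zeros(n)
    u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], F[1:-1])  # Dirichlet BCs
    return np.linspace(0.0, L, n), u

# For constant q the linear-element solution is nodally exact:
# analytical solution u(x) = q x (L - x) / (2 k).
x, u = fem_heat_1d(k=2.0, q=8.0, L=1.0, n_el=8)
print(np.max(np.abs(u - 2.0 * x * (1.0 - x))) < 1e-10)  # True
```

The 2-D plate problem in the paper follows the same assemble-and-eliminate structure, with the Fourier series carrying the second spatial direction semi-analytically.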

In this paper, the stress behavior of shallow tunnels under simultaneous non-uniform surface traction and symmetric gravity loading was studied using a direct boundary element method (BEM). The existing full-plane elastostatic fundamental solutions for the displacement and stress fields were used and implemented in a developed algorithm. The cross-section of the tunnel was considered in circular, square, and horseshoe shapes, and the lateral coefficient of the domain was assumed to be unity. The double-node procedure of the BEM was applied at the corners to improve the modeling of sudden traction changes. The results showed that the method is a powerful tool for modeling underground openings under various external as well as internal loads. Eccentric loads significantly influenced the stress pattern around the tunnel. The findings can be practically used in completing and modifying regulations for the stability assessment of shallow tunnels.

The vortex element method makes it possible to simulate unsteady hydrodynamic processes in an incompressible medium while accounting for the evolution of the vortex sheet, including deformation or motion of the body or parts of the structure. The software package MVE3D was developed to calculate hydrodynamic characteristics with this method. The vortex element (VE) in the program is a symmetrical vorton-cut, and closed vorton frames are used to satisfy the boundary condition on the surface. With this software system, incompressible flow around a cylindrical body of elongation L/D = 13 with a spherically blunted nose was modeled at an angle of attack of 10°. The distribution of the pressure coefficient along the upper and lower generatrices of the body surface was analyzed, and the calculated results were compared with known experimental data. Computational schemes with different numbers of vorton frames and different VE radii were considered. The calculations made it possible to establish the degree of surface discretization needed to reproduce the experimental results closely. It was shown that adequately reproducing the pressure distribution in the transition region between the spherical and cylindrical surfaces on the windward side requires a high degree of discretization. Based on these results, the discretization scheme of the body surface may need to be refined to describe the flow vorticity more accurately in regions with abrupt changes in the geometry of the streamlined body.

This is the first of a series of papers devoted to the study of h-p spec- .... element functions defined on mesh elements in the new system of variables with a uni- ... the spectral element functions on these elements and give construction of the stability .... By Hm( ), we denote the usual Sobolev space of integer order m ≥ 0 ...

A new perfectly matched layer (PML) formulation for the time domain finite element method is described and tested for Maxwell's equations. In particular, we focus on the time integration scheme, which is based on Galerkin's method with a temporally piecewise linear expansion of the electric field. The time stepping scheme is constructed by forming a linear combination of exact and trapezoidal integration applied to the temporal weak form, which reduces to the well-known Newmark scheme in the case without PML. Extensive numerical tests on scattering from infinitely long metal cylinders in two dimensions show good accuracy and no signs of instabilities. For a circular cylinder, the proposed scheme indicates the expected second order convergence toward the analytic solution and gives less than 2% root-mean-square error in the bistatic radar cross section (RCS) for resolutions with more than 10 points per wavelength. An ogival cylinder, which has sharp corners supporting field singularities, shows similar accuracy in the monostatic RCS.
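For background, the Newmark family that the scheme above reduces to can be sketched for an undamped linear system M u″ + K u = f(t). This is a generic illustration under assumed names, not the paper's FEM-PML implementation:

```python
import numpy as np

def newmark(M, K, f, u0, v0, dt, n_steps, beta=0.25, gamma=0.5):
    """Newmark time integration for M u'' + K u = f(t), no damping.
    beta=1/4, gamma=1/2 is the unconditionally stable average-acceleration
    (trapezoidal) member of the family."""
    u, v = u0.copy(), v0.copy()
    a = np.linalg.solve(M, f(0.0) - K @ u)      # consistent initial acceleration
    S = M + beta * dt**2 * K                    # effective system matrix
    for n in range(n_steps):
        t1 = (n + 1) * dt
        # displacement and velocity predictors
        u_pred = u + dt * v + dt**2 * (0.5 - beta) * a
        v_pred = v + dt * (1.0 - gamma) * a
        # solve for the new acceleration, then correct
        a_new = np.linalg.solve(S, f(t1) - K @ u_pred)
        u = u_pred + beta * dt**2 * a_new
        v = v_pred + gamma * dt * a_new
        a = a_new
    return u, v
```

For a unit oscillator (M = K = 1) integrated over one period, the scheme conserves amplitude and accumulates only a small phase error, consistent with its second-order accuracy.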

The computation of diffusion coefficients in molecular systems ranks among the most useful applications of equilibrium molecular dynamics simulations. However, when dealing with the problem of fluid diffusion through vanishingly thin interfaces, classical techniques are not applicable, because the volume of space in which molecules diffuse is ill-defined. In such conditions, non-equilibrium techniques allow for the computation of transport coefficients per unit interface width, but their weak point lies in their inability to isolate the contribution of the different physical mechanisms prone to impact the flux of permeating molecules. In this work, we propose a simple and accurate method to compute the diffusional transport coefficient of a pure fluid through a planar interface from equilibrium molecular dynamics simulations, in the form of a diffusion coefficient per unit interface width. In order to demonstrate its validity and accuracy, we apply our method to the case study of a dilute gas diffusing through a smoothly repulsive single-layer porous solid. We believe this complementary technique can benefit the interpretation of the results obtained on single-layer membranes by means of complex non-equilibrium methods.
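For contrast, the standard equilibrium route for bulk fluids, which the abstract says breaks down at thin interfaces, estimates D from the slope of the mean-square displacement via the Einstein relation. A hedged, generic sketch (not the authors' interface method):

```python
import numpy as np

def msd_diffusion_coefficient(positions, dt):
    """Estimate D from the Einstein relation MSD(t) = 2*d*D*t.
    positions: array of shape (n_frames, n_particles, d)."""
    n_frames, _, d = positions.shape
    disp = positions - positions[0]                 # displacement from frame 0
    msd = (disp**2).sum(axis=2).mean(axis=1)        # average over particles
    t = np.arange(n_frames) * dt
    slope = np.polyfit(t, msd, 1)[0]                # linear fit of MSD vs time
    return slope / (2 * d)
```

Applied to an uncorrelated random walk with per-step variance sigma² per dimension, this recovers D = sigma²/(2 dt) to within statistical error.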

This study proposes an element size selection method named the 'Impact-Meshing (IM) method' for finite element wave propagation analysis models. It is characterized by (1) determination of the element division of the model from the strain energy in the whole model, and (2) static analysis (dynamic analysis in a single time step) with boundary conditions that give the maximum change of displacement in the time increment and the inertial (impact) force caused by the displacement change. In this paper, an example of application of the IM method to a 3D ultrasonic wave propagation problem in an elastic solid is described. The examples showed that the analysis result with a model determined by the IM method was convergent, and that the calculation time for determining the element subdivision was reduced to about 1/6 by the IM method, which does not require determining the subdivision through a dynamic transient analysis with 100 time steps. (author)

An approximate formula for the computation of the free-bound emission coefficient for hydrogenic ions is presented. The approximation is obtained through a manipulation of the (free-bound) Gaunt factor which intentionally distinguishes the dependence on frequency from the dependence on temperature and ionic composition. Numerical tests indicate that the derived formula is very precise, fast and easy to use, making the calculation of the free-bound contribution from an ionized region of varying temperature and ionic composition a very simple and time-saving task.

We consider problems governed by a linear elliptic equation with coefficients that vary across internal interfaces. The solution and its normal derivative can undergo significant variations across these internal boundaries. We present a compact finite-difference scheme on a tree-based adaptive grid that can be efficiently solved using a natively parallel data structure. The main idea is to optimize the truncation error of the discretization scheme as a function of the local grid configuration to achieve second-order accuracy. Numerical illustrations are presented in two- and three-dimensional configurations.

The use of the finite element method for solving two-dimensional static neutron diffusion problems in hexagonal reactor configurations is considered. It is investigated as a possible alternative to the low-order finite difference method. Various piecewise polynomial spaces are examined for their use in hexagonal problems. The central questions which arise in the design of these spaces are the degree of incompleteness permissible and the advantages of using a low-order space fine-mesh approach over that of a high-order space coarse-mesh one. There is also the question of the degree of smoothness required. Two schemes for the construction of spaces are described, and a number of specific spaces, constructed with the questions outlined above in mind, are presented. They range from a complete non-Lagrangian, non-Hermite quadratic space to an incomplete ninth-order space. Results are presented for two-dimensional problems typical of a small high temperature gas-cooled reactor. From the results it is concluded that the space used should at least include the complete linear one. Complete spaces are to be preferred to totally incomplete ones. Once function continuity is imposed, any additional degree of smoothness is of secondary importance. For flux shapes typical of the small high temperature gas-cooled reactor, the linear space fine-mesh alternative is to be preferred to the perturbation quadratic space coarse-mesh one, and the low-order finite difference method is to be preferred over both finite element schemes.

ArcGIS 10.0 were calculated. The basis of the comparison was the isohyetal method, because it reflects the relief and takes into account the effect of the rain gauges, and can therefore represent the rainfall data and regional conditions completely. The isohyetal method was the most accurate in estimating mean precipitation. Cross-validation is usually used to compare the accuracy of interpolation methods; in this study, the root mean square error (RMSE) was used as the validation criterion. Meanwhile, the effects of altitude were neglected for two reasons: first, the partial correlation coefficient of the rainfall/altitude gradient was weak, and second, the storm data were not accessible. Conclusions: In this study, the estimation of areal rainfall by Galerkin's method was an innovative step. The case study was the Mashhad basin (9909 km2), which includes 42 rain gauges. Comparison with other methods indicated that Galerkin's method was more efficient than the arithmetic mean and gave more accurate results. The result of Galerkin's method was similar to the Kriging, IDW and Thiessen methods. Unlike the other methods, the finite element mesh can also be used for calculating runoff, sediment and temperature, and it does not need station weights. Even within one network the number of interpolation points can be varied, so that in a rugged region the number can be increased with little extra effort, while in a more uniform region fewer points are necessary.
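The cross-validation criterion mentioned above, RMSE between observed and interpolated values at withheld gauges, is simply:

```python
import numpy as np

def rmse(observed, predicted):
    """Root mean square error, used as the cross-validation criterion
    when comparing interpolation methods."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.sqrt(np.mean((observed - predicted) ** 2)))
```

A lower RMSE over the withheld rain gauges indicates a more accurate interpolation method.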

An approximate implementation of the multiconfiguration time-dependent Hartree-Fock method is proposed, in which the matrix of configuration-interaction coefficients is decomposed into a product of matrices of smaller dimension. The applicability of this method, in which all the configurations are kept in the expansion of the wave function while the configuration-interaction coefficients are approximately calculated, is discussed by showing results for three model systems: a one-dimensional model of a beryllium atom, a one-dimensional model of a carbon atom, and a one-dimensional model of a chain of four hydrogen atoms. The time-dependent electronic dynamics induced by a few-cycle, long-wavelength laser pulse is found to be well described at a lower computational cost compared to the standard multiconfiguration time-dependent Hartree-Fock treatment. Drawbacks of the method are also discussed.

Near-real-time knowledge of the surface drag coefficient of drifting pack ice is vital for predicting its motion, and since this is not routinely available from measurements it must be replaced by estimates. Hence, a method was developed for estimating this variable, as well as the drag coefficient at the water/ice interface and the ice thickness, for drifting open pack ice. These estimates were derived from three-day sequences of LANDSAT-1 MSS images and surface weather charts, and from the observed minima and maxima of these variables. The method was tested with four data sets in the southeastern Beaufort Sea. Acceptable results were obtained for three data sets. Routine application of the method depends on the availability of data from an all-weather airborne or spaceborne remote sensing system producing images with high geometric fidelity and high resolution.

The phenomenon of jump is one of the important external manifestations of hydrological variability under environmental change, representing the adaptation of nonlinear hydrological systems to external disturbances. Presently, related studies mainly focus on methods for identifying the positions and times of jumps in hydrological time series. In contrast, few studies have focused on the quantitative description and classification of jump degree in hydrological time series, which makes it difficult to understand environmental changes and evaluate their potential impacts. Here, we propose a theoretically reliable and easy-to-apply method for classifying the jump degree in hydrological time series, using the correlation coefficient as the basic index. Statistical tests verified the accuracy, reasonability, and applicability of this method. The relationship between the correlation coefficient and the jump degree of a series was derived mathematically. Thresholds of the correlation coefficient under different statistical significance levels were then chosen, based on which the jump degree can be classified into five levels: no, weak, moderate, strong and very strong. Finally, our method was applied to five different observed hydrological time series with diverse geographic and hydrological conditions in China. The classified jump degrees of these series accorded closely with their underlying hydrological mechanisms, indicating the practicability of our method.
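The correlation-coefficient index above can be sketched as follows: correlate the series with a step function switching at the candidate jump point and map |r| to a qualitative class. The thresholds here are illustrative placeholders, not the significance-level-derived values of the paper:

```python
import numpy as np

# Illustrative thresholds (hypothetical; the paper derives its thresholds
# from statistical significance levels of the correlation coefficient).
LEVELS = [(0.2, "no"), (0.4, "weak"), (0.6, "moderate"),
          (0.8, "strong"), (1.0, "very strong")]

def jump_degree(series, jump_index):
    """Correlate the series with a 0/1 step function switching at
    jump_index and classify |r| into one of five jump-degree levels."""
    x = np.asarray(series, dtype=float)
    step = np.where(np.arange(len(x)) < jump_index, 0.0, 1.0)
    r = np.corrcoef(x, step)[0, 1]
    for upper, label in LEVELS:
        if abs(r) <= upper:
            return abs(r), label
    return abs(r), "very strong"    # guard against float round-off above 1
```

A series that is a clean step yields |r| near 1 ("very strong"), while a series with no level shift at the candidate point yields |r| near 0 ("no").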

I use Monte Carlo simulations and phantom measurements to characterize a probe with adjacent optical fibres for diffuse reflectance spectroscopy during stereotactic surgery in the brain. Simulations and measurements have been fitted to a modified Beer-Lambert model for light transport in order to be able to quantify chromophore content based on clinically measured spectra in brain tissue. It was found that it is important to take the impact of the light absorption into account when calculating the apparent optical path length, l_p, for the photons in order to get good estimates of the absorption coefficient, μ_a. The optical path length was found to be well fitted by the equation l_p = a + b ln(I_s) + c ln(μ_a) + d ln(I_s) ln(μ_a), where I_s is the reflected light intensity for scattering alone (i.e., zero absorption). Although the coefficients a-d calculated in this study are specific to the probe used here, the general form of the equation should be applicable to similar probes.
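The fitted path-length equation translates directly into code. The coefficient values a-d are probe-specific fits and are left as inputs; any numbers used below are purely illustrative:

```python
import math

def apparent_path_length(I_s, mu_a, a, b, c, d):
    """Apparent optical path length
    l_p = a + b ln(I_s) + c ln(mu_a) + d ln(I_s) ln(mu_a),
    where I_s is the zero-absorption (scattering-only) intensity and
    a-d are probe-specific fitted coefficients."""
    return (a + b * math.log(I_s) + c * math.log(mu_a)
            + d * math.log(I_s) * math.log(mu_a))
```

With l_p in hand, an absorption estimate follows from the modified Beer-Lambert relation between measured and scattering-only intensities.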

The boundary-integral equation method is well suited for the calculation of the dynamic-stiffness matrix of foundations embedded in a layered visco-elastic halfspace (or a transmitting boundary of arbitrary shape), which represents an unbounded domain. It also allows pile groups to be analyzed, taking pile-soil-pile interaction into account. The discretization of this boundary-element method is restricted to the structure-soil interface. All trial functions satisfy exactly the field equations and the radiation condition at infinity. In the indirect boundary-element method, distributed source loads of initially unknown intensities act on a source line located in the excavated part of the soil and are determined such that the prescribed boundary conditions on the structure-soil interface are satisfied in an average sense. In the two-dimensional case the variables are expanded in a Fourier integral in the wave number domain, while in three dimensions Fourier series in the circumferential direction and Bessel functions of the wave number in the radial direction are selected. Accurate results arise with a small number of parameters of the loads acting on a source line, which should coincide with the structure-soil interface. In a parametric study, the dynamic-stiffness matrices of rectangular foundations of various aspect ratios embedded in a halfplane and in a layer built-in at its base are calculated. For the halfplane, the spring coefficients for the translational directions hardly depend on the embedment, while the corresponding damping coefficients increase for larger embedments, this tendency being more pronounced in the horizontal direction. (orig.)

This paper shows the pattern of the electrical losses of a 7.5 kW squirrel-cage induction motor in balanced and unbalanced conditions, modelling the motor using the finite element method and comparing the results with experimental data obtained in the laboratory for the selected motor. Magnetic flux density variation was analysed at four places in the machine. The results showed that the undervoltage unbalanced condition was the most critical from the point of view of the motor's total losses. Regarding the variation of losses in parts of the motor, a constant iron loss pattern was found when the load was changed for each type of voltage supply, and the place where the losses rose the most was the machine's rotor.

Recently, the stability of slopes during earthquakes has become an important engineering problem, especially in the earthquake-proof design of nuclear power plants. For fissured rock slopes, however, some problems remain unresolved because such slopes cannot be treated as continua. The authors have been investigating the toppling failure of slopes from a point of view that regards a fissured rock mass as an assemblage of rigid blocks. The DEM (Distinct Element Method) proposed by Cundall (1974) is very helpful for such an investigation. In this paper, therefore, the applicability of DEM to toppling failure of slopes is examined through comparison between DEM results and theoretical or experimental results for three simple models. (author)

The work discussed here covers turbulent flow calculations using Galerkin's finite element method. Turbulence effects on the mean field are taken into account by the k-epsilon model with two evolution equations: one for the kinetic energy of the turbulence, and one for the energy dissipation rate. The wall zone is covered by wall laws, and by Reichardt's law in particular. A law is advanced for the epsilon input profile, and a numerical solution is proposed for the physically aberrant values of k and epsilon generated by the model. Single-equation models are reviewed in comparison with the k-epsilon model. A comparison between calculated and analytical solutions, or calculated and experimental results, is presented for decaying turbulence behind a grid, for the flow between parallel flat plates at three Reynolds numbers, and for a backward-facing step. This part contains graphs and curves corresponding to the results of the calculations presented in part one.

This paper develops a numerical solution to the radiative heat transfer problem coupled with conduction in an absorbing, emitting and isotropically scattering medium with irregular geometries using the natural element method (NEM). The walls of the enclosures, having temperature and mixed boundary conditions, are considered to be opaque, diffuse and gray. The NEM, a meshless method, is a new numerical scheme in the field of computational mechanics. Different from most other meshless methods, such as the element-free Galerkin method or those based on radial basis functions, the shape functions used in NEM are constructed by natural neighbor interpolation; they are strictly interpolant and the essential boundary conditions can be imposed directly. The natural element solutions for the coupled heat transfer problem with mixed boundary conditions have been validated by comparison with Monte Carlo method (MCM) results generated by the authors. For the validation of the NEM solution to radiative heat transfer in a semicircular medium with an inner circle, the NEM results have been compared with those reported in the literature. For pure radiative transfer, an upwind scheme is employed to overcome the oscillatory behavior of the solutions in some conditions. The steady-state and transient heat transfer problems combining radiation and conduction in the semicircular enclosure with an inner circle are studied. The effects of various parameters, such as the extinction coefficient, the scattering albedo, the conduction-radiation parameter and the boundary emissivity, on the radiative and conductive heat fluxes and transient temperature distributions are analyzed.

In numerical models to simulate the dispersion of anthropogenic radionuclides in the marine environment, the sediment-seawater distribution coefficient (K_d) for various elements is an important parameter. In coastal regions, K_d values are largely dependent on hydrographic conditions and physicochemical characteristics of sediment. Here we report K_d values for 36 elements (Na, Mg, Al, K, Ca, V, Mn, Fe, Co, Ni, Cu, Se, Rb, Sr, Y, Mo, Cd, I, Cs, rare earth elements, Pb, ²³²Th and ²³⁸U) in seawater and sediment samples from 19 Japanese coastal regions, and we examine the factors controlling the variability of these K_d values by investigating their relationships to hydrographic conditions and sediment characteristics. There was large variability in K_d values for Al, Mn, Fe, Co, Ni, Cu, Se, Cd, I, Pb and Th. Variations of K_d for Al, Mn, Fe, Co, Pb and Th appear to be controlled by hydrographic conditions. Although K_d values for Ni, Cu, Se, Cd and I depend mainly on grain size, organic matter content, and the concentrations of hydrous oxides/oxides of Fe and Mn in sediments, heterogeneity in the surface characteristics of sediment particles appears to hamper evaluation of the relative importance of these factors. Thus, we report a new approach to evaluate the factors contributing to variability in K_d for an element. By this approach, we concluded that the K_d values for Cu, Se, Cd and I are controlled by grain size and organic matter in sediments, and the K_d value for Ni is dependent on grain size and on hydrous oxides/oxides of Fe and Mn. - Highlights: • K_d values for 36 elements were determined in 19 Japanese coastal regions. • K_d values for several elements appeared to be controlled by multiple factors in sediments. • We evaluated these factors based on physico-chemical characteristics of sediments.
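The distribution coefficient itself is the ratio of the activity (or mass) concentration in sediment to that in seawater; a minimal sketch, with the units chosen here as an assumption:

```python
def distribution_coefficient(c_sediment, c_seawater):
    """Sediment-seawater distribution coefficient K_d = C_sed / C_sw,
    e.g. (Bq/kg dry sediment) / (Bq/L seawater) -> L/kg."""
    return c_sediment / c_seawater
```

In dispersion models, a higher K_d means a larger fraction of the radionuclide is scavenged onto sediment rather than remaining dissolved.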

Dissolved helium in groundwater is one of the most suitable tracers for groundwater dating. The diffusion coefficients in aquitards and aquifers are important for estimating the accumulation of helium in groundwater. However, few papers have reported on the diffusion of helium in rocks. In this study, effective diffusion coefficients of helium in sandstones and mudstone were determined using a through-diffusion method. The effective diffusion coefficients of helium were in the range of 1.5 × 10⁻¹⁰ to 1.1 × 10⁻⁹ m² s⁻¹ and larger than those of Br⁻ ions. Geometrical factors for the diffusion of helium were also larger than those for the diffusion of Br⁻ ions. This fact suggests that the diffusion path of helium in the rocks is not more restricted than that of Br⁻ ions. The diffusion coefficients of helium were also estimated using the diffusion coefficient of helium in bulk water and formation factors for diffusion of Br⁻ ions. The estimated diffusion coefficients of helium were larger than the effective diffusion coefficients. It is clarified that the effective diffusion coefficients of helium are underestimated by the estimation method using anions. (author)

In this paper we propose a second-order accurate numerical method to solve elliptic problems with discontinuous coefficients (with general non-homogeneous jumps in the solution and its gradient) in 2D and 3D. The method consists of a finite-difference method on a Cartesian grid in which complex geometries (boundaries and interfaces) are embedded, and is second order accurate in the solution and the gradient itself. In order to avoid the drop in accuracy caused by the discontinuity of the coefficients across the interface, two numerical values are assigned on grid points that are close to the interface: a real value, which represents the numerical solution on that grid point, and a ghost value, which represents the numerical solution extrapolated from the other side of the interface, obtained by enforcing the assigned non-homogeneous jump conditions on the solution and its flux. The method is also extended to the case of matrix coefficients. The linear system arising from the discretization is solved by an efficient multigrid approach. Unlike the 1D case, grid points are not necessarily aligned with the normal derivative, and therefore suitable stencils must be chosen to discretize the interface conditions in order to achieve second order accuracy in the solution and its gradient. A proper treatment of the interface conditions allows the multigrid to attain the optimal convergence factor, comparable with the one obtained by Local Fourier Analysis for rectangular domains. The method is robust enough to handle large jumps in the coefficients: the order of accuracy, monotonicity of the errors and good convergence factor are maintained by the scheme.
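A much simpler 1D cousin of such interface treatments illustrates the core difficulty: a coefficient jump makes the exact solution only piecewise smooth. The toy below samples the coefficient per cell on a uniform grid (not the paper's ghost-value scheme; all names are illustrative), which reproduces the flux-continuous piecewise-linear solution exactly when the jump lies on a grid node:

```python
import numpy as np

def solve_interface_1d(beta_left, beta_right, alpha, n):
    """Solve (beta u')' = 0 on [0,1], u(0)=0, u(1)=1, with beta jumping
    from beta_left to beta_right at x = alpha, on n uniform cells."""
    h = 1.0 / n
    mid = (np.arange(n) + 0.5) * h                  # cell midpoints
    beta = np.where(mid < alpha, beta_left, beta_right)
    A = np.zeros((n - 1, n - 1))
    rhs = np.zeros(n - 1)
    for i in range(1, n):                           # interior nodes
        # flux balance: beta[i-1](u_i - u_{i-1}) = beta[i](u_{i+1} - u_i)
        A[i - 1, i - 1] = -(beta[i - 1] + beta[i])
        if i > 1:
            A[i - 1, i - 2] = beta[i - 1]
        if i < n - 1:
            A[i - 1, i] = beta[i]
        else:
            rhs[i - 1] -= beta[i]                   # Dirichlet u(1) = 1
    u = np.linalg.solve(A, rhs)
    return np.r_[0.0, u, 1.0]
```

The exact solution has constant flux q = 1/(alpha/beta_left + (1-alpha)/beta_right), so u(alpha) = q·alpha/beta_left; the discrete solution matches it to machine precision in this aligned case, while the paper addresses the much harder unaligned 2D/3D situation.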

An X-ray fluorescence method for determining trace elements in silicate rock samples was studied. The procedure focused on the application of the pertinent matrix corrections. Either the Compton peak or the reciprocal of the mass absorption coefficient of the sample was used as internal standard for this purpose. X-ray tubes with W or Cr anodes were employed, and the W Lβ and Cr Kα Compton intensities scattered by the sample were measured. The mass absorption coefficients at both sides of the absorption edge for Fe (1.658 and 1.936 Å) were calculated. The elements Zr, Y, Rb, Zn, Ni, Cr and V were determined in 15 international reference rocks covering wide ranges of concentration. Relative mean errors were in many cases less than 10%. (author)

An efficient finite element (FE) formulation for the simulation of multibody systems is derived from Hamilton's principle. According to the classical assumptions of multibody systems, a large rotation formulation has been chosen, where large rotations and large displacements, but only small deformations of the single bodies, are taken into account. The strain tensor is linearized with respect to a co-rotated frame. The present approach uses absolute coordinates for the degrees of freedom and forms an alternative to the floating frame of reference formulation, which is based on relative coordinates and describes deformation with respect to a co-rotated frame. Due to the modified strain tensor, the present formulation differs significantly from standard nodal-based nonlinear FE methods. Constraints are defined in integral form for every pair of surfaces of two bodies. This leads to a small number of constraint equations and avoids artificial stress singularities. The resulting mass and stiffness matrices are constant apart from a transformation based on a single rotation matrix for each body. The particular structure of this transformation makes it possible to avoid the usually expensive factorization of the system Jacobian within implicit time-integration methods. The present method has been implemented and tested with the FE package NGSolve, and specific 3D examples are verified against a standard beam formulation.

In this work, a new method to design TMS coils is presented. It is based on including the concept of the stream function of a quasi-static electric current in a boundary element method. The proposed TMS coil design approach is a powerful technique to produce stimulators of arbitrary shape, and remarkably versatile as it permits prototyping with many different performance requirements and constraints. To illustrate the power of this approach, it has been used to design TMS coils wound on rectangular flat, spherical and hemispherical surfaces, subject to different constraints such as minimum stored magnetic energy or minimum power dissipation. The performance of these coils is also described, and the torque experienced by each stimulator in the presence of a main static magnetic field has been found theoretically in order to study the prospect of using them to perform TMS and fMRI concurrently. The results obtained show that the described method is an efficient tool for the design of TMS stimulators and can be applied to a wide range of coil geometries and performance requirements.

When the relative frequencies of resource kinds in the diet are known, the competition coefficient giving the effect of competitor j on i may be computed as

α_ij = (T_j / T_i) [ Σ_{k=1}^{m} (d_ik / f_k)(d_jk / f_k) b_ik ] / [ Σ_{k=1}^{m} (d_ik / f_k)² b_ik ],

where T_j / T_i = the ratio of the number of items consumed by an individual of competitor j to that consumed by an individual of competitor i, measured over an interval of time that includes all regular fluctuations in consumption for both species; d_ik = the frequency of resource k in the diet of competitor i (and similarly for d_jk); f_k = the standing frequency of resource k in the environment; b_ik = the net calories gained by an individual of competitor i from an item of resource k, or more approximately the calories contained in an item of resource k, or still more approximately the weight or volume of an item of resource k; and the summations are taken over all resources eaten by at least one of the competing species. The coefficient follows from MacArthur's (1968) consumer-resource system when the ratio of the carrying capacity to the intrinsic rate of increase is constant for all resources. When the relative frequencies of time spent foraging in habitat kinds are known, the competition coefficient may be computed by an analogous formula.
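The diet-based coefficient above translates directly into code (argument names are illustrative):

```python
import numpy as np

def competition_coefficient(T_i, T_j, d_i, d_j, f, b_i):
    """alpha_ij = (T_j/T_i) * sum_k (d_ik/f_k)(d_jk/f_k) b_ik
                           / sum_k (d_ik/f_k)^2 b_ik,
    with diet frequencies d, environmental frequencies f and
    per-item values b summed over all resources k."""
    d_i, d_j, f, b_i = (np.asarray(v, dtype=float) for v in (d_i, d_j, f, b_i))
    num = np.sum((d_i / f) * (d_j / f) * b_i)
    den = np.sum((d_i / f) ** 2 * b_i)
    return (T_j / T_i) * num / den
```

When the two competitors have identical diets and consumption rates the coefficient is 1, and it decreases as their resource use diverges.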

The finite element method is applied to the solution of the modified formulation of the matrix-response method, aiming at reactor calculations on a coarse mesh. Good results are obtained with a short running time. The method is applicable to problems where heterogeneity is predominant and to depletion problems on coarse meshes where the burnup varies within a single coarse mesh, making the cross sections vary spatially with the evolution. (E.G.)

This paper proposes a guideline for the selection of the element size and time increment in the 3-D finite element method, applied to elastic wave propagation analysis over a long distance in a large structure. The element size and time increment are determined by quantitatively evaluating the spurious strain caused by spatial and temporal discretization, which must be zero in an analysis model undergoing uniform motion. (author)

Full Text Available In this paper, the cutting forces in ball-end milling were modeled mathematically. All derivations of the cutting forces were based directly on the tangential, radial, and axial cutting force components. In the developed model, the relationship between the average cutting force and the feed per flute was characterized as a linear function. The cutting force coefficient model was formulated as a function of the average cutting force and other parameters such as cutter geometry and cutting conditions. An experimental method based on stable milling conditions was proposed to estimate the cutting force coefficients for ball-end mills; this method can be applied to each pair of tool and workpiece. The developed cutting force model has been verified experimentally with very promising results.
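The linear-fit idea behind such coefficient identification can be sketched as follows. The model form F_avg = Kc·fz + Ke and all measured values below are illustrative assumptions, not the paper's data or exact formulation.

```python
import numpy as np

# Sketch of the linear-fit idea: model the average cutting force as
#   F_avg = Kc * fz + Ke,
# where fz is the feed per flute; fitting slope and intercept of
# measured averages yields the coefficients. Numbers are made up.

fz   = np.array([0.05, 0.10, 0.15, 0.20])      # feed per flute, mm
Favg = np.array([61.0, 112.0, 163.0, 214.0])   # mean force, N (illustrative)

Kc, Ke = np.polyfit(fz, Favg, 1)   # slope and intercept of the fit
```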

Full Text Available Maximum power transfer tracking (MPTT) tracks the maximum power point during operation of wireless power transfer (WPT) systems. Traditionally, MPTT is achieved by impedance matching at the secondary side when the load resistance varies. However, owing to the loosely coupled nature of WPT, variation of the coupling coefficient also affects the impedance match, and MPTT fails accordingly. This paper presents an identification method for the coupling coefficient for MPTT in WPT systems; in particular, the two-value ambiguity arising during identification is addressed. The identification approach is easy to implement because it requires no additional circuitry, and MPTT is easy to realize because only two easily measured DC parameters are needed. The detailed identification procedure handling the two-value ambiguity and the maximum power transfer tracking process are presented, and both simulation analysis and experimental results verify the identification method and MPTT.
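The paper's identification procedure itself is not reproduced here; the sketch below only shows the textbook relation the procedure estimates, k = M/√(L1·L2). The inductance values are illustrative assumptions.

```python
import math

# Textbook definition only (not the paper's two-parameter DC
# identification): coupling coefficient from mutual inductance M
# and the two self-inductances L1, L2.

def coupling_coefficient(M, L1, L2):
    """k = M / sqrt(L1 * L2), all inductances in henries."""
    return M / math.sqrt(L1 * L2)

# Illustrative loosely coupled pair: 120 uH coils, 12 uH mutual.
k = coupling_coefficient(M=12e-6, L1=120e-6, L2=120e-6)
```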

The matriciant (transfer-matrix) method is applied to study the nonlinear dynamics of charged particles in magnetic sector analyzers. The calculation of the matriciants takes into account the boundary effects associated with the fringing field, as well as the higher harmonics of the sector magnetic field up to and including the third order. For a rectangular distribution of the field components along the optical axis, analytical expressions are obtained for all aberration coefficients below the third order. To simulate the real field, with a fringing-field width different from zero, a smooth distribution of the components is used, for which the same aberration coefficients are calculated by a conservative numerical method.

The resistivity method is one of the oldest geophysical exploration methods: one pair of electrodes injects current into the ground and one or more pairs of electrodes measure the electrical potential difference. The potential difference is a non-linear function of the subsurface resistivity distribution, described by an elliptic partial differential equation (PDE) of the Poisson type. Inversion of the measured potentials solves for the subsurface resistivity represented by the PDE coefficients. With increasing advances in multichannel resistivity acquisition systems (systems with more than 60 channels and full waveform recording are now emerging), inversion software requires efficient storage and solver algorithms. We developed the finite element solver Escript, which provides a user-friendly programming environment in Python to solve large-scale PDE-based problems (see https://launchpad.net/escript-finley). Using finite elements, highly irregularly shaped geology and topography can readily be taken into account. For the 3D resistivity problem, we have implemented the secondary potential approach, where the PDE is decomposed into a primary potential caused by the source current and a secondary potential caused by changes in subsurface resistivity. The primary potential is calculated analytically, and the boundary value problem for the secondary potential is solved using nodal finite elements. This approach removes the singularity caused by the source currents and provides more accurate 3D resistivity models. To solve the inversion problem we apply a 'first optimize then discretize' approach using a quasi-Newton scheme in the form of the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method (see Gross & Kemp 2013). The evaluation of the cost function requires the solution of the secondary potential PDE for each source current and the solution of the corresponding adjoint-state PDE for the cost function gradients with respect to the subsurface
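The Escript inversion itself is not reproduced here; this toy sketch only illustrates the quasi-Newton ingredient the text names, an L-BFGS minimization driven by a cost function and its gradient. The cost function and target model are hypothetical stand-ins for the PDE-constrained misfit.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the inversion's misfit: a quadratic whose minimum
# plays the role of the recovered model. In the real problem, cost and
# gradient come from the secondary-potential and adjoint-state PDEs.

TARGET = np.array([1.0, -2.0, 0.5])   # hypothetical "true" model

def cost(m):
    r = m - TARGET
    return 0.5 * r.dot(r)

def gradient(m):
    return m - TARGET

# Quasi-Newton (limited-memory BFGS) minimization, as in the text.
result = minimize(cost, x0=np.zeros(3), jac=gradient, method="L-BFGS-B")
```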

The moderator temperature coefficient of reactivity is not monitored during fuel cycles in WWER reactors, because it is difficult or impossible to measure without disturbing normal operation. Two new methods were tested in our WWER-type nuclear power plant to try out methodologies that enable this safety-relevant parameter to be measured during the fuel cycle. One is based on small perturbations and requires only small changes in operation; the other is based on noise methods and therefore does not interfere with reactor operation at all. Both methods are new in the respect that they use plant computer (VERONA) data together with signals calculated by the C-PORCA diffusion code. (Authors)

In a multicomponent multiphase geochemical system undergoing a chemical reaction such as precipitation and/or dissolution, the partitioning of species between phases is determined by a combination of thermodynamic properties and transport processes. The interpretation of the observed distribution of trace elements requires models integrating coupled chemistry and mechanical transport. Here, a framework is presented that predicts the kinetic effects on the distribution of species between two reacting phases. Based on a perturbation theory combining Navier-Stokes fluid flow and chemical reactivity, the framework predicts rate-dependent partition coefficients in a variety of different systems. We present the theoretical framework, with applications to two systems: 1. species- and isotope-dependent Soret diffusion in a multicomponent silicate melt subjected to a temperature gradient, and 2. elemental partitioning and isotope fractionation during precipitation of a multicomponent solid from a multicomponent liquid phase. Predictions are compared with results from experimental studies. The approach has applications for understanding chemical exchange at boundary layers, such as the Earth's surface, magmatic systems, and the core/mantle boundary.

Under the Black–Scholes model, the value of an American option solves a time-dependent variational inequality problem (VIP). In this paper, we first discretize the variational inequality of the American option in the temporal direction by applying Rannacher time stepping, obtaining a sequence of elliptic variational inequalities. Second, we discretize the spatial domain of the variational inequalities using spectral element methods with high-order Lagrangian polynomials introduced on Gauss–Legendre–Lobatto points. By computing integrals with the Gauss–Legendre–Lobatto quadrature rule, we derive a sequence of linear complementarity problems (LCPs) having a positive definite sparse coefficient matrix. To find the unique solutions of the LCPs, we use the projected successive over-relaxation (PSOR) algorithm. Furthermore, we present existence and uniqueness theorems for the variational inequalities and LCPs. Finally, the theoretical results are verified on relevant numerical examples.
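The PSOR iteration named in the abstract can be sketched as follows. This is a minimal generic implementation of projected SOR for an LCP with a symmetric positive definite matrix; the small test matrix and vector are illustrative, not the paper's discretized operator.

```python
import numpy as np

# Minimal projected SOR (PSOR) sketch for the LCP
#   x >= 0,  A x - b >= 0,  x . (A x - b) = 0,
# with A symmetric positive definite. A and b below are illustrative.

def psor(A, b, omega=1.2, tol=1e-10, max_iter=10_000):
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Gauss-Seidel value, then over-relax and project onto x >= 0
            gs = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
            x[i] = max(0.0, (1.0 - omega) * x[i] + omega * gs)
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

A = np.array([[2.0, -1.0], [-1.0, 2.0]])   # SPD tridiagonal-type matrix
b = np.array([1.0, -1.0])
x = psor(A, b)
```

For this data the complementarity solution is x = (1/2, 0): the first constraint is active (A x - b vanishes in component 1) while the second component of x is pinned at the bound.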

Full Text Available This article examines the stress-strain curves of various thicknesses of soft and hard wood when bent during three-point loading. The finite element method was used to simulate the course of the stresses that occurred during the bending of these materials. Reference curves obtained by bending real specimens served as the basis for the simulation. The results showed that with increasing material thickness, deflection values decreased and the proportionality limit increased; consequently, the bendability coefficient decreased and the loading force necessary for bending increased. Moreover, when bending hard materials, higher loading forces were necessary than for softer materials of the same thickness. It is thus possible to determine the stress-strain curves without having to perform experiments (except for the indispensable reference ones) under real conditions.

We present an enhancement of the multiscale finite element method (MsFEM) by combining it with the hp-adaptive FEM. Such a discretization-based homogenization technique is a versatile tool for modeling heterogeneous materials with rapidly oscillating elasticity coefficients. No assumption on periodicity of the domain is required. In order to avoid direct, so-called overkill mesh computations, a coarse mesh with effective stiffness matrices is used, and special shape functions are constructed to account for the local heterogeneities at the micro resolution. The automatic adaptivity (hp-type at the macro resolution and h-type at the micro resolution) increases the efficiency of the computation. In this paper, details of the modified MsFEM are presented, and a numerical test performed on a Fichera corner domain validates the proposed approach.

A process for the insertion and exchange of filter elements for suspended matter is performed from the clean-air side. During insertion of a filter element, a plastic tube (which encircles the circumference of the filter element and whose length exceeds the thickness of the filter layer several times) is tightly connected at its middle section to the side walls that form a border around the filter element; the open end of the plastic tube facing the frame is then fitted tightly over a ring, known per se, which surrounds the orifice of the frame into which the filter element is inserted. The filter element is connected to the frame by means of tightening devices, and the outer free end of the tube is turned inside out and around the filter element so that air can pass unhindered through the filter layer. During exchange of a contaminated filter element, the outer open end of the tube is heat-sealed; the filter element is disconnected and removed from the frame by flipping down the tightening devices, and the tube is heat-sealed in the section between the filter element and the frame. During insertion of a new filter element, a new tube, tightly connected at its middle section to the new filter element, is fitted tightly to the ring of the frame, overlapping the remnant of the heat-sealed tube in the known manner; the tube remnant is pulled onto the new tube and off the ring, and the filter element is tightly connected to the frame by means of the tightening devices.

The object of this work is to study measurement techniques using the gamma ionisation chamber, making it possible either to measure the activities of radioactive sources or to determine the specific gamma-emission coefficient (the K coefficient) of a given radioelement. The ionisation chambers studied belong to two categories: graphite cavity chambers and 4πγ chambers. For the cavity-chamber measurements, the various correction factors that must be taken into account have been calculated, in particular the geometric and hygrometric corrections. The absorption and self-absorption corrections led to the introduction of the notion of the 'effective gamma energy' of a radioelement. In the case of 4πγ chambers, it has been shown that appropriately shaped electrodes make it possible to improve their performance. One of the chambers described permits the measurement of beta emitters using the associated bremsstrahlung. To measure the K coefficient of some radioelements, a 4πγ chamber with graphite walls was used, the measurement being carried out by comparison with a radium standard. The validity of the method was checked with radioelements whose K coefficients are well known (24Na, 60Co, 131I, 198Au). For other radioelements, the following values were obtained (expressed in r·cm³·mc⁻¹·h⁻¹): 51Cr: 0.18; 56Mn: 8.8; 65Zn: 3.05; 124Sb: 9.9; 134Cs: 9.3; 137Cs: 3.35; 141Ce: 0.46; 170Tm: 0.023; 192Ir: 24.9; 203Hg: 1.18. These values have been corrected for the contribution to the dose of the fluorescent radiation that may be emitted by the source, except in the case of 170Tm. In the last part of this work, the performances of the various electrometric devices used are compared. (author)

Combustion Engineering Inc. designs its modern PWR reactor cores using open-core thermal-hydraulic methods in which the mass, momentum, and energy equations are solved in three dimensions (one axial and two lateral directions). The resulting fluid properties are used to compute the minimum Departure from Nucleate Boiling Ratio (DNBR), which ultimately sets the power capability of the core. The on-line digital monitoring and protection systems require a small, fast-running algorithm derived from the design code. This paper presents two techniques used in the development of the on-line DNB algorithm. First, a three-dimensional transport coefficient model is introduced to radially group the flow subchannels into channels for the thermal-hydraulic fluid property calculation. Conservation equations of mass, momentum, and energy for these channels are derived using transport coefficients to modify the calculation of the radial transport of enthalpy and momentum. Second, a simplified, non-iterative numerical method, called the prediction-correction method, is applied together with the transport coefficient model to reduce the computer execution time in the determination of fluid properties. Comparison of the algorithm and the design thermal-hydraulic code shows agreement to within 0.65% equivalent power at a 95/95 confidence/probability level for all normal operating conditions of the PWR core. This algorithm accuracy is achieved with 1/800th of the computer processing time of its parent design code. (orig.)
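The paper does not spell out its prediction-correction equations; the generic predict-then-correct idea can be sketched with a single non-iterative Heun step for dy/dt = f(t, y). This is an illustration of the scheme family only, not the paper's fluid-property update.

```python
# Generic non-iterative predictor-corrector step (Heun's method),
# sketched as an illustration of the predict-then-correct idea.
# Not the paper's proprietary fluid-property scheme.

def heun_step(f, t, y, dt):
    y_pred = y + dt * f(t, y)                            # prediction (Euler)
    return y + 0.5 * dt * (f(t, y) + f(t + dt, y_pred))  # correction

# Exponential decay dy/dt = -y, y(0) = 1, integrated to t = 1;
# the result should be close to exp(-1) ~ 0.3679.
y, t, dt = 1.0, 0.0, 0.01
for _ in range(100):
    y = heun_step(lambda t, y: -y, t, y, dt)
    t += dt
```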

have been written on this topic using a variety of methods. QMC methods are relatively new to this application area. I will consider different models for the randomness (uniform versus lognormal) and contrast different QMC algorithms (single-level
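The MC/QMC contrast mentioned above can be sketched on a toy integral. The integrand, dimensions, and sample counts are illustrative assumptions; the point is only the mechanics of comparing pseudo-random and Sobol sampling.

```python
import numpy as np
from scipy.stats import qmc

# Toy contrast of plain Monte Carlo vs quasi-Monte Carlo (Sobol)
# on E[prod(x_i)] over [0,1]^4, whose exact value is (1/2)^4 = 1/16.

d, n = 4, 1024
rng = np.random.default_rng(0)

mc_points  = rng.random((n, d))                     # pseudo-random sample
qmc_points = qmc.Sobol(d, seed=0).random_base2(10)  # 2**10 Sobol points

f = lambda pts: pts.prod(axis=1)
mc_est, qmc_est = f(mc_points).mean(), f(qmc_points).mean()
```

For smooth integrands like this one, the Sobol estimate typically lands much closer to 1/16 than the plain MC estimate at the same sample count, which is the motivation for the QMC algorithms discussed above.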

Much attention has recently been devoted to the development of Stochastic Galerkin (SG) and Stochastic Collocation (SC) methods for uncertainty quantification. An open and relevant research topic is the comparison of these two methods.

Several experimental methods for measuring porosity, bulk density, and volume reduction during the drying of foodstuffs are available. These methods include, among others, geometric dimensions, volume displacement, mercury porosimetry, micro-CT, and NMR. However, data on their accuracy, sensitivity, and