Sample records for integral method application

As a new formulation in structural analysis, the Integrated Force Method (IFM) has been applied successfully to many civil, mechanical, and aerospace engineering structures owing to its accurate computation of forces. It is now being extended to the probabilistic domain. To assess the effect of uncertainty in system optimization and identification, the probabilistic sensitivity analysis of IFM was investigated in this study. A stochastic sensitivity analysis formulation of the Integrated Force Method was developed using the perturbation method. Numerical examples are presented to illustrate its application. Its efficiency and accuracy were also substantiated with direct Monte Carlo simulations and the reliability-based sensitivity method. The numerical algorithm was shown to be readily adaptable to the existing program, since the models of stochastic finite elements and stochastic design sensitivity are almost identical.
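
The perturbation-based variance estimate, checked against direct Monte Carlo, can be sketched as follows; the response function g below is an invented stand-in for a structural force computation, not the actual IFM equations:

```python
import random

# Illustrative nonlinear response function and its analytic derivative
def g(x):
    return x**2 + 3.0*x

def dg(x):
    return 2.0*x + 3.0

mu, sigma = 2.0, 0.05            # mean and std of the random input parameter

# First-order perturbation estimate: Var[g(X)] ~ (g'(mu) * sigma)^2
pert_var = (dg(mu) * sigma)**2

# Direct Monte Carlo check of the same variance
random.seed(42)
samples = [g(random.gauss(mu, sigma)) for _ in range(200_000)]
m = sum(samples) / len(samples)
mc_var = sum((s - m)**2 for s in samples) / (len(samples) - 1)
```

For a small input coefficient of variation the two estimates agree to within a few percent, which is the kind of cross-check the abstract describes.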

Hierarchical (H-) matrices provide a general mathematical framework for a highly compact representation and efficient numerical arithmetic. When applied in integral-equation (IE) based computational electromagnetics, H-matrices can be regarded as a fast algorithm; both the CPU time and the memory requirement are therefore reduced significantly. Their kernel-independent feature also makes them suitable for any kind of integral equation. To solve an H-matrix system, Krylov iteration methods can be employed with appropriate preconditioners; direct solvers based on the hierarchical structure of H-matrices are also available with high efficiency and accuracy, which is a unique advantage over other fast algorithms. In this paper, a novel sparse approximate inverse (SAI) preconditioner in multilevel fashion is proposed to accelerate the convergence of Krylov iterations for solving H-matrix systems in electromagnetic applications, and a group of parallel fast direct solvers is developed for multiple right-hand-side cases. Finally, numerical experiments demonstrate the advantages of the proposed multilevel preconditioner over conventional “single level” preconditioners and the practicability of the fast direct solvers for arbitrarily complex structures.

The Boundary Integral Equation Method is used to solve analytically the problems of coupled thermoelastic spherical wave propagation. The resulting mathematical expressions coincide with the solutions obtained in a conventional manner.

Our results suggest that the combined use of optical coherence tomography (OCT) and fluorescence diagnosis helps to refine the nature and boundaries of the pathological process in the colonic tissue in ulcerative colitis. Studies have shown that integrated optical diagnostics allows lesions to be differentiated in accordance with histology, and a decision to be made on the need for a biopsy and its site. This method is most appropriate in diagnostically difficult cases.

Corporate governance emerged in recent decades and can be regarded as a new field of science. Some of the most famous companies failed from one day to the next, and their failures and scandals had a significant impact on the local and international community. Finding a new, effective framework for corporate governance could help ensure that similar negative events are never repeated. The new approach in corporate governance - an integrated framework created for corporate governance - is one ...

The thermohydraulic transient simulation of an entire LMFBR system is, by its very nature, complex. Physically, the entire plant consists of many subsystems which are coupled by various processes and/or components. The characteristic integration timesteps for these processes and components can vary over a wide range. To improve computing efficiency, a multiple timestep scheme (MTS) approach has been used in the development of the Super System Code (SSC). In this paper, (1) the partitioning of the system and the timestep control are described, and (2) results are presented showing that the MTS reduces computer running time by as much as a factor of five compared with a single timestep scheme.
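
The idea of a multiple timestep scheme can be sketched on a toy pair of coupled ODEs with well-separated time scales; the rates and step sizes below are invented for illustration, not SSC plant values:

```python
# fast subsystem:  dy/dt = -50*(y - z)
# slow subsystem:  dz/dt = -0.5*(z - y)
dt_fast, n_sub, t_end = 1e-3, 10, 2.0
dt_slow = dt_fast * n_sub

# MTS: the slow variable is frozen while the fast one takes small substeps,
# then the slow one takes a single large step.
y, z, t = 0.0, 1.0, 0.0
while t < t_end - 1e-12:
    for _ in range(n_sub):
        y += dt_fast * (-50.0 * (y - z))
    z += dt_slow * (-0.5 * (z - y))
    t += dt_slow
y_mts, z_mts = y, z

# Reference: everything advanced with the small timestep (10x the fast-step count
# on the slow subsystem, which is the cost the MTS avoids).
y, z = 0.0, 1.0
for _ in range(int(round(t_end / dt_fast))):
    y, z = y + dt_fast * (-50.0 * (y - z)), z + dt_fast * (-0.5 * (z - y))
y_ref, z_ref = y, z
```

Both integrations relax to the same equilibrium; the MTS result stays within the discretization error of the single-timestep reference while updating the slow subsystem ten times less often.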

In this article, He's homotopy perturbation method is applied to solve systems of Volterra integral equations of the first kind. Some non-linear examples are presented to illustrate the efficiency and simplicity of the method. Applying the method to linear systems is so straightforward that no example is needed.

Conjugate thermal explosion is an extension of the classical theory, proposed and studied recently by the author. The paper reports an application of the heat-balance integral method to developing phase portraits for systems undergoing conjugate thermal explosion. The heat-balance integral method is used as an averaging method that reduces the partial differential equation problem to a set of first-order ordinary differential equations. The reduced problem allows a natural interpretation in an appropriately chosen phase space. It is shown that, with the help of the heat-balance integral technique, the conjugate thermal explosion problem can be described with good accuracy by a set of non-linear first-order differential equations involving the complex error function. Phase trajectories are presented for typical regimes emerging in conjugate thermal explosion. Use of the heat-balance integral as a spatial averaging method allows an efficient description of the system evolution to be developed.

We consider a path integral in phase space, possibly with an influence functional, and we use a method based on the central limit theorem, applied to the phase of the path-integral representation, to extract an equivalent expression that can be used in numerical calculations. Moreover, we give conditions under which closed analytical results can be extracted. As a specific application we consider a general system of two coupled and forced harmonic oscillators with coupling of the form x_1 x_2^α, and we derive the relevant sign-solved propagator.

Severe accidents in light water reactors are characterized by the occurrence of multiphase flow with complicated phase changes, chemical reactions and various bifurcation phenomena. Because of the inherent difficulties associated with full-scale testing, scaled-down and simulation experiments are an essential part of severe accident analyses. However, one of the most significant shortcomings in the area is the lack of a well-established and reliable scaling method and scaling criteria. In view of this, the stepwise integral scaling method is developed for severe accident analyses. This new scaling method is quite different from the conventional approach; however, its focus on dominant transport mechanisms and its use of the integral response of the system make it relatively simple to apply to very complicated multiphase flow problems. In order to demonstrate its applicability and usefulness, three case studies have been made. The phenomena considered are (1) corium dispersion in DCH, (2) corium spreading in BWR MARK-I containment, and (3) in-core boil-off and heating processes. The results of these studies clearly indicate the effectiveness of the stepwise integral scaling method. Such a simple and systematic scaling method has not previously been available for severe accident analyses.

Energy integration is a key solution in the chemical process and crude refining industries to minimise external fuel consumption and to face the impact of the growing energy crisis. Typical energy integration projects can achieve a reduction in heating fuels and cold utilities of up to 40% compared with original designs or existing installations. Pinch Analysis is a leading tool and is regarded as an efficient method to increase energy efficiency and minimise fuel consumption. It is valid for both grassroots and retrofit designs, and can practically be applied to synthesise a HEN (heat exchanger network) or to modify an existing preheat train for minimum energy consumption. Heat recovery systems, or HENs, are networks for exchanging heat between hot and cold process sources. All heat transferred from hot process sources into cold process sinks represents the scope for energy integration. Energies required beyond this integrated amount must be satisfied by external utilities. Graphical representations of Pinch Analysis, such as the Composite and Grand Composite Curves, are very useful for grassroots designs. Nevertheless, in retrofit situations the analysis is not adequate, and it is graphically tedious to represent existing exchangers on such graphs. This research proposes a new graphical method for the analysis of heat recovery systems, applicable to HEN retrofit, based on plotting the temperatures of process hot streams versus the temperatures of process cold streams. A new graph is constructed to represent existing HENs. For a given network, each existing exchanger is represented by a straight line, whose slope is proportional to the ratio of the streams' heat capacity flowrates, and the length of each exchanger line is related to the heat flow transferred across that exchanger. This new graphical representation can easily identify exchangers across the pinch, the Network Pinch, pinching matches, and improper placement.
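
The proposed hot-temperature versus cold-temperature representation can be sketched for a single exchanger; the stream data and CP values below are invented for illustration:

```python
import math

# One existing exchanger: the hot stream cools Th_in -> Th_out while the
# cold stream heats Tc_in -> Tc_out.  CP = heat capacity flowrate (kW/K).
CP_hot, CP_cold = 20.0, 40.0
Th_in, Th_out = 150.0, 130.0          # degC
Q = CP_hot * (Th_in - Th_out)         # exchanger duty: 400 kW

Tc_in = 40.0
Tc_out = Tc_in + Q / CP_cold          # energy balance fixes the cold-side rise

# In the hot-T vs cold-T plot the exchanger is the straight segment from
# (Th_out, Tc_in) to (Th_in, Tc_out); its slope is the CP ratio.
slope = (Tc_out - Tc_in) / (Th_in - Th_out)

# The length of the segment grows with the transferred heat flow Q.
length = math.hypot(Th_in - Th_out, Tc_out - Tc_in)
```

The slope equals CP_hot/CP_cold, so steep lines flag exchangers whose cold stream heats quickly relative to the hot-side cooling, and longer lines carry more duty, as the abstract describes.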

The starting point for this report is the discrepancy reported in previous work between the reaction-diffusion calculations and the CEX-1 experiment, which involves storage of defected fuel elements in air at 150 deg C. This discrepancy is considerably diminished here by a more critical choice of theoretical parameters, and by taking into account the fact that different CEX-1 fuel elements were oxidized at very different rates and that the fuel element used previously for comparison with theoretical calculations actually underwent two limited-oxygen-supply cycles. Much better agreement is obtained here between the theory and the third, unlimited-air, storage period of the CEX-1 experiment. The approximate integral method is used extensively for the solution of the one-dimensional diffusion moving-boundary problems that may describe various storage periods of the CEX-1 experiment. In some cases it is easy to extend this method to arbitrary precision by using higher moments of the diffusion equation. Using this method, the validity of the quasi-steady-state approximation is verified. Diffusion-controlled oxidation is also studied. In this case, for the unlimited oxygen supply, the integral method leads to an exact analytical solution for linear geometry, and to a good analytical approximation of the solution for the spherically symmetric geometry. These solutions may have some application in the analysis of experiments on the oxidation of small UO2 fragments or powders when the individual UO2 grains may be considered to be approximately spherical. (author). 23 refs., 5 tabs., 11 figs

Educational applications of ISM could involve its use for instructional analysis or design, or for teaching students in the classroom; or ISM and IDM (a closely related, generalized 'integrated design method') could play valuable roles in a 'wide spiral' curriculum designed for the coordinated teaching of thinking skills, including creativity and critical thinking, across a wide range of subjects.

By using the methods of perturbation theory it is possible to construct simple formulae for the numerical integration of the Schroedinger equation, and also to calculate expectation values solely by means of simple eigenvalue calculations. (Auth.)

Various integral equation methods are described. For magnetostatic problems three formulations are considered in detail: (a) the direct solution method for the magnetisation distribution in permeable materials, (b) a method based on a scalar potential, and (c) the use of an integral equation derived from Green's Theorem, i.e. the so-called Boundary Integral Method (BIM). In the case of (a), results are given for two- and three-dimensional non-linear problems with comparisons against measurement. For methods (b) and (c), which both lead to a more economical use of the computer than (a), some preliminary results are given for simple cases. For eddy current problems various methods are discussed and some results are given from a computer program based on a vector potential formulation. (author)

Over the last years, we have seen several security incidents that compromised system safety, of which some caused physical harm to people. Meanwhile, various risk assessment methods have been developed that integrate safety and security, and these could help to address the corresponding threats by implementing suitable risk treatment plans. However, an overarching overview of these methods, systematizing their characteristics, is missing. In this paper, we conduct a systematic literature review ...

Nursing staff development programs must be responsive to current changes in healthcare. New nursing staff must be prepared to manage continuous change and to function competently in clinical practice. The orientation pathway, based on a case management model, is used as a structure for the orientation phase of staff development. The integrated case is incorporated as a teaching strategy in orientation. The integrated case method is based on discussion and analysis of patient situations with emphasis on role modeling and integration of theory and skill. The orientation pathway and integrated case teaching method provide a useful framework for orientation of new staff. Educators, preceptors and orientees find the structure provided by the orientation pathway very useful. Orientation that is developed, implemented and evaluated based on a case management model with the use of an orientation pathway and incorporation of an integrated case teaching method provides a standardized structure for orientation of new staff. This approach is designed for the adult learner, promotes conceptual reasoning, and encourages the social and contextual basis for continued learning.

The semiclassical approach to heavy-ion reactions has become more and more important in analyzing rapidly accumulating data. The purpose of this paper is to lay a quantum-mechanical foundation for the conventional semiclassical treatments in heavy-ion physics by using Feynman's path integral method, on the basis of the second paper of Pechukas, and to discuss simple consequences of the formalism.

The survey articles in this book focus on number theoretic point constructions, uniform distribution theory, and quasi-Monte Carlo methods. As deterministic versions of the Monte Carlo method, quasi-Monte Carlo rules enjoy increasing popularity, with many fruitful applications in mathematical practice, as for example in finance, computer graphics, and biology.
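
The contrast between plain Monte Carlo and a quasi-Monte Carlo rule can be sketched with the van der Corput sequence, the simplest deterministic low-discrepancy point set; the integrand here is a toy example:

```python
import random

def van_der_corput(n, base=2):
    """Radical-inverse (van der Corput) sequence in the given base."""
    v, denom = 0.0, 1
    while n:
        n, r = divmod(n, base)
        denom *= base
        v += r / denom
    return v

f = lambda x: x * x            # exact integral over [0, 1] is 1/3
N = 4096

# Quasi-Monte Carlo rule: deterministic low-discrepancy nodes
qmc = sum(f(van_der_corput(i)) for i in range(N)) / N

# Plain Monte Carlo with pseudo-random nodes
random.seed(0)
mc = sum(f(random.random()) for _ in range(N)) / N

qmc_err = abs(qmc - 1/3)
mc_err = abs(mc - 1/3)
```

The QMC error decays like O(log N / N) rather than the O(1/sqrt(N)) of plain Monte Carlo, which is the efficiency gain the surveyed methods exploit.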

A method is described for calculating the electrodynamic characteristics of periodically corrugated waveguide systems. The method is based on representing the field, as the solution of the Helmholtz vector equation, in the form of a simple-layer potential transformed with the use of the Floquet conditions. Systems of compound integral equations based on a weighted vector function of the simple-layer potential are derived for waveguides with azimuthally symmetric and helical corrugations. A numerical realization of the Fourier method is presented for seeking the dispersion relation of azimuthally symmetric waves of a circular corrugated waveguide.

Contamination of soil and water, and the resulting threat to public health and the environment, are the frequent results of oil spills, leaks and other releases of gasoline, diesel fuel, heating oil and other petroleum products. Integrating an analytical groundwater solute transport model within its general framework, this paper proposes an integrated stochastic risk assessment method and ways to apply it to petroleum-contaminated sites. Both the analytical solute transport model and the general risk assessment framework are solved by Monte Carlo simulation to approximate the theoretical output distribution. The results of this study show that the total cancer risk has an approximately log-normal distribution, even though a variety of distributions were used to define the related parameters. It is claimed that the method can improve the effectiveness of risk assessment for the subsurface and provide useful results for site remediation decisions. 23 refs., 3 tabs., 4 figs
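
The approximately log-normal output is what a multiplicative risk model produces; a sketch with invented parameter distributions (not the study's actual inputs):

```python
import math, random, statistics

random.seed(7)
n = 50_000
log_risks = []
for _ in range(n):
    # Illustrative lognormal inputs to a standard ingestion-risk quotient
    conc = random.lognormvariate(math.log(0.5), 0.4)           # mg/L
    intake = random.lognormvariate(math.log(2.0), 0.2)         # L/day
    slope_factor = random.lognormvariate(math.log(0.03), 0.3)  # per mg/kg-day
    bw = random.lognormvariate(math.log(70.0), 0.1)            # kg
    risk = conc * intake * slope_factor / bw
    log_risks.append(math.log(risk))

# A product/quotient of lognormals is itself lognormal, so log(risk) should
# be close to normal: its mean and median nearly coincide.
gap = abs(statistics.mean(log_risks) - statistics.median(log_risks))
```

In the Monte Carlo output the mean and median of log(risk) agree closely, consistent with the log-normal total-risk distribution reported in the abstract.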

Model-based analysis methods are relatively new approaches for processing the output data of radiation detectors in nuclear medicine imaging and spectroscopy. A class of such methods requires fast algorithms for fitting pulse models to experimental data. In order to apply integral-equation-based methods to processing the preamplifier output pulses, this article proposes a fast and simple method for estimating the parameters of the well-known bi-exponential pulse model by solving an integral equation. The proposed method needs samples from only three points of the recorded pulse, as well as its first- and second-order integrals. After optimizing the sampling points, the estimation results were calculated and compared with two traditional integration-based methods. Different noise levels (signal-to-noise ratios from 10 to 3000) were simulated to test the functionality of the proposed method, and it was then applied to a set of experimental pulses. Finally, the effect of quantization noise was assessed by studying different sampling rates. The promising results endorse the proposed method for future real-time applications.
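
A sketch of the integral-equation idea: for the bi-exponential pulse v(t) = e^(-at) - e^(-bt), integrating the governing ODE v'' + (a+b)v' + ab·v = 0 twice from 0 to t gives v(t) = (b-a)t - (a+b)·I1(t) - ab·I2(t), so three pulse samples plus the running first and second integrals determine a and b. (The sample placement and noise-free data below are simplifications; the paper's optimized sampling is not reproduced.)

```python
import math

a_true, b_true = 0.5, 5.0
dt, T = 1e-3, 3.0
n = int(T / dt) + 1
t = [i * dt for i in range(n)]
v = [math.exp(-a_true * ti) - math.exp(-b_true * ti) for ti in t]

# Running first and second integrals by the trapezoidal rule
I1, I2 = [0.0] * n, [0.0] * n
for i in range(1, n):
    I1[i] = I1[i - 1] + 0.5 * dt * (v[i - 1] + v[i])
for i in range(1, n):
    I2[i] = I2[i - 1] + 0.5 * dt * (I1[i - 1] + I1[i])

# v(t_k) = p0*t_k - p1*I1(t_k) - p2*I2(t_k), p = (b-a, a+b, a*b)
idx = [300, 1000, 2500]                 # three sample points (assumed placement)
A = [[t[k], -I1[k], -I2[k]] for k in idx]
y = [v[k] for k in idx]

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# Solve the 3x3 linear system by Cramer's rule
D = det3(A)
p = []
for j in range(3):
    Aj = [row[:] for row in A]
    for i in range(3):
        Aj[i][j] = y[i]
    p.append(det3(Aj) / D)

# a and b are the roots of x^2 - p1*x + p2 = 0
disc = math.sqrt(p[1] ** 2 - 4 * p[2])
a_hat, b_hat = (p[1] - disc) / 2, (p[1] + disc) / 2
```

With noise-free data the recovered decay rates match the true values to the accuracy of the trapezoidal integrals.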

Source anisotropy is a very important factor in brachytherapy quality assurance of high-dose-rate (HDR) Ir-192 afterloading stepping sources. If anisotropy is not taken into account, then doses received by a brachytherapy patient in certain directions can be in error by a clinically significant amount. Experimental measurements of anisotropy are very labour intensive. We have shown that, within acceptable limits of accuracy, Monte Carlo integration (MCI) of a modified Sievert integral (a 3D generalisation) can provide the necessary data within a much shorter time scale than experiments can. Hence MCI can be used in routine quality assurance schedules whenever a new design of HDR or PDR Ir-192 source is used for brachytherapy afterloading. Our MCI calculation results are comparable with published experimental data and Monte Carlo simulation data for microSelectron and VariSource Ir-192 sources. We have shown not only that MCI offers advantages over alternative numerical integration methods, but also that treating filtration coefficients as radial-distance-dependent functions improves Sievert integral accuracy at low energies. This paper also provides anisotropy data for three new Ir-192 sources, one for the microSelectron-HDR and two for the microSelectron-PDR, for which data are not currently available. The information obtained in this study can be incorporated into clinical practice.
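
Monte Carlo integration of the classical one-dimensional Sievert integral, S(theta, mu_t) = integral from 0 to theta of exp(-mu_t / cos(phi)) d(phi), can be sketched as follows; the parameters are illustrative and the paper's 3-D generalisation is not reproduced:

```python
import math, random

theta, mu_t = 1.2, 0.5          # aperture angle (rad) and filter mu*t (assumed)

# Monte Carlo integration: average the integrand at uniform random angles
random.seed(3)
n = 200_000
mc = theta * sum(math.exp(-mu_t / math.cos(random.uniform(0.0, theta)))
                 for _ in range(n)) / n

# Reference value: composite Simpson quadrature on a fine grid
m = 2000                        # even number of panels
h = theta / m
simpson = math.exp(-mu_t) + math.exp(-mu_t / math.cos(theta))
for i in range(1, m):
    w = 4 if i % 2 else 2
    simpson += w * math.exp(-mu_t / math.cos(i * h))
simpson *= h / 3
```

The Monte Carlo estimate converges on the quadrature value; the practical advantage of MCI appears in the 3-D modified integral, where deterministic quadrature becomes cumbersome.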

A new fault diagnosis method based on integrated neural networks for nuclear steam generators (SG) was proposed, in view of the shortcomings of conventional fault monitoring and diagnosis methods. In the method, two neural networks (ANNs) were employed for the fault diagnosis of the steam generator. One neural network, used for predicting the values of the steam generator operation parameters, was taken as the dynamics model of the steam generator. The principle of the fault monitoring method using the neural network model is to detect the deviations between process signals measured from an operating steam generator and the corresponding output signals from the neural network model. When the deviation exceeds a limit set in advance, an abnormal event is deemed to have occurred. The other neural network serves as a fault classifier and conducts the fault classification of the steam generator, so the fault types are given by the fault classifier. Clear information on steam generator faults is obtained by fusing the monitoring and diagnosis results of the two neural networks. The simulation results indicate that employing integrated neural networks can improve the capacity for fault monitoring and diagnosis of the steam generator. (authors)
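
The monitoring principle above — raise an alarm when the deviation between the model prediction and the measurement exceeds a preset limit — can be sketched without any neural network; the plant level, noise magnitude and fault size below are invented for illustration:

```python
import random

random.seed(1)
model_level = 5.0          # model-predicted SG parameter (arbitrary units)
threshold = 0.5            # alarm limit, about 5 sigma of measurement noise
fault_step, fault_offset = 60, 1.0

first_alarm = None
for k in range(120):
    measured = model_level + random.gauss(0.0, 0.1)   # normal measurement noise
    if k >= fault_step:
        measured += fault_offset                      # injected fault
    residual = measured - model_level                 # deviation from the model
    if abs(residual) > threshold and first_alarm is None:
        first_alarm = k
```

With the threshold well above the noise band, no false alarm occurs before the fault, and the alarm fires as soon as the injected deviation appears; a fault classifier would then label the alarmed pattern.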

Objective: To explore the application and the effect of the case-based learning (CBL) method in clinical probation teaching of the integrated curriculum of hematology among eight-year-program medical students. Methods: The CBL method was applied to the experimental group, and the traditional approach to the control group. After the lecture, a questionnaire survey was conducted to evaluate the teaching effect in the two groups. Results: The CBL method efficiently increased the students' interest in learning and their autonomous learning ability, enhanced their ability to solve clinical problems with basic theoretical knowledge, and cultivated their clinical thinking ability. Conclusion: The CBL method can improve the quality of clinical probation teaching of the integrated curriculum of hematology among eight-year-program medical students.

The Unit Vector Method (UVM) is a series of orbit determination methods designed by Purple Mountain Observatory (PMO) that have been applied extensively. It obtains the condition equations for different kinds of data by projecting the basic equation onto different unit vectors, and it is well suited to weighting different kinds of data. High-precision data can thus play a major role in orbit determination, and the accuracy of orbit determination is improved markedly. The improved UVM (PUVM2) extended the UVM from initial orbit determination to orbit improvement, and unified initial orbit determination and orbit improvement dynamically; precision and efficiency were improved further. In this thesis, further research has been carried out based on the UVM. Firstly, with the improvement of observational methods and techniques, the types and accuracy of the observational data have improved substantially, which also demands more accurate orbit determination. Analytical perturbation theory cannot meet this requirement, so numerical integration of the perturbations has been introduced into the UVM. The accuracy of the dynamical model now matches the accuracy of the real data, and the condition equations of the UVM are modified accordingly; the accuracy of orbit determination is improved further. Secondly, a data fusion method has been introduced into the UVM. The convergence mechanism and the defect of the weighting strategy in the original UVM have been clarified. This problem has been solved: the calculation of the approximate state transition matrix is simplified, and the weighting strategy has been improved for data of different dimensions and different precision. Results of orbit determination with simulated and real data show that the work of this thesis is effective: (1) after the numerical integration has been introduced into the UVM, the accuracy of orbit determination is improved obviously, and it suits the high-accuracy data of

The asymptotics of the Dicke-type statistical sum (Z/Z0) is obtained and rigorously proved at large N (N is the atomic number; Z is the statistical sum; Z0 is the statistical sum of the free system) using the functional integration method. The model with one bose-field mode is considered. A detailed proof is carried out for T > T_c. The idea of the proof is outlined, and asymptotic formulae are presented for T < T_c and in the vicinity of T_c.

[Objective] Based on water quality data from the Zhangze Reservoir for the last five years, the water quality was assessed by the integrated water quality identification index method and the Nemerow pollution index method. The results of the different evaluation methods were analyzed and compared, and the characteristics of each method were identified. [Methods] The suitability of the water quality assessment methods was compared and analyzed based on these results. [Results] The water quality tended to decrease over time, with 2016 being the year with the worst water quality; the sections with the worst water quality were the southern and northern sections. [Conclusion] The results produced by the traditional Nemerow index method fluctuated greatly across the water quality monitoring sections and therefore could not effectively reveal the trend of water quality at each section. The combination of qualitative and quantitative measures in the comprehensive pollution index identification method meant it could evaluate the degree of water pollution as well as determine whether the river water was black and odorous; however, its evaluation results indicated relatively low water pollution. The results from the improved Nemerow index evaluation were better, as the single indicators and the evaluation results are in strong agreement; the method is therefore able to objectively reflect the water quality of each monitoring section and is more suitable for the water quality evaluation of the reservoir.
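
A minimal sketch of the traditional Nemerow pollution index calculation discussed above; the concentrations and class limits are illustrative placeholders, not actual standard values:

```python
import math

# Measured concentrations and (illustrative) class limits, mg/L
conc  = {"COD": 30.0, "NH3-N": 1.5, "TP": 0.2}
limit = {"COD": 20.0, "NH3-N": 1.0, "TP": 0.05}

# Single-factor indices P_i = C_i / S_i
p = {k: conc[k] / limit[k] for k in conc}
p_avg = sum(p.values()) / len(p)
p_max = max(p.values())

# Nemerow index: P = sqrt((P_max^2 + P_avg^2) / 2)
nemerow = math.sqrt((p_max ** 2 + p_avg ** 2) / 2.0)
```

Because the worst single factor enters the formula quadratically, one out-of-range indicator dominates the result, which is consistent with the strong section-to-section fluctuation the abstract attributes to this method.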

Purpose: to determine the effect of the combined application of Bodyflex and Pilates exercises, using information and communication technology, on the level of psychophysiological capabilities of students. Material: the study involved 46 university students. Research methods: physiological methods (measurement of simple and complex reaction speed in different testing modes, and of the level of functional mobility and strength of the nervous system), a pedagogical experiment, and methods of mathematical statistics. Results: the developed technique had a positive effect on the level of students' psychophysiological capabilities. Its application in the experimental group produced a significant decrease in the latency time of the simple visual-motor reaction and of the complex visual-motor reaction in the test "level of functional mobility of nervous processes" in feedback mode. The use of Bodyflex and Pilates was also found to promote the strength of nervous processes. Conclusions: the combined Bodyflex and Pilates technique with information and communication technologies is recommended for use in the learning process of students, as it increased the levels of psychophysiological capabilities and the mobility and strength of nervous processes.

Background: In the context of systems biology, few sparse approaches have been proposed so far to integrate several data sets. It is however an important and fundamental issue that will be widely encountered in post-genomic studies, when simultaneously analyzing transcriptomics, proteomics and metabolomics data using different platforms, so as to understand the mutual interactions between the different data sets. In this high-dimensional setting, variable selection is crucial to give interpretable results. We focus on a sparse Partial Least Squares approach (sPLS) to handle two-block data sets, where the relationship between the two types of variables is known to be symmetric. Sparse PLS has been developed either for a regression or a canonical correlation framework and includes a built-in procedure to select variables while integrating data. To illustrate the canonical mode approach, we analyzed the NCI60 data sets, where two different platforms (cDNA and Affymetrix chips) were used to study the transcriptome of sixty cancer cell lines. Results: We compare the results obtained with two other sparse or related canonical correlation approaches: CCA with Elastic Net penalization (CCA-EN) and Co-Inertia Analysis (CIA). The latter does not include a built-in procedure for variable selection and requires a two-step analysis. We stress the lack of statistical criteria to evaluate canonical correlation methods, which makes biological interpretation absolutely necessary to compare the different gene selections. We also propose comprehensive graphical representations of both samples and variables to facilitate the interpretation of the results. Conclusion: sPLS and CCA-EN selected highly relevant genes and complementary findings from the two data sets, which enabled a detailed understanding of the molecular characteristics of several groups of cell lines. These two approaches were found to bring similar results, although they highlighted the same

Conventional algorithms for the numerical integration of ordinary differential equations (ODEs) are based on the use of polynomial functions as interpolants. However, the exact solutions of stiff ODEs behave like decaying exponential functions, which are poorly approximated by polynomials. An obvious choice of interpolant is the exponential functions themselves, or their low-order diagonal Padé (rational-function) approximants. A number of explicit, A-stable integration algorithms were derived from the use of a three-parameter exponential function as interpolant, and their relationship to low-order polynomial-based and rational-function-based implicit and explicit methods was shown by examining their low-order diagonal Padé approximants. A robust implicit formula was derived by exponentially fitting the trapezoidal rule. Application of these algorithms to the integration of the ODEs governing homogeneous gas-phase chemical kinetics was demonstrated in a developmental code, CREK1D, which compares favorably with the Gear-Hindmarsh code LSODE despite the use of a primitive stepsize control strategy.
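
The contrast that motivates exponential interpolants can be sketched on the stiff scalar test equation y' = lambda*y; the rate and step size below are illustrative, and the CREK1D formulas themselves are not reproduced:

```python
import math

# Stiff test problem: y' = lam*y, y(0) = 1, integrated to t = 1
# with a step size for which |lam*h| = 10 >> 1.
lam, h, steps = -1000.0, 0.01, 100

# Explicit Euler (linear polynomial interpolant): y_{n+1} = (1 + lam*h) * y_n
y_euler = 1.0
for _ in range(steps):
    y_euler *= (1.0 + lam * h)        # growth factor -9 per step: unstable

# Exponentially fitted step: y_{n+1} = exp(lam*h) * y_n, exact for this
# problem (for nonlinear f, lam would be fitted locally, e.g. f(y)/y)
y_exp = 1.0
for _ in range(steps):
    y_exp *= math.exp(lam * h)
```

At the same step size the polynomial-based explicit method blows up while the exponential interpolant tracks the decaying solution, which is why exponential fitting yields A-stable explicit formulas for stiff kinetics.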

A number of hybrid methods for solving Cauchy problems are described on the basis of an evaluation of the advantages of single- and multiple-point numerical integration methods. The selection criterion is the principle of minimizing computer time. The methods discussed include the Nordsieck method, the Bulirsch-Stoer extrapolation method, and the method of recursive Taylor-Steffensen power series.

The term 'environmentally damaging subsidies' covers all sorts of direct and indirect subsidies with negative consequences for the environment. This article presents a method to determine the environmental impact of these subsidies. It combines a microeconomic framework with an environmental impact

Accurate integration of reflection intensities plays an essential role in the structure determination of the crystallized compound. A new diffraction data integration method, EVAL15, is presented in this thesis. This method uses the principle of general impacts to predict ab initio three-dimensional

Embedded intelligent systems, ranging from tiny implantable biomedical devices to large swarms of autonomous unmanned aerial systems, are becoming pervasive in our daily lives. While we depend on the flawless functioning of such intelligent systems, and often take their behavioral correctness and safety for granted, it is notoriously difficult to generate test cases that expose subtle errors in the implementations of machine learning algorithms. Hence, the validation of intelligent systems is usually achieved by studying their behavior on representative data sets, using methods such as cross-validation and bootstrapping. In this paper, we present a new testing methodology for studying the correctness of intelligent systems. Our approach uses symbolic decision procedures coupled with statistical hypothesis testing. We also use our algorithm to analyze the robustness of a human detection algorithm built using the OpenCV open-source computer vision library. We show that the human detection implementation can fail to detect humans in perturbed video frames even when the perturbations are so small that the corresponding frames look identical to the naked eye.

In the EC-funded project RENEB (Realizing the European Network in Biodosimetry), physical methods applied to fortuitous dosimetric materials are used to complement biological dosimetry, to increase dose assessment capacity for large-scale radiation/nuclear accidents. This paper describes the work performed to implement Optically Stimulated Luminescence (OSL) and Electron Paramagnetic Resonance (EPR) dosimetry techniques. OSL is applied to electronic components and EPR to touch-screen glass from mobile phones. To implement these new approaches, several blind tests and inter-laboratory comparisons (ILC) were organized for each assay. OSL systems have shown good performance. EPR systems also show good performance in controlled conditions, but ILC have also demonstrated that post-irradiation exposure to sunlight increases the complexity of the EPR signal analysis. Physically-based dosimetry techniques offer high capacity and new possibilities for accident dosimetry, especially in the case of large-scale events. Some of the techniques applied can be considered operational (e.g. OSL on Surface Mounting Devices [SMD]) and provide a large increase of measurement capacity for existing networks. Other techniques and devices currently undergoing validation or development in Europe could lead to considerable increases in the capacity of the RENEB accident dosimetry network.

Accident sequences which lead to severe core damage and to the possible release of radioactive fission products into the environment have a very low probability. However, interest in this area increased significantly due to the occurrence of the small-break loss-of-coolant accident at TMI-2, which led to partial core damage, and of the Chernobyl accident in the former USSR, which led to extensive core disassembly and significant release of fission products over several countries. In particular, the latter accident raised international concern over the potential consequences of severe accidents in nuclear reactor systems. One of the significant shortcomings in the analyses of severe accidents is the lack of well-established and reliable scaling criteria for various multiphase flow phenomena. However, scaling criteria are essential to severe accident analysis, because full-scale tests are basically impossible to perform. They are required for (1) designing scaled-down or simulation experiments, (2) evaluating data and extrapolating the data to prototypic conditions, and (3) developing correctly scaled physical models and correlations. In view of this, a new scaling method is developed for the analysis of severe accidents. Its approach is quite different from the conventional methods. In order to demonstrate its applicability, this new stepwise integral scaling method has been applied to the analysis of the corium dispersion problem in direct containment heating. (orig.)

A general method is developed combining fast direct methods and boundary integral equation methods to solve Poisson's equation on irregular exterior regions. The method requires O(N log N) operations where N is the number of grid points. Error estimates are given that hold for regions with corners and other boundary irregularities. Computational results are given in the context of computational aerodynamics for a two-dimensional lifting airfoil. Solutions of boundary integral equations for lifting and nonlifting aerodynamic configurations using preconditioned conjugate gradient are examined for varying degrees of thinness.
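One ingredient of such hybrid schemes, the O(N log N) fast direct solve on a regular region, can be sketched with discrete sine transforms, which diagonalize the 5-point Laplacian under homogeneous Dirichlet data. This is an assumption-laden illustration of the fast-solver component only; the boundary integral coupling for irregular exterior regions described above is not shown.

```python
import numpy as np
from scipy.fft import dstn, idstn

def poisson_solve(f, h):
    """Fast direct solve of the 5-point discrete Laplace(u) = f, u = 0 on the boundary."""
    n = f.shape[0]
    k = np.arange(1, n + 1)
    lam = (2.0 * np.cos(np.pi * k / (n + 1)) - 2.0) / h**2  # 1-D Dirichlet eigenvalues
    f_hat = dstn(f, type=1)                                 # DST-I diagonalizes the operator
    u_hat = f_hat / (lam[:, None] + lam[None, :])
    return idstn(u_hat, type=1)                             # O(N log N) overall

# Check: apply the discrete Laplacian to a known grid function, then recover it.
n = 63
h = 1.0 / (n + 1)
x = np.arange(1, n + 1) * h
X, Y = np.meshgrid(x, x, indexing="ij")
u_true = np.sin(np.pi * X) * np.sin(2 * np.pi * Y)
up = np.zeros((n + 2, n + 2))
up[1:-1, 1:-1] = u_true                                     # zero Dirichlet boundary
f = (up[2:, 1:-1] + up[:-2, 1:-1] + up[1:-1, 2:] + up[1:-1, :-2] - 4 * u_true) / h**2
err = np.max(np.abs(poisson_solve(f, h) - u_true))
```

Because the forward and inverse transforms cost O(N log N) and the diagonal division costs O(N), the whole solve meets the operation count quoted above on the regular grid.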

Keeping the style, content, and focus that made the first edition a bestseller, Integral Transforms and their Applications, Second Edition stresses the development of analytical skills rather than the importance of more abstract formulation. The authors provide a working knowledge of the analytical methods required in pure and applied mathematics, physics, and engineering. The second edition includes many new applications, exercises, comments, and observations with some sections entirely rewritten. It contains more than 500 worked examples and exercises with answers as well as hints to selecte

We give a survey of common strategies for numerical integration (adaptive, Monte Carlo, quasi-Monte Carlo), and attempt to delineate their realm of applicability. The inherent accuracy and error bounds for basic integration methods are given via such measures as the degree of precision of cubature rules, the index of a family of lattice rules, and the discrepancy of uniformly distributed point sets. Strategies incorporating these basic methods often use paradigms to reduce the error by, e.g., increasing the number of points in the domain or decreasing the mesh size, locally or uniformly. For these processes the order of convergence of the strategy is determined by the asymptotic behavior of the error, and may be too slow in practice for the type of problem at hand. For certain problem classes we may be able to improve the effectiveness of the method or strategy by such techniques as transformations, absorbing a difficult part of the integrand into a weight function, suitable partitioning of the domain, and extrapolation or convergence acceleration. Situations warranting the use of these techniques (possibly in an 'automated' way) are described and illustrated by sample applications.

Statistical methods in integrative genomics aim to answer important biological questions by jointly analyzing multiple types of genomic data (vertical integration) or aggregating the same type of data across multiple studies (horizontal integration). In this article, we introduce different types of genomic data and data resources, and then review statistical methods of integrative genomics, with emphasis on the motivation and rationale of these methods. We conclude with some summary points and future research directions. PMID:27482531

This thesis is centered around three topics that share integrability as a common theme, and it explores different methods in the field of integrable models. The first two chapters are about integrable lattice models in statistical physics. The last chapter describes an integrable quantum chain.

Full Text Available In this paper we consider approximate travelling wave solutions to the Korteweg-de Vries equation. The heat-balance integral method is first applied to the problem, using two different quartic approximating functions, and then the refined integral method is investigated. We examine two types of solution, chosen by matching the wave speed to that of the exact solution and by imposing the same area. The first set of solutions is generally better, with an error that is fixed in time. The second set of solutions has an error that grows with time. This is shown to be due to slight discrepancies in the wave speed.

Full Text Available For manufacturing companies to succeed in today's unstable economic environment, it is necessary to restructure the main components of their activities: designing innovative products, production using modern reconfigurable manufacturing systems, a business model that takes the global strategy into account, and management methods using modern management models and tools. The first three components are discussed in numerous publications, for example (Koren, 2010), and are therefore not considered in this article. A large number of publications are devoted to the methods and tools of production management, for example (Halevi, 2007). On this basis, the article discusses the possibility of integrating only the three methods that have received the widest use in recent years, namely: the Six Sigma method - SS (George et al., 2005) and its supplement Design for Six Sigma - DFSS (Taguchi, 2003); Lean production, transformed through its development into "Lean management" and further into "Lean thinking" - Lean (Hirano et al., 2006); and the Theory of Constraints developed by E. Goldratt - TOC (Dettmer, 2001). The article investigates some aspects of this integration: applications in diverse fields, positive features, changes in management structure, etc.

Objective. In this study, we evaluated patient care communication in the integrated care setting of children with cerebral palsy in three Dutch regions in order to identify relevant communication gaps experienced by both parents and involved professionals. - Design. A three-step mixed method

This thesis presents new developments of the Integrated Stress Determination Method (ISDM) with application to the Aespoe Hard Rock Laboratory (HRL), Oskarshamn, Sweden. The new developments involve a 12-parameter representation of the regional stress field in the rock mass. The method is applicable to data from hydraulic fracturing, hydraulic tests on pre-existing fractures (HTPF), and overcoring data from CSIR- and CSIRO-type devices. When hydraulic fracturing/HTPF data are combined with overcoring data, the former may be used to constrain the elastic parameters, i.e. the problem involves 14 model parameters. The Swedish Nuclear Fuel and Waste Management Co. (SKB) has conducted a vast amount of rock stress measurements at the Aespoe HRL since the late 1980s. However, despite the large number of stress measurement data collected in this limited rock volume, variability in the stress field exists. Not only do the results vary depending on measuring technique, e.g. overcoring data indicated larger stress magnitudes compared to hydraulic fracturing data; the results are also affected by existing discontinuities, indicated by non-linear stress magnitudes and orientations versus depth. The objectives for this study are therefore threefold: (1) find explanations for the observed differences between existing hydraulic and overcoring stress data at the Aespoe HRL; (2) explain the non-linear stress distribution indicated by existing stress data; and (3) apply the ISDM, including the new developments, based on the results obtained in steps 1 and 2. To evaluate the observed differences between existing hydraulic and overcoring stress data, a detailed re-interpretation was conducted. Several measurement-related uncertainties were identified and corrected for when possible, which effectively reduced the discrepancies between the hydraulic and overcoring measuring results. Modeling studies managed by SKB have shown that the redistribution of the stresses at Aespoe HRL to a

This text/reference is a detailed look at the development and use of integral equation methods for electromagnetic analysis, specifically for antennas and radar scattering. Developers and practitioners will appreciate the broad-based approach to understanding and utilizing integral equation methods and the unique coverage of historical developments that led to the current state-of-the-art. In contrast to existing books, Integral Equation Methods for Electromagnetics lays the groundwork in the initial chapters so students and basic users can solve simple problems and work their way up to the mo

One of the most important applications of gravity surveys in regional geophysical studies is determining the depth to basement. Conventional methods of solving this problem are based on the spectrum and/or Euler deconvolution analysis of the gravity field and on parameterization of the earth... to be discretized for the calculation of the gravity field. This was especially significant in the modeling and inversion of gravity data for determining the depth to the basement. Another important result was developing a novel method of inversion of gravity data to recover the depth to basement, based on the 3D Cauchy-type integral representation. Our numerical studies determined that the new method is much faster than the conventional volume discretization method in computing the gravity response. Our synthetic model studies also showed that the developed inversion algorithm based on Cauchy-type integrals is capable...

The main purpose of this paper is to establish high temperature structural integrity evaluation procedures for the next generation reactors, which are to be operated at over 500 °C and for 60 years. To do this, comparison studies of high temperature structural design codes and assessment procedures such as ASME-NH (USA), RCC-MR (France), DDS (Japan), and R5 (UK) are carried out in view of the accumulated inelastic strain and the creep-fatigue damage evaluations. Also, the application procedures of the ASME-NH rules with the actual thermal and structural analysis results are described in detail. To overcome the complexity and the engineering costs arising from a real application of the ASME-NH rules by hand, all the procedures established in this study, such as the time-dependent primary stress limits, total accumulated creep ratcheting strain limits, and the creep-fatigue damage limits, are computerized and implemented in the SIE ASME-NH program. Using this program, the selected high temperature structures subjected to two cycle types are evaluated, and parametric studies of the effects of the time step size, primary load, number of cycles, and normal temperature on the creep damage evaluations, and of the effects of the load history on the creep ratcheting strain calculations, are investigated.

This paper is Part 3 in a three-part series of papers addressing operational techniques for applying mass integration principles to design in industry, with special focus on water conservation and wastewater reduction. The presented techniques derive from merging US and Danish experience with industry... The experience with defining the scope of the system and with identifying water flow constraints and water quality constraints is discussed. It is shown how physical constraints for the system design often set a limit for the sophistication of the water recycle network, and thereby also a limit for how sophisticated the method for system design should be. Finally, pinch analysis and system designs for water recycling in a practical case study are shown, documenting large water saving potentials and achievements.

A comparison is made between SnO2, ZnO, and TiO2 single-crystal nanowires and SnO2 polycrystalline nanofibers for gas sensing. Both nanostructures possess a one-dimensional morphology. Different synthesis methods are used to produce these materials: thermal evaporation-condensation (TEC), controlled oxidation, and electrospinning. Advantages and limitations of each technique are listed. Practical issues associated with harvesting, purification, and integration of these materials into sensing devices are detailed. For comparison to the nascent form, these sensing materials are surface coated with Pd and Pt nanoparticles. Gas sensing tests, with respect to H2, are conducted at ambient and elevated temperatures. Comparative normalized responses and time constants for the catalyst and noncatalyst systems provide a basis for identification of the superior metal-oxide nanostructure and catalyst combination. With temperature-dependent data, Arrhenius analyses are made to determine an activation energy for the catalyst-assisted systems.

In many fields of application of mathematics, progress is crucially dependent on the good flow of information between (i) theoretical mathematicians looking for applications, (ii) mathematicians working in applications in need of theory, and (iii) scientists and engineers applying mathematical models and methods. The intention of this book is to stimulate this flow of information. In the first three chapters (accessible to third year students of mathematics and physics and to mathematically interested engineers) applications of Abel integral equations are surveyed broadly including determination of potentials, stereology, seismic travel times, spectroscopy, optical fibres. In subsequent chapters (requiring some background in functional analysis) mapping properties of Abel integral operators and their relation to other integral transforms in various function spaces are investigated, questions of existence and uniqueness of solutions of linear and nonlinear Abel integral equations are treated, and for equatio...

The finite-difference based integration method for evolution-like equations is discussed in detail and framed within the general context of the evolution operator picture. Exact analytical methods are described to solve evolution-like equations in a quite general physical context. The numerical technique based on the factorization formulae of the exponential operator is then illustrated and applied to the evolution operator in both the classical and quantum frameworks. Finally, a general view of the finite differencing schemes is provided, displaying the wide range of applications from the classical Newton equation of motion to quantum field theory.
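The factorization idea can be sketched on the simplest classical example: approximating exp(t(A + B)) by the symmetric product exp(τA/2) exp(τB) exp(τA/2) per step, applied to the harmonic oscillator q' = p, p' = -q, where each split flow is exactly integrable. This is a hedged, generic illustration (the well-known kick-drift-kick leapfrog), not the specific schemes of the paper.

```python
import numpy as np

def strang_step(q, p, dt):
    # symmetric factorization of one step of exp(dt (A + B))
    p -= 0.5 * dt * q   # half "kick":  exact flow of p' = -q
    q += dt * p         # full "drift": exact flow of q' = p
    p -= 0.5 * dt * q   # half "kick"
    return q, p

def evolve(q, p, t, n):
    dt = t / n
    for _ in range(n):
        q, p = strang_step(q, p, dt)
    return q, p

# one full period should return (q, p) near the start, with the
# second-order O(dt^2) error characteristic of the symmetric product
qf, pf = evolve(1.0, 0.0, 2.0 * np.pi, 2000)
```

Halving dt reduces the return error by roughly a factor of four, the signature of the second-order symmetric factorization.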

The nodal integral method (NIM) has been developed for several problems, including the Navier-Stokes equations, the convection-diffusion equation, and the multigroup neutron diffusion equations. The coarse-mesh efficiency of the NIM is not fully realized in problems characterized by a wide range of spatial scales. However, the combination of adaptive mesh refinement (AMR) capability with the NIM can recover the coarse-mesh efficiency by allowing high degrees of resolution in specific localized areas where it is needed, while using a lower resolution everywhere else. Furthermore, certain features of the NIM can be fruitfully exploited in the application of the AMR process. In this paper, we outline a general approach to couple nodal schemes with AMR and then apply it to the convection-diffusion (energy) equation. The development of the NIM with AMR capability (NIM-AMR) is based on the well-known Berger-Oliger method for structured AMR. In general, the main components of all AMR schemes are: 1. the solver; 2. the level-grid hierarchy; 3. the selection algorithm; 4. the communication procedures; 5. the governing algorithm. The first component, the solver, consists of the numerical scheme for the governing partial differential equations and the algorithm used to solve the resulting system of discrete algebraic equations. In the case of the NIM-AMR, the solver is the iterative approach to the solution of the set of discrete equations obtained by applying the NIM. Furthermore, in the NIM-AMR, the level-grid hierarchy (the second component) is based on the Hierarchical Adaptive Mesh Refinement (HAMR) system, and hence the details of the hierarchy are omitted here. In the selection algorithm, regions of the domain that require mesh refinement are identified. The criterion to select regions for mesh refinement can be based on the magnitude of the gradient or on the Richardson truncation error estimate. Although an excellent choice for the selection criterion, the Richardson
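The gradient-magnitude selection criterion mentioned above (component 3) can be sketched as a simple cell-flagging pass. This is a hedged, generic illustration on a structured grid, not NIM-AMR code: the steep-front field, grid size and threshold are all invented for the example.

```python
import numpy as np

def flag_for_refinement(u, dx, threshold):
    """Boolean mask of cells marked for refinement by the gradient-magnitude criterion."""
    gx, gy = np.gradient(u, dx)        # gx: derivative along axis 0 (x here)
    return np.hypot(gx, gy) > threshold

# example: a steep front near x = 0.5 gets flagged, smooth regions do not
x = np.linspace(0.0, 1.0, 101)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.tanh((X - 0.5) / 0.02)          # sharp transition layer at x = 0.5
mask = flag_for_refinement(u, x[1] - x[0], threshold=5.0)
```

Only the narrow band around the front is flagged, so refinement (and hence work) concentrates where the solution actually varies rapidly, which is the point of coupling AMR to a coarse-mesh scheme.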

Full Text Available The measurement and evaluation of performance are critical for the efficient and effective functioning of the economic system, because this allows for the analysis of the extent to which the defined objectives are achieved. Organizational performance is measured by different methods, both quantitative and qualitative. Many of the known methods for the evaluation and measurement of organizational performance take into account only financial indicators, while ignoring the non-financial ones. The integration of both kinds of indicators, through the combined application of multiple methods and the comparison of their results, should provide a more complete and objective picture of organizational performance. The Analytic Hierarchy Process (AHP) is a formal framework for solving complex decision-making problems, as well as a systemic procedure for the hierarchical presentation of the problem elements. Data Envelopment Analysis (DEA) is a non-parametric approach based on linear programming, which allows for the calculation of the efficiency of decision-making units within a group of organizations. The work is an illustration of the method and framework of the combined use of the multi-criteria analysis methods for the measurement and evaluation of the performance of higher education institutions in the Republic of Serbia. The advantages of this approach are reflected in overcoming the shortcomings of a partial application of the AHP and the DEA methods by utilizing a new, hybrid, DEAHP (Data Envelopment Analytic Hierarchy Process) method. Performance evaluation through an integrated application of the AHP and the DEA methods provides more objective results and more reliable solutions to the observed problem, thus creating a valuable information base for high-quality strategic decision making in higher education institutions, both at the national level and at the level of individual institutions.
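The AHP building block of such a hybrid can be sketched in a few lines: priority weights are the normalized principal eigenvector of a reciprocal pairwise-comparison matrix, with Saaty's consistency index checking the judgments. This is a hedged illustration; the 3x3 matrix is invented, not data from the study, and the DEA linear-programming side is not shown.

```python
import numpy as np

# illustrative reciprocal pairwise-comparison matrix (Saaty's 1-9 scale)
A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 0.5, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                         # priority weights, sum to 1

# Saaty's consistency index: CI = (lambda_max - n) / (n - 1); small CI
# means the pairwise judgments are nearly transitive
n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)
```

In a DEAHP-style hybrid, weights of this kind constrain or aggregate the DEA efficiency scores, which is what lets the combined method temper DEA's unrestricted weight flexibility.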

This thesis demonstrates the applicability of data assimilation methods to improve observed and modeled ice motion fields and to demonstrate the effects of assimilated motion on Arctic processes important to the global climate and of practical concern to human activities. Ice motions derived from 85 GHz and 37 GHz SSM/I imagery and estimated from two-dimensional dynamic-thermodynamic sea ice models are compared to buoy observations. Mean error, error standard deviation, and correlation with buoys are computed for the model domain. SSM/I motions generally have a lower bias, but higher error standard deviations and lower correlation with buoys than model motions. There are notable variations in the statistics depending on the region of the Arctic, season, and ice characteristics. Assimilation methods are investigated and blending and optimal interpolation strategies are implemented. Blending assimilation improves error statistics slightly, but the effect of the assimilation is reduced due to noise in the SSM/I motions and is thus not an effective method to improve ice motion estimates. However, optimal interpolation assimilation reduces motion errors by 25--30% over modeled motions and 40--45% over SSM/I motions. Optimal interpolation assimilation is beneficial in all regions, seasons and ice conditions, and is particularly effective in regimes where modeled and SSM/I errors are high. Assimilation alters annual average motion fields. Modeled ice products of ice thickness, ice divergence, Fram Strait ice volume export, transport across the Arctic and interannual basin averages are also influenced by assimilated motions. Assimilation improves estimates of pollutant transport and corrects synoptic-scale errors in the motion fields caused by incorrect forcings or errors in model physics. The portability of the optimal interpolation assimilation method is demonstrated by implementing the strategy in an ice thickness distribution (ITD) model. This research presents an

Full Text Available Traditionally, asphalt pavements are considered as linear elastic materials in the finite element (FE) method to save computational time in engineering design. However, asphalt mixture exhibits linear viscoelasticity at small strain and low temperature. Therefore, the results derived from elastic analysis will inevitably lead to discrepancies from reality. Currently, several FE programs have already adopted viscoelasticity, but the high hardware demands and long execution times render them suitable primarily for research purposes. The semianalytical finite element method (SAFEM) was proposed to solve the above-mentioned problem. The SAFEM is a three-dimensional FE algorithm that only requires a two-dimensional mesh by incorporating the Fourier series in the third dimension, which can significantly reduce the computational time. This paper describes the development of SAFEM to capture the viscoelastic property of asphalt pavements by using a recursive formulation. The formulation is verified by comparison with the commercial FE software ABAQUS. An application example is presented for simulations of creep deformation of the asphalt pavement. The investigation shows that the SAFEM is an efficient tool for pavement engineers to quickly and reliably predict asphalt pavement responses; furthermore, the SAFEM provides a flexible, robust platform for future development in the numerical simulation of asphalt pavements.

Full Text Available Source apportionment of river water pollution is critical in water resource management and aquatic conservation. Comprehensive application of various GIS-based multivariate statistical methods was performed to analyze datasets (2009–2011) on water quality in the Liao River system (China). Cluster analysis (CA) classified the 12 months of the year into three groups (May–October, February–April and November–January) and the 66 sampling sites into three groups (groups A, B and C) based on similarities in water quality characteristics. Discriminant analysis (DA) determined that temperature, dissolved oxygen (DO), pH, chemical oxygen demand (CODMn), 5-day biochemical oxygen demand (BOD5), NH4+–N, total phosphorus (TP) and volatile phenols were significant variables affecting temporal variations, with 81.2% correct assignments. Principal component analysis (PCA) and positive matrix factorization (PMF) identified eight potential pollution factors for each part of the data structure, explaining more than 61% of the total variance. Oxygen-consuming organics from cropland and woodland runoff were the main latent pollution factor for group A. For group B, the main pollutants were oxygen-consuming organics, oil, nutrients and fecal matter. For group C, the evaluated pollutants primarily included oxygen-consuming organics, oil and toxic organics.
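The PCA step used above can be sketched in a few lines: standardize each variable to z-scores (so units do not dominate), then extract components via SVD and report the variance fraction each explains. This is a hedged sketch on synthetic data standing in for the monitoring records; the correlated pair of columns mimics two variables driven by one latent pollution factor.

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 8))                     # 200 samples, 8 quality variables
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=200)    # correlated pair -> shared factor

Z = (X - X.mean(axis=0)) / X.std(axis=0)          # z-score standardization
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s**2 / np.sum(s**2)                   # variance fraction per component
scores = U * s                                    # component scores per sample
loadings = Vt.T                                   # variable loadings per component
```

High loadings on a component identify which measured variables move together, which is exactly how latent pollution factors such as "oxygen-consuming organics" are interpreted from the rotated output.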

For many years, the subject of functional equations has held a prominent place in the attention of mathematicians. In more recent years this attention has been directed to a particular kind of functional equation, an integral equation, wherein the unknown function occurs under the integral sign. The study of this kind of equation is sometimes referred to as the inversion of a definite integral. While scientists and engineers can already choose from a number of books on integral equations, this new book encompasses recent developments including some preliminary backgrounds of formulations of integral equations governing the physical situation of the problems. It also contains elegant analytical and numerical methods, and an important topic of the variational principles. Primarily intended for senior undergraduate students and first year postgraduate students of engineering and science courses, students of mathematical and physical sciences will also find many sections of direct relevance. The book contains eig...

Two integrated methods, AMT and high-precision ground magnetic survey, were applied to the exploration of granite-hosted uranium deposits in the Yin gongshan area in the middle part of Nei Monggol. Through methodological experiments and analysis of the application results, it is concluded that AMT has good vertical resolution and can reliably survey the thickness of rock masses, the position of fractures and their conditions at depth, the spatial distribution features of fracture zones, etc., but its reflection of rock masses and xenoliths is not clear. High-precision ground magnetic survey can delineate the distribution range of rock masses and xenoliths and identify rock contact zones, fractures, etc., but it generally measures position only and is not clear on occurrence and extension. Some geological structures can be resolved by using the integrated methods on the basis of their complementary advantages. Effective technological measures are thus provided for the exploration of deeply buried uranium bodies in granite-hosted uranium deposits and for extension on the outskirts of the deposits. (authors)

Full Text Available We introduce the Aumann fuzzy improper integral to define the convolution product of a fuzzy mapping and a crisp function in this paper. The Laplace convolution formula is proved in this case and used to solve fuzzy integro-differential equations with kernel of convolution type. Then, we report and correct an error in the article by Salahshour et al. dealing with the same topic.

In this paper we develop an integrating factor matrix method to derive conditions for the existence of first integrals. We use this novel method to obtain first integrals, along with the conditions for their existence, for two- and three-dimensional Lotka-Volterra systems with constant terms. The results are compared to previous results obtained by other methods.
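The kind of result obtained for Lotka-Volterra systems can be checked symbolically. As a hedged illustration (the textbook special case without constant terms, not the constant-term systems treated in the paper), for x' = x(a - by), y' = y(-c + dx) the function H = dx - c ln x + by - a ln y is a first integral, i.e. its derivative along trajectories vanishes:

```python
import sympy as sp

x, y, a, b, c, d = sp.symbols('x y a b c d', positive=True)
xdot = x * (a - b * y)              # classical predator-prey system
ydot = y * (-c + d * x)

# candidate first integral for the constant-term-free case
H = d * x - c * sp.log(x) + b * y - a * sp.log(y)

# time derivative along the flow: dH/dt = H_x * xdot + H_y * ydot
dHdt = sp.diff(H, x) * xdot + sp.diff(H, y) * ydot
dHdt_simplified = sp.simplify(dHdt)
```

The same verification pattern (differentiate the candidate along the vector field and simplify) applies to first integrals produced by the integrating factor matrix method, whatever their origin.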

This paper reviews various issues in the integration of applications with a building model... (Truncated.)

Traditional textile materials can be transformed into functional electronic components upon being dyed or coated with films of intrinsically conducting polymers, such as poly(aniline), poly(pyrrole) and poly(3,4-ethylenedioxythiophene). A variety of textile electronic devices are built from the conductive fibers and fabrics thus obtained, including: physiochemical sensors, thermoelectric fibers/fabrics, heated garments, artificial muscles and textile supercapacitors. In all these cases, electrical performance and device ruggedness is determined by the morphology of the conducting polymer active layer on the fiber or fabric substrate. Tremendous variation in active layer morphology can be observed with different coating or dyeing conditions. Here, we summarize various methods used to create fiber- and fabric-based devices and highlight the influence of the coating method on active layer morphology and device stability.

Full Text Available The current article points out some of the tasks and challenges companies must face in order to integrate their computerized systems and applications and then to place them on the Web. Also, the article shows how the Java 2 Enterprise Edition Platform and architecture helps the Web integration of applications. By offering standardized integration contracts, J2EE Platform allows application servers to play a key role in the process of Web integration of the applications.

The development of a new risk monitor system is introduced in this paper, which can be applied not only to severe accident prevention in daily operation but also to mitigating the radiological hazard just after a severe accident happens and to long-term management of post-severe-accident consequences. The fundamental method is summarized, showing how to configure the Plant Defense-in-Depth (DiD) Risk Monitor as an object-oriented software system based on a functional modeling approach. Following the authors' preceding preliminary study for AP1000, the way of realizing the proposed method of configuring the plant DiD risk monitor was investigated for a safety-enhanced Japanese PWR design, to meet the tight anti-severe-accident requirements set by national regulation in Japan after the Fukushima Daiichi accident. The result of this example practice for the Japanese PWR was for level 4 of the DiD in the case of a beyond-design-basis accident, that is, loss of all AC power plus RCP seal LOCA, against the former AP1000 case for level 3 DiD in the case of a large LOCA.

This work deals with continuous integration of web applications, especially those written in PHP. The main objective is the selection of a server for continuous integration, and its deployment and configuration for continuous integration of PHP web applications. The first chapter describes the concept of continuous integration and its individual techniques. The second chapter deals with the choice of a server for continuous integration and its basic settings. The third chapter contains an overvi...

A modern presentation of integral methods in low-frequency electromagnetics. This book provides state-of-the-art knowledge on integral methods in low-frequency electromagnetics. Blending theory with numerous examples, it introduces key aspects of the integral methods used in engineering as a powerful alternative to PDE-based models. Readers will get complete coverage of: the electromagnetic field and its basic characteristics; an overview of solution methods; solutions of electromagnetic fields by integral expressions; and integral and integrodifferential methods.

The application of environmental strategies requires scoring and evaluation methods that provide an integrated vision of the economic and environmental performance of systems. The vector optimisation, ratio and weighted addition of indicators are the three most prevalent techniques for addressing

The paper deals with the investigation of applications of the method of noncommutative integration of linear partial differential equations. A nontrivial example is given: the integration of the three-dimensional wave equation with the use of non-Abelian quadratic algebras.

We give numerical integration results for Feynman loop diagrams through 3-loop, such as those covered by Laporta [1]. The methods are based on automatic adaptive integration, using iterated integration and extrapolation with programs from the QUADPACK package, or multivariate techniques from the ParInt package. The DQAGS algorithm from QUADPACK accommodates boundary singularities of fairly general types. ParInt is a package for multivariate integration layered over MPI (Message Passing Interface); it runs on clusters and incorporates advanced parallel/distributed techniques such as load balancing among processes that may be distributed over a network of nodes. Results are included for 3-loop self-energy diagrams without IR (infrared) or UV (ultraviolet) singularities. A procedure based on iterated integration and extrapolation yields a novel method of numerical regularization for integrals with UV terms, and is applied to a set of 2-loop self-energy diagrams with UV singularities. (paper)
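The iterated-integration strategy described above can be sketched in a few lines. The following is a minimal pure-Python stand-in for QUADPACK-style adaptive quadrature; the function names, tolerances, and test integrand are illustrative, not the DQAGS or ParInt APIs:

```python
import math

def adaptive_simpson(f, a, b, tol=1e-8, max_depth=20):
    """Recursive adaptive Simpson quadrature -- a minimal stand-in for the
    adaptive schemes in QUADPACK (illustrative, not the DQAGS algorithm)."""
    def simpson(fa, fm, fb, a, b):
        return (b - a) / 6.0 * (fa + 4.0 * fm + fb)

    def recurse(a, b, fa, fm, fb, whole, tol, depth):
        m = 0.5 * (a + b)
        lm, rm = 0.5 * (a + m), 0.5 * (m + b)
        flm, frm = f(lm), f(rm)
        left = simpson(fa, flm, fm, a, m)
        right = simpson(fm, frm, fb, m, b)
        if depth == 0 or abs(left + right - whole) < 15.0 * tol:
            # Richardson correction term sharpens the estimate.
            return left + right + (left + right - whole) / 15.0
        return (recurse(a, m, fa, flm, fm, left, tol / 2, depth - 1)
                + recurse(m, b, fm, frm, fb, right, tol / 2, depth - 1))

    fa, fm, fb = f(a), f(0.5 * (a + b)), f(b)
    return recurse(a, b, fa, fm, fb, simpson(fa, fm, fb, a, b), tol, max_depth)

def iterated_double_integral(f, ax, bx, ay, by, tol=1e-8):
    # Iterated integration: the inner x-integral is evaluated adaptively
    # for every y requested by the outer adaptive rule.
    return adaptive_simpson(
        lambda y: adaptive_simpson(lambda x: f(x, y), ax, bx, tol * 1e-2),
        ay, by, tol)

value = iterated_double_integral(lambda x, y: 1.0 / (1.0 + x + y), 0, 1, 0, 1)
# Exact value: 3*ln 3 - 4*ln 2 = ln(27/16)
```

Loop integrals add singular integrands and extrapolation on a sequence of regulated integrals on top of this basic pattern; the sketch shows only the iterated adaptive core.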

This book presents the results of discussions and presentations from the latest ISDT event (2014), which was dedicated to the 94th birthday anniversary of Prof. Lotfi A. Zadeh, the father of fuzzy logic. The book consists of three main chapters, namely: Chapter 1, Integrated Systems Design; Chapter 2, Knowledge, Competence and Business Process Management; and Chapter 3, Integrated Systems Technologies. Each article presents novel and scientific research results with respect to the target goal of improving our common understanding of KT integration.

This new resource covers the latest developments in computational electromagnetic methods, with emphasis on cutting-edge applications. The book is designed to extend the existing literature to the latest developments in computational electromagnetic methods, which are of interest to readers in both academic and industrial areas. The topics include advanced techniques in MoM, FEM and FDTD, the spectral domain method, GPU and Phi hardware acceleration, metamaterials, frequency- and time-domain integral equations, and statistical methods in bio-electromagnetics.

SOCIAL is an advanced, object-oriented development tool for integrating intelligent and conventional applications across heterogeneous hardware and software platforms. SOCIAL defines a family of 'wrapper' objects called agents, which incorporate predefined capabilities for distributed communication and control. Developers embed applications within agents and establish interactions between distributed agents via non-intrusive message-based interfaces. This paper describes a predefined SOCIAL agent that is specialized for integrating C Language Integrated Production System (CLIPS)-based applications. The agent's high-level Application Programming Interface supports bidirectional flow of data, knowledge, and commands to other agents, enabling CLIPS applications to initiate interactions autonomously, and respond to requests and results from heterogeneous remote systems. The design and operation of CLIPS agents are illustrated with two distributed applications that integrate CLIPS-based expert systems with other intelligent systems for isolating and mapping problems in the Space Shuttle Launch Processing System at the NASA Kennedy Space Center.

Highlights: • An integrated framework that combines IDA with the energy-saving potential method is proposed. • An energy saving analysis and management framework for complex chemical processes is obtained. • The proposed method is efficient for energy optimization and carbon emission reduction in complex chemical processes. - Abstract: Energy saving and management of complex chemical processes play a crucial role in sustainable development. In order to analyze the effect that technology, management level, and production structure have on energy efficiency and energy saving potential, this paper proposes a novel integrated framework that combines index decomposition analysis (IDA) with the energy saving potential method. The IDA method can effectively obtain the energy activity, energy hierarchy and energy intensity effects, based on a data-driven approach, to reflect the impact of energy usage. The energy saving potential method can verify the correctness of the improvement direction proposed by the IDA method. Meanwhile, energy efficiency improvement, energy consumption reduction and energy savings can be visually discovered with the proposed framework. A demonstration analysis of ethylene production has verified the practicality of the proposed method. Moreover, corresponding improvements for the ethylene production can be obtained based on the demonstration analysis. The energy efficiency index and the energy saving potential of the worst months can be increased by 6.7% and 7.4%, respectively, and the carbon emissions can be reduced by 7.4–8.2%.
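As a concrete illustration of the IDA step, the sketch below applies an additive LMDI-type decomposition to a toy two-factor model E = activity × intensity. The two-factor structure and the numbers are hypothetical, not the paper's actual index model:

```python
import math

def logmean(a, b):
    # Logarithmic mean, the weighting kernel of LMDI decomposition.
    return a if a == b else (a - b) / (math.log(a) - math.log(b))

def lmdi_decompose(a0, i0, a1, i1):
    """Additive LMDI-I decomposition of the change in E = activity * intensity
    into an activity effect and an intensity effect (illustrative sketch;
    the paper's actual index structure may differ)."""
    e0, e1 = a0 * i0, a1 * i1
    w = logmean(e1, e0)
    activity_effect = w * math.log(a1 / a0)
    intensity_effect = w * math.log(i1 / i0)
    return activity_effect, intensity_effect

# Hypothetical numbers: production up 10%, energy intensity down 5%.
act, inten = lmdi_decompose(a0=100.0, i0=2.0, a1=110.0, i1=1.9)
# act + inten reproduces the total change E1 - E0 exactly (LMDI-I is
# residual-free in additive form).
```

A real application would sum such terms over sub-processes and more factors (activity, structure, intensity), but the residual-free property shown here is the reason LMDI-type IDA is preferred.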

A study is being conducted to develop and analyze alternative methods for testing of containment integrity. The study is focused on techniques for continuously monitoring containment integrity to provide rapid detection of existing leaks, thus providing greater certainty of the integrity of the containment at any time. The study is also intended to develop techniques applicable to the currently required Type A integrated leakage rate tests. A brief discussion of the range of alternative methods currently being considered is presented. The methods include applicability to all major containment types, operating and shutdown plant conditions, and quantitative and qualitative leakage measurements. The techniques are analyzed in accordance with the current state of knowledge of each method. The bulk of the techniques discussed are in the conceptual stage, have not been tested in actual plant conditions, and are presented here as a possible future direction for evaluating containment integrity. Of the methods considered, no single method provides optimum performance for all containment types. Several methods are limited in the types of containment for which they are applicable. The results of the study to date indicate that techniques for continuous monitoring of containment integrity exist for many plants and may be implemented at modest cost.

Describing state-of-the-art solutions in distributed system architectures, Integration of Services into Workflow Applications presents a concise approach to the integration of loosely coupled services into workflow applications. It discusses key challenges related to the integration of distributed systems and proposes solutions, both in terms of theoretical aspects such as models and workflow scheduling algorithms, and technical solutions such as software tools and APIs. The book provides an in-depth look at workflow scheduling and proposes a way to integrate several different types of services

A book of techniques and applications, this text defines the path integral and illustrates its uses by example. It is suitable for advanced undergraduates and graduate students in physics; its sole prerequisite is a first course in quantum mechanics. For applications requiring specialized knowledge, the author supplies background material.The first part of the book develops the techniques of path integration. Topics include probability amplitudes for paths and the correspondence limit for the path integral; vector potentials; the Ito integral and gauge transformations; free particle and quadra

Full Text Available A subdomain precise integration method is developed for the dynamical responses of periodic structures comprising many identical structural cells. The proposed method is based on the precise integration method, the subdomain scheme, and the repeatability of the periodic structures. In the proposed method, each structural cell is seen as a super element that is solved using the precise integration method, considering the repeatability of the structural cells. The computational efforts and the memory size of the proposed method are reduced, while high computational accuracy is achieved. Therefore, the proposed method is particularly suitable to solve the dynamical responses of periodic structures. Two numerical examples are presented to demonstrate the accuracy and efficiency of the proposed method through comparison with the Newmark and Runge-Kutta methods.
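The core of the precise integration method, computing the transfer matrix exp(H·dt) by the 2^N doubling algorithm while keeping the tiny increment separate from the identity, can be sketched as follows (a generic textbook version, not the paper's subdomain implementation):

```python
import math

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(len(A))] for i in range(len(A))]

def mat_scale(A, s):
    return [[s * x for x in row] for row in A]

def precise_exponential(H, dt, N=20, taylor_terms=4):
    """Precise integration method: T(dt) = exp(H*dt) by the 2^N algorithm.
    Ta = exp(H*tau) - I is built from a short Taylor series at tau = dt/2^N,
    then doubled N times via Ta <- 2*Ta + Ta*Ta, which keeps the tiny
    increment separate from the identity to avoid round-off loss."""
    n = len(H)
    tau = dt / (2 ** N)
    Ta = [[0.0] * n for _ in range(n)]          # accumulates exp(H*tau) - I
    term = [[tau * H[i][j] for j in range(n)] for i in range(n)]
    for k in range(1, taylor_terms + 1):
        Ta = mat_add(Ta, term)                   # add (H*tau)^k / k!
        term = mat_scale(mat_mul(term, H), tau / (k + 1))
    for _ in range(N):                           # doubling: exp(2t)-I
        Ta = mat_add(mat_scale(Ta, 2.0), mat_mul(Ta, Ta))
    return [[Ta[i][j] + (1.0 if i == j else 0.0) for j in range(n)]
            for i in range(n)]

# Undamped oscillator x'' + x = 0 in state form [x, v]' = H [x, v]:
H = [[0.0, 1.0], [-1.0, 0.0]]
T = precise_exponential(H, dt=0.1)
# T should match the rotation [[cos .1, sin .1], [-sin .1, cos .1]]
# essentially to machine precision.
```

For a periodic structure, each identical cell would contribute the same transfer matrix, so this exponential is computed once and reused, which is where the savings described above come from.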

The rapid development of avionics systems is driving the application of integrated modular avionics (IMA) systems. While IMA improves avionics system integration, it also increases the complexity of system test, so simplified test methods for IMA systems are needed. An IMA system provides a module platform that runs multiple applications and shares processing resources. Compared with a federated avionics system, fault isolation in an IMA system is difficult; therefore, the critical problem in IMA system verification is how to test resources shared by multiple applications. For a simple avionics system, traditional test methods can readily cover the whole system, but for a complex system it is hard to completely test a large, integrated avionics system. This paper therefore proposes applying compositional verification theory to IMA system test, reducing the test process, improving efficiency, and consequently economizing on the costs of IMA system integration.

A numerical method is proposed for resonance integral calculations, and a cubic fit based on a least squares approximation to compute the optimum Bell factor is given. The numerical method is based on the discretization of the neutron slowing-down equation. The scattering integral is approximated by taking into account the location of the upper limit in the energy domain. The accuracy of the method has been tested by computing resonance integrals for isolated uranium dioxide rods and comparing the results with empirical values. (orig.)

Web services have been opening a wide avenue for software integration. In this paper, we have reported our experiments with three applications that are built by utilizing and providing web services for Geographic Information Systems (GIS...

for analysing such data carry the potential to revolutionize tasks such as medical diagnostics where often decisions need to be based on only a few high-dimensional observations. This explosion in data dimensionality has sparked the development of novel statistical methods. In contrast, classical statistics...

In this chapter we present Kriging, also known as a Gaussian process (GP) model, which is a mathematical interpolation method. To select the input combinations to be simulated, we use Latin hypercube sampling (LHS); we allow uniform and non-uniform distributions of the simulation inputs. Besides
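A Latin hypercube design with uniform marginals can be sketched as follows (an illustrative pure-Python version; statistical libraries offer equivalent generators with more options):

```python
import random

def latin_hypercube(n_samples, n_dims, rng=None):
    """Latin hypercube sample on the unit cube: each dimension is divided
    into n_samples equal strata and every stratum is hit exactly once, so
    the one-dimensional marginals are evenly spread (illustrative sketch,
    not a specific library's API)."""
    rng = rng or random.Random(0)
    columns = []
    for _ in range(n_dims):
        # One random point inside each stratum, then shuffle the strata.
        column = [(k + rng.random()) / n_samples for k in range(n_samples)]
        rng.shuffle(column)
        columns.append(column)
    # Transpose to a list of n_samples points.
    return [tuple(col[i] for col in columns) for i in range(n_samples)]

pts = latin_hypercube(10, 2)
# Each dimension has exactly one point in each interval [k/10, (k+1)/10).
```

Non-uniform input distributions, as allowed in the text, are obtained by pushing each uniform coordinate through the inverse CDF of the desired distribution.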

Triggered by a desire to investigate, numerically, the planetary precession through a long-term numerical integration of the solar system, we developed a new formulation of numerical integration of orbital motion named manifold correction methods. The main trick is to rigorously retain the consistency of physical relations, such as the orbital energy, the orbital angular momentum, or the Laplace integral, of a binary subsystem. This maintenance is done by applying a correction to the integrated variables at each integration step. Typical corrections are certain geometric transformations, such as spatial scaling and spatial rotation, which are commonly used in the comparison of reference frames, or mathematically reasonable operations, such as modularization of angle variables into the standard domain [-pi, pi). The final form into which the manifold correction methods evolved is the orbital longitude methods, which enable us to conduct an extremely precise integration of orbital motions. In unperturbed orbits, the integration errors are suppressed at the machine epsilon level for an indefinitely long period. In perturbed cases, on the other hand, the errors initially grow in proportion to the square root of time and then increase more rapidly, the onset of which depends on the type and magnitude of the perturbations. This feature is also realized for highly eccentric orbits by applying the same idea as used in KS-regularization. In particular, the introduction of time elements greatly enhances the performance of numerical integration of KS-regularized orbits, whether the scaling is applied or not.
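The simplest member of this family, a scaling-type correction that restores the orbital energy of a Kepler orbit after every step, can be sketched as follows (an illustrative toy in two dimensions; the orbital longitude methods in the text are considerably more elaborate):

```python
import math

def kepler_accel(x, y, mu=1.0):
    r3 = (x * x + y * y) ** 1.5
    return -mu * x / r3, -mu * y / r3

def energy(x, y, vx, vy, mu=1.0):
    return 0.5 * (vx * vx + vy * vy) - mu / math.hypot(x, y)

def step_with_scaling(x, y, vx, vy, h, e0, mu=1.0):
    """One leapfrog step followed by a manifold correction: the velocity is
    rescaled so the orbital energy returns exactly to its initial value e0
    (a simple scaling-type correction; illustrative sketch only)."""
    ax, ay = kepler_accel(x, y, mu)
    vx += 0.5 * h * ax; vy += 0.5 * h * ay
    x += h * vx; y += h * vy
    ax, ay = kepler_accel(x, y, mu)
    vx += 0.5 * h * ax; vy += 0.5 * h * ay
    # Correction: choose s so that 0.5*(s*v)^2 - mu/r == e0.
    v2 = vx * vx + vy * vy
    s = math.sqrt(2.0 * (e0 + mu / math.hypot(x, y)) / v2)
    return x, y, s * vx, s * vy

# Circular orbit at r = 1: v = 1, energy e0 = -1/2.
x, y, vx, vy = 1.0, 0.0, 0.0, 1.0
e0 = energy(x, y, vx, vy)
for _ in range(1000):
    x, y, vx, vy = step_with_scaling(x, y, vx, vy, 0.01, e0)
drift = abs(energy(x, y, vx, vy) - e0)
# The energy drift stays at round-off level instead of growing secularly.
```

Rotation-type corrections enforcing the angular momentum or the Laplace integral follow the same pattern: integrate one step, then project the state back onto the conserved manifold.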

Full Text Available Purpose. To demonstrate the feasibility of the proposed integrated optimization of various MTS parameters to reduce capital investments as well as operational and maintenance expenses, thereby making the use of MTS reasonable. At present, Maglev Transport Systems (MTS) are hardly used for High-Speed Ground Transportation (HSGT); significant capital investments and high operational and maintenance costs are the main reasons. Therefore, this article justifies the use of the Theory of Complex Optimization of Transport (TCOT), developed by one of the co-authors, to reduce MTS costs. Methodology. According to TCOT, the authors developed an abstract model of the generalized transport system (AMSTG). This model mathematically determines the optimal balance between all components of the system and thus provides the ultimate adaptation of any transport system to the conditions of its application. To identify areas for effective use of MTS, the authors used TCOT to develop a dynamic model of the distribution and expansion of the spheres of effective use of transport systems (DMRRSEPTS). Based on this model, the most efficient transport system was selected for each individual track. The main criterion for estimating the efficiency of MTS application is the specific transportation tariff, obtained from the calculation of the payback of the total given expenses over a standard payback period or the term of a credit. Findings. The completed multiple calculations for four types of MTS (TRANSRAPID, MLX01, TRANSMAG and TRANSPROGRESS) demonstrated the efficiency of the integrated optimization of the parameters of such systems. This research made it possible to expand the scope of effective use of MTS by about a factor of two. The achieved results were presented at many international conferences in Germany, Switzerland, the United States, China, Ukraine, etc. Using MTS as an

Construction of a high-capacity anode is highly important for the development of next-generation high-performance lithium ion batteries (LIBs). Herein we fabricate a Si/Ag nanowires/reduced graphene oxide (Si/Ag NWs/rGO) integrated composite film by introducing binary conductive networks (Ag NWs and rGO) into Si active materials with the help of a facile vacuum-filtration method. Active Si nanoparticles are homogeneously encapsulated by the binary Ag NWs-rGO conductive network, in which Ag NWs are interwoven among the rGO sheets. The electrochemical properties of the integrated Si/Ag NWs/rGO composite film are thoroughly characterized as an anode of LIBs. Compared to the Si/rGO composite film, the integrated Si/Ag NWs/rGO composite film exhibits enhanced electrochemical performance with higher capacity, better high-rate capability and cycling stability (1269 mAh g⁻¹ at 50 mA g⁻¹ up to 50 cycles). The binary conductive network plays a positive role in the enhancement of performance due to its faster ion/electron transfer and better resistance to the structural degradation caused by volume expansion during the cycling process.

Landslide hazard assessment is the estimation of a target area where landslides of a particular type, volume, runout and intensity may occur within a given period. The first step in analyzing landslide hazard consists in assessing the spatial and temporal failure probability (when the information is available, i.e. susceptibility assessment). Two types of approach are generally recommended to achieve this goal: (i) qualitative approaches (i.e. inventory-based methods and knowledge-driven methods) and (ii) quantitative approaches (i.e. data-driven methods or deterministic physically based methods). Among quantitative approaches, deterministic physically based methods (PBM) are generally used at local and/or site-specific scales (1:5,000-1:25,000 and >1:5,000, respectively). The main advantage of these methods is the calculation of the probability of failure (safety factor) under specific environmental conditions, and for some models it is possible to integrate land use and climate change. The major drawbacks, on the other hand, are the large amounts of reliable and detailed data required (especially material types, their thickness and the heterogeneity of geotechnical parameters over a large area) and the fact that only shallow landslides are taken into account; this is why they are often used at site-specific scales (>1:5,000). Thus, to take into account (i) material heterogeneity, (ii) spatial variation of physical parameters and (iii) different landslide types, the French Geological Survey (i.e. BRGM) has developed a physically based model (PBM) implemented in a GIS environment. This PBM couples a global hydrological model (GARDENIA®), including a transient unsaturated/saturated hydrological component, with a physically based model computing the stability of slopes (ALICE®, Assessment of Landslides Induced by Climatic Events) based on the Morgenstern-Price method for any slip surface. The variability of mechanical parameters is handled by a Monte Carlo approach. The
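The Monte Carlo treatment of mechanical parameter variability can be illustrated with a deliberately simplified slope model. The sketch below uses the infinite-slope factor of safety and assumed parameter distributions as a stand-in for the Morgenstern-Price analysis used by ALICE®; all names and values are hypothetical:

```python
import math
import random

def infinite_slope_fs(c, phi_deg, gamma, depth, slope_deg, m=0.5,
                      gamma_w=9.81):
    """Factor of safety of the infinite-slope model with water-table ratio m
    (a simplified stand-in for the Morgenstern-Price analysis in the text)."""
    beta = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    effective = (gamma - m * gamma_w) * depth * math.cos(beta) ** 2
    resisting = c + effective * math.tan(phi)
    driving = gamma * depth * math.sin(beta) * math.cos(beta)
    return resisting / driving

def failure_probability(n=20000, seed=1):
    # Mechanical parameters drawn from assumed normal distributions.
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        c = max(0.1, rng.gauss(5.0, 2.0))    # cohesion, kPa (assumed)
        phi = rng.gauss(30.0, 3.0)           # friction angle, deg (assumed)
        fs = infinite_slope_fs(c, phi, gamma=19.0, depth=2.0, slope_deg=35.0)
        failures += fs < 1.0
    return failures / n

pf = failure_probability()
# pf estimates P(FS < 1) under the assumed parameter distributions.
```

In the GIS setting described above, such a probability is computed cell by cell, with the hydrological model supplying the transient saturation ratio.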

Full Text Available The introduction of new materials as PowerPoint presentations is the most convenient way of teaching a course or displaying a scientific paper. To support this function, most schools, universities and institutions are equipped with projectors and computers. For controlling the presentation of the materials, the persons in charge of the presentation use, in most cases, both the keyboard of the computer and the mouse for the slides, which somewhat burdens the direct (face to face) communication with the audience. Of course, the invention of the wireless mouse allowed a certain freedom in controlling the digital materials from a distance, although a certain impediment remains: in order to be used, the mouse must be placed on a flat surface. This article aims at creating a new application prototype that will manipulate, only through the means of a light-beam instrument (a laser beam), both the actions of the mouse and some of the elements offered by the keyboard in a given application or presentation. The light beam will be "connected" to a computing system only through the images captured by a simple webcam.

Full Text Available Interacting particle methods are increasingly used to sample from complex high-dimensional distributions. They have found a wide range of applications in applied probability, Bayesian statistics and information engineering. Rigorously understanding these new Monte Carlo simulation tools leads to fascinating mathematics related to Feynman-Kac path integral theory and its interacting particle interpretations. In these lecture notes, we provide a pedagogical introduction to the stochastic modeling and the theoretical analysis of these particle algorithms. We also illustrate these methods through several applications including random walk confinements, particle absorption models, nonlinear filtering, stochastic optimization, combinatorial counting and directed polymer models.
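A minimal interacting particle algorithm for one of the listed applications, a random walk confinement (particle absorption) model with mutation, selection and resampling steps, might look like this (illustrative sketch; all parameters are arbitrary):

```python
import math
import random

def confined_walk_survival(n_particles=5000, n_steps=30, step=0.3, seed=7):
    """Interacting particle estimate of the probability that a Gaussian
    random walk started at 0 stays inside (-1, 1) for n_steps steps:
    a Feynman-Kac absorption model with multinomial resampling
    (illustrative sketch)."""
    rng = random.Random(seed)
    particles = [0.0] * n_particles
    log_survival = 0.0
    for _ in range(n_steps):
        # Mutation: one random-walk step for every particle.
        moved = [x + rng.gauss(0.0, step) for x in particles]
        # Selection: absorbed particles die; record the survival fraction,
        # whose product over steps is the Feynman-Kac normalizing constant.
        alive = [x for x in moved if -1.0 < x < 1.0]
        log_survival += math.log(len(alive) / n_particles)
        # Resampling: rebuild the population from the survivors.
        particles = [rng.choice(alive) for _ in range(n_particles)]
    return math.exp(log_survival)

p = confined_walk_survival()
# p estimates P(walk stays in (-1, 1) for all 30 steps). A crude Monte
# Carlo over whole paths would leave very few surviving samples.
```

The same mutation/selection/resampling skeleton underlies the nonlinear filtering and stochastic optimization applications mentioned above, with the absorption indicator replaced by a likelihood or fitness weight.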

Many large simulations may be required to assess the performance of Yucca Mountain as a possible site for the nation's first high-level nuclear waste repository. A boundary integral equation method (BIEM) is described for numerical analysis of quasilinear steady unsaturated flow in homogeneous material. The applicability of the exponential model for the dependence of hydraulic conductivity on pressure head is discussed briefly. This constitutive assumption is at the heart of the quasilinear transformation. Materials which display a wide distribution in pore size are described reasonably well by the exponential. For materials with a narrow range in pore size, the exponential is suitable over more limited ranges in pressure head. The numerical implementation of the BIEM is used to investigate the infiltration from a strip source to a water table. The net infiltration of moisture into a finite-depth layer is well described by results for a semi-infinite layer if αD > 4, where α is the sorptive number and D is the depth to the water table. The distribution of moisture exhibits a similar dependence on αD. 11 refs., 4 figs.

An enormous array of problems encountered by scientists and engineers are based on the design of mathematical models using many different types of ordinary differential, partial differential, integral, and integro-differential equations. Accordingly, the solutions of these equations are of great interest to practitioners and to science in general. Presenting a wealth of cutting-edge research by a diverse group of experts in the field, Integral Methods in Science and Engineering: Computational and Analytic Aspects gives a vivid picture of both the development of theoretical integral techniques

Josephson junction integrated circuits of the current injection type and magnetically controlled type utilize a superconductive layer that forms both the Josephson junction electrode for the Josephson junction devices on the integrated circuit and a ground plane for the integrated circuit. Large area Josephson junctions are utilized for effecting contact to lower superconductive layers, and islands are formed in superconductive layers to provide isolation between the ground plane function and the Josephson junction electrode function as well as to effect crossovers. A superconductor-barrier-superconductor trilayer patterned by local anodization is also utilized with additional layers formed thereover. Methods of manufacturing the embodiments of the invention are disclosed.

We propose a variational method for integrating information obtained from a circular fringe pattern. The proposed method is a suitable choice for objects with radial symmetry. First, we analyze the information contained in the fringe pattern captured by the experimental setup, and then formulate the problem of recovering the wavefront using techniques from the calculus of variations. The performance of the method is demonstrated by numerical experiments with both synthetic and real data.

Full Text Available The article describes a method of interactive learning based on educational integrating projects. Examples of the content of such projects are presented for disciplines related to the study of information and Internet technologies and their application in management.

Overpack, a high-level radioactive waste package for geological disposal, seals vitrified waste and, in line with Japan's waste management program, is required to isolate it from contact with groundwater for 1,000 years. In this study, the TIG (Tungsten Inert Gas) welding method, a typical arc welding method widely used in various industries, was examined for its applicability to sealing a carbon steel overpack lid with a thickness of 190 mm. Welding conditions and welding parameters were examined for multi-layer welding in a narrow gap for four different groove depths. Weld joint tests were conducted, and weld flaws, macro- and microstructure, and mechanical properties were assessed against tentatively applied criteria for weld joints. Measurement and numerical calculation of residual stress were also conducted, and the tendency of the residual stress distribution was discussed. These test results were compared with the basic requirements of the welding method for overpack which were pointed out in our first report. It is assessed that the TIG welding method has the potential to meet the requirements for completing the final closure of an overpack with a maximum thickness of 190 mm. (author)

Selecting the best mining method among many alternatives is a multicriteria decision making problem. The aim of this paper is to demonstrate the implementation of an integrated approach that employs AHP and PROMETHEE together for selecting the most suitable mining method for the "Coka Marin" underground mine in Serbia. The related problem includes five possible mining methods and eleven criteria to evaluate them. The criteria are carefully chosen to cover the most important parameters that impact the mining method selection, such as geological and geotechnical properties, economic parameters and geographical factors. The AHP is used to analyze the structure of the mining method selection problem and to determine the weights of the criteria, and the PROMETHEE method is used to obtain the final ranking and to perform a sensitivity analysis by changing the weights. The results have shown that the proposed integrated method can be successfully used in solving mining engineering problems.
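The AHP weighting step can be sketched as follows: criteria weights are taken as the principal eigenvector of a reciprocal pairwise-comparison matrix, here computed by power iteration on hypothetical judgment values rather than the paper's actual eleven criteria:

```python
def ahp_weights(pairwise, iters=100):
    """Criteria weights from an AHP pairwise-comparison matrix via power
    iteration toward the principal eigenvector (illustrative sketch)."""
    n = len(pairwise)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(pairwise[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]   # normalize so the weights sum to 1
    return w

# Hypothetical 3-criteria judgments on Saaty's 1-9 scale (reciprocal
# matrix): geology judged 3x as important as economics and 5x as
# important as geography.
P = [[1.0,     3.0, 5.0],
     [1 / 3.0, 1.0, 2.0],
     [1 / 5.0, 0.5, 1.0]]
w = ahp_weights(P)
# Weights sum to 1, with geology ranked highest.
```

In the full method these weights feed the PROMETHEE preference functions, and the sensitivity analysis mentioned above amounts to perturbing `w` and re-ranking.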

Full Text Available Recent Zika outbreaks in South America, accompanied by unexpectedly severe clinical complications, have brought much interest in fast and reliable screening methods for ZIKV (Zika virus) identification. Reverse-transcriptase polymerase chain reaction (RT-PCR) is currently the method of choice to detect ZIKV in biological samples. This approach, nonetheless, demands a considerable amount of time and resources such as kits and reagents that, in endemic areas, may result in a substantial financial burden over affected individuals and health services, veering away from RT-PCR analysis. This study presents a powerful combination of high-resolution mass spectrometry and a machine-learning prediction model for data analysis to assess the existence of ZIKV infection across a series of patients that bear similar symptomatic conditions, but not necessarily are infected with the disease. By using mass spectrometric data that are inputted with the developed decision-making algorithm, we were able to provide a set of features that work as a "fingerprint" for this specific pathophysiological condition, even after the acute phase of infection. Since both mass spectrometry and machine learning approaches are well-established and largely utilized tools within their respective fields, this combination of methods emerges as a distinct alternative for clinical applications, providing a faster and more accurate diagnostic screening with improved cost-effectiveness when compared to existing technologies.

Virtual reality (VR) tools have already been developed and deployed in the nuclear industry, including in nuclear power plant construction, project management, equipment and system design, and training. Recognized as powerful tools for, inter alia, integration of data, simulation of activities, design of facilities, validation of concepts and mission planning, their application in nuclear safeguards is still very limited. However, VR tools may eventually offer transformative potential for evolving the future safeguards system to be more fully information-driven. The paper focuses especially on applications in the area of training that have been underway in the Department of Safeguards of the International Atomic Energy Agency. It also outlines future applications envisioned for safeguards information and knowledge management, and information-analytic collaboration. The paper identifies some technical and programmatic pre-requisites for realizing the integrative potential of VR technologies. If developed with an orientation to integrating applications through compatible platforms, software, and models, virtual reality tools offer the long-term potential of becoming a real 'game changer,' enabling a qualitative leap in the efficiency and effectiveness of nuclear safeguards. The IAEA invites Member States, industry, and academia to make proposals as to how such integrating potential in the use of virtual reality technology for nuclear safeguards could be realized. (author)

This book focuses on one- and multi-dimensional linear integral and discrete Gronwall-Bellman type inequalities. It provides a useful collection and systematic presentation of known and new results, as well as many applications to differential (ODE and PDE), difference, and integral equations. With this work the author fills a gap in the literature on inequalities, offering an ideal source for researchers in these topics. The present volume is part 1 of the author’s two-volume work on inequalities. Integral and discrete inequalities are a very important tool in classical analysis and play a crucial role in establishing the well-posedness of the related equations, i.e., differential, difference and integral equations.

High-contrast imaging instruments are now being equipped with integral field spectrographs (IFSs) to facilitate the detection and characterization of faint substellar companions. Algorithms currently envisioned to handle IFS data, such as the Locally Optimized Combination of Images (LOCI) algorithm, rely on aggressive point-spread function (PSF) subtraction, which is ideal for initially identifying companions but results in significantly biased photometry and spectroscopy owing to unwanted mixing with residual starlight. This spectrophotometric issue is further complicated by the fact that algorithmic color response is a function of the companion's spectrum, making it difficult to calibrate the effects of the reduction without using iterations involving a series of injected synthetic companions. In this paper, we introduce a new PSF calibration method, which we call 'damped LOCI', that seeks to alleviate these concerns. By modifying the cost function that determines the weighting coefficients used to construct PSF reference images, and also forcing those coefficients to be positive, it is possible to extract companion spectra with a precision that is set by calibration of the instrument response and transmission of the atmosphere, and not by post-processing. We demonstrate the utility of this approach using on-sky data obtained with the Project 1640 IFS at Palomar. Damped LOCI does not require any iterations on the underlying spectral type of the companion, nor does it rely on priors involving the chromatic and statistical properties of speckles. It is a general technique that can readily be applied to other current and planned instruments that employ IFSs.
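The key ingredient of forcing the PSF-combination coefficients to be non-negative can be illustrated with a toy non-negative least-squares solve (a projected-gradient sketch with invented data; damped LOCI additionally modifies the LOCI cost function itself):

```python
def nonneg_lsq(A, b, iters=5000, lr=None):
    """Solve min ||A c - b||^2 subject to c >= 0 by projected gradient
    descent. The non-negativity constraint on the PSF-combination
    coefficients is one ingredient of damped LOCI (toy sketch; a
    Lawson-Hanson-style active-set solver would be used in practice)."""
    m, n = len(A), len(A[0])
    # Crude step size: 1 / (Frobenius norm squared) bounds 1 / lambda_max.
    lr = lr or 1.0 / sum(A[i][j] ** 2 for i in range(m) for j in range(n))
    c = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * c[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # Gradient step, then projection onto the non-negative orthant.
        c = [max(0.0, c[j] - lr * g[j]) for j in range(n)]
    return c

# Toy "target PSF" b built as a mix of the first two reference columns;
# the third column is a decoy that an unconstrained fit could exploit.
A = [[1.0, 0.0, -0.5],
     [0.0, 1.0, -0.5],
     [1.0, 1.0,  1.0]]
b = [0.7, 0.3, 1.0]
c = nonneg_lsq(A, b)
# All coefficients come out >= 0 by construction.
```

Keeping the coefficients non-negative prevents reference frames from being subtracted with negative weights, which is what makes the extracted companion photometry calibratable in the scheme described above.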

The development of modern accelerator and free-electron laser projects requires consideration of the wake fields of very short bunches in arbitrary three-dimensional structures. Obtaining the wake numerically by direct integration is difficult, since it takes a long time for the scattered fields to catch up with the bunch. On the other hand, no general algorithm for indirect wake field integration is available in the literature so far. In this paper we review the known indirect methods for computing wake potentials in rotationally symmetric and cavity-like three-dimensional structures. For arbitrary three-dimensional geometries we introduce several new techniques and test them numerically. (orig.)

Full Text Available Abstract Background Scientific data integration and computational service discovery are challenges for the bioinformatic community. This process is made more difficult by the separate and independent construction of biological databases, which makes the exchange of data between information resources difficult and labor intensive. A recently described semantic web protocol, the Simple Semantic Web Architecture and Protocol (SSWAP; pronounced "swap"), offers the ability to describe data and services in a semantically meaningful way. We report how three major information resources (Gramene, SoyBase and the Legume Information System [LIS]) used SSWAP to semantically describe selected data and web services. Methods We selected high-priority Quantitative Trait Locus (QTL), genomic mapping, trait, phenotypic, and sequence data and associated services such as BLAST for publication, data retrieval, and service invocation via semantic web services. Data and services were mapped to concepts and categories as implemented in legacy and de novo community ontologies. We used SSWAP to express these offerings in Web Ontology Language (OWL), Resource Description Framework (RDF) and eXtensible Markup Language (XML) documents, which are appropriate for their semantic discovery and retrieval. We implemented SSWAP services to respond to web queries and return data. These services are registered with the SSWAP Discovery Server and are available for semantic discovery at http://sswap.info. Results A total of ten services delivering QTL information from Gramene were created. From SoyBase, we created six services delivering information about soybean QTLs, and seven services delivering genetic locus information. For LIS we constructed three services, two of which allow the retrieval of DNA and RNA FASTA sequences, with the third service providing nucleic acid sequence comparison capability (BLAST). Conclusions The need for semantic integration technologies has preceded

Combinatorial Methods with Computer Applications provides in-depth coverage of recurrences, generating functions, partitions, and permutations, along with some of the most interesting graph and network topics, design constructions, and finite geometries. Requiring only a foundation in discrete mathematics, it can serve as the textbook in a combinatorial methods course or in a combined graph theory and combinatorics course. After an introduction to combinatorics, the book explores six systematic approaches within a comprehensive framework: sequences, solving recurrences, evaluating summation exp

This research monograph provides a synthesis of a number of statistical tests and measures, which, at first consideration, appear disjoint and unrelated. Numerous comparisons of permutation and classical statistical methods are presented, and the two methods are compared via probability values and, where appropriate, measures of effect size. Permutation statistical methods, compared to classical statistical methods, do not rely on theoretical distributions, avoid the usual assumptions of normality and homogeneity of variance, and depend only on the data at hand. This text takes a unique approach to explaining statistics by integrating a large variety of statistical methods, and establishing the rigor of a topic that to many may seem to be a nascent field in statistics. This topic is new in that it took modern computing power to make permutation methods available to people working in the mainstream of research. This research monograph addresses a statistically-informed audience, and can also easily serve as a ...

This paper presents methods used to integrate the parts constituting a product. A new relation-function concept and its structure are introduced to analyze the relationships of component parts. The relation function carries three types of information, which can be used to establish a relation-function structure. A relation-function structure of the analysis criteria was established to analyze and present the data, and the priority components determined by the analysis criteria can be integrated. The analysis criteria were divided based on their number and orientation, as well as their direct or indirect characteristics. The paper presents a design algorithm for component integration, which was applied to actual products to integrate the components inside each product. The proposed algorithm was then used in research to improve brake discs for bicycles, and an improved product reflecting the relation-function structure was actually created.

Full Text Available In this article, we consider the nonlinear Duffing-van der Pol-type oscillator system by means of the first integral method. This system has physical relevance as a model in certain flow-induced structural vibration problems, and includes the van der Pol oscillator and the damped Duffing oscillator, among others, as particular cases. First, we apply the Division Theorem for two variables in the complex domain, which is based on the ring theory of commutative algebra, to explore a quasi-polynomial first integral of an equivalent autonomous system. Then, by solving an algebraic system, we derive the first integral of the Duffing-van der Pol-type oscillator system under a certain parametric condition.
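For concreteness, a commonly studied form of the Duffing-van der Pol-type oscillator (the exact parametrization in the article may differ) is

```latex
\ddot{x} - \mu\,(1 - x^2)\,\dot{x} + \alpha x + \beta x^3 = 0,
\qquad\text{equivalently}\qquad
\dot{x} = y, \quad \dot{y} = \mu\,(1 - x^2)\,y - \alpha x - \beta x^3 .
```

Setting $\beta = 0$ recovers the van der Pol oscillator, while replacing the nonlinear damping term with a linear one gives the damped Duffing oscillator; the first integral method seeks a function $I(x, y)$ that is constant along trajectories of the autonomous system.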

This is a very interesting collection of introductory and review articles on the theory and applications of classical and quantum integrable systems. The book reviews several integrable systems such as the KdV equation, vertex models, RSOS and IRF models, spin chains, integrable differential equations, discrete systems, Ising, Potts and other lattice models and reaction-diffusion processes, as well as outlining major methods of solving integrable systems. These include Lax pairs, Baecklund and Miura transformations, the inverse scattering method, various types of the Bethe Ansatz, Painleve methods, the dbar method and fusion methods, to mention just a few. The book is divided into two parts, each containing five chapters. The first part is devoted to classical integrable systems and introduces the subject through the KdV equation, then proceeds through Painleve analysis, discrete systems and two-dimensional integrable partial differential equations, to culminate in the review of solvable lattice models in statistical physics, solved through the coordinate and algebraic Bethe Ansatz methods. The second part deals with quantum integrable systems, and begins with an outline of unifying approaches to quantum, statistical, ultralocal and non-ultralocal systems. The theory and methods of solving quantum integrable spin chains are then described. Recent developments in applying Bethe Ansatz methods in condensed matter physics, including superconductivity and nanoscale physics, are reviewed. The book concludes with an introduction to diffusion-reaction processes. Every chapter is devoted to a different subject and is self-contained, and thus can be read separately. A reader interested in classical methods of solitons, such as the methods of solving the KdV equation, can start from Chapter 1, while a reader interested in the Bethe Ansatz method can proceed immediately to Chapter 5, and so on. Thus the book should appeal and be useful to a wide range of theoretical

Configurators are applied widely to automate specification processes at companies. The literature describes the industrial application of configurators supporting both sales and engineering processes, where configurators supporting the engineering processes are described as more challenging. Moreover, configurators are commonly integrated with various IT systems within companies. The complexity of configurators is an important factor when it comes to performance, development and maintenance of the systems. A direct comparison of the complexity is based on the different application integrations to other IT systems. The research method adopted in the paper is based on a survey followed by interviews, where the unit of analysis is operating configurators within a company.

Dynamic Systems and Applications (07 2013), Aghalaya S. Vatsala, Bhuvaneswari Sambandham: Laplace Transform Method for Sequential Caputo Fractional... coupled minimal and maximal solutions for such an equation, and a numerical example is provided as an application of the theoretical results. ... Applications. The views, opinions and/or findings contained in this report are those of the author(s) and should not be construed as an official Department of

This study demonstrated the feasibility of using non-aqueous ion exchange liquid chromatography (NIELC) for the examination of the tetrahydrofuran (THF)-soluble distillation resids and THF-soluble whole oils derived from direct coal liquefaction. The technique can be used to separate the material into a number of acid, base, and neutral fractions. Each of the fractions obtained by NIELC was analyzed and then further fractionated by high-performance liquid chromatography (HPLC). The separation and analysis schemes are given in the accompanying report. With this approach, differences can be distinguished among samples obtained from different process streams in the liquefaction plant and among samples obtained at the same sampling location, but produced from different feed coals. HPLC was directly applied to one THF-soluble whole process oil without the NIELC preparation, with limited success. The direct HPLC technique used was directed toward the elution of the acid species into defined classes. The non-retained neutral and basic components of the oil were not analyzable by the direct HPLC method because of solubility limitations. Sample solubility is a major concern in the application of these techniques.

This paper focuses on the application of symplectic integrators to numerical fluid analysis. For this purpose, we introduce Hamiltonian particle dynamics to simulate fluid behavior. The method is based on both the Hamiltonian formulation of a system and particle methods, and is therefore called Hamiltonian Particle Dynamics (HPD). In this paper, an example HPD application, the behavior of an incompressible inviscid fluid, is solved. In order to improve the spatial accuracy of HPD, it is combined with CIVA, a highly accurate interpolation method, but the combined method suffers from the problem that the invariants of the system are not conserved in long-time computations. To solve this problem, symplectic time integrators are introduced, and their effectiveness is confirmed by numerical analyses. (author)
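Why a symplectic time integrator preserves invariants in long-time computation can be seen on the simplest Hamiltonian system, a unit harmonic oscillator (a toy stand-in, not the authors' HPD implementation): explicit Euler pumps energy into the system exponentially, while the symplectic Störmer-Verlet scheme keeps the energy error bounded for all time.

```python
def euler_step(q, p, dt):
    """Explicit Euler for H = (p^2 + q^2)/2 -- not symplectic."""
    return q + dt * p, p - dt * q

def verlet_step(q, p, dt):
    """Stormer-Verlet (leapfrog) -- symplectic, second order."""
    p_half = p - 0.5 * dt * q
    q_new = q + dt * p_half
    return q_new, p_half - 0.5 * dt * q_new

def energy(q, p):
    return 0.5 * (p * p + q * q)

dt, steps = 0.05, 20000              # long-time run: t = 1000
qe, pe = 1.0, 0.0                    # Euler trajectory
qv, pv = 1.0, 0.0                    # Verlet trajectory
for _ in range(steps):
    qe, pe = euler_step(qe, pe, dt)
    qv, pv = verlet_step(qv, pv, dt)

drift_euler = abs(energy(qe, pe) - 0.5)
drift_verlet = abs(energy(qv, pv) - 0.5)
# The symplectic scheme's energy error stays O(dt^2) forever;
# Euler multiplies the energy by (1 + dt^2) every step.
assert drift_verlet < 5e-3 < drift_euler
```

The same bounded-drift behavior is what symplectic integration buys for the invariants of the HPD equations.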

Full Text Available It is without doubt that most contemporary methods of language teaching are based on the Communicative Language Teaching (CLT) model. The principle that these methods share is that language can only be considered meaningful when it is not taught separately from its context, which is the context of the target-language speakers. In other words, second- and foreign-language teachers are encouraged to pursue methods of instruction that seek to simultaneously improve not only the linguistic knowledge of L2/foreign-language learners (such as vocabulary and grammar) but also their learning of the "appropriate" contextual meaning of this knowledge. To mention a few, these methods include integrated content and language learning instruction (ICLI), theme-based language instruction (TBI), task-based instruction (TBI) and integrated language and culture instruction (ILCI). The last method of instruction, which is the central subject of discussion in this study, is not commonly addressed by most researchers despite its growing popularity in most foreign-language teaching classrooms. It is mainly related to theme-based language instruction, since it advocates the teaching of language in tandem with topics in culture and civilisation and recognises the importance of both culture (as content) and language (as a medium of communication). This study unpacks this method, looking at its benefits and limitations when it comes to its application in the foreign-language classroom. The major concern of this study, therefore, is the pedagogical implications of this method in actual foreign-language teaching. To illustrate this, the study gives insights into the learning of German in Zimbabwe, with the University of Zimbabwe as a close example. The underlying position in this study is that, while the integrated language and culture instruction (ILCI) method is very attractive on paper, there are a number of obstacles that can hinder its practical application

This book provides an accessible introduction to the history, theory and techniques of informetrics. Divided into 14 chapters, it develops the content of informetrics through its theory, methods and applications; systematically analyzes the six basic laws and the theoretical basis of informetrics; and presents quantitative analysis methods such as citation analysis and computer-aided analysis. It also discusses applications in information resource management, information and library science, science of science, scientific evaluation and the forecasting field. Lastly, it describes a new development in informetrics: webometrics. Providing a comprehensive overview of the complex issues in today's environment, this book is a valuable resource for all researchers, students and practitioners in library and information science.

This paper describes the development of risk-based structural integrity assurance methods and their application to Pressurized Water Reactor (PWR) plant. In-service inspection is introduced as a way of reducing the failure probability of high risk sites and the latter are identified using reliability analysis; the extent and interval of inspection can also be optimized. The methodology is illustrated by reference to the aspect of reliability of weldments in PWR systems. (author)

Full Text Available Electromagnetic band-gap (EBG) surfaces have found applications in the mitigation of parallel-plate noise that occurs in high-speed circuits. A 2D periodic structure previously introduced by the same authors is dimensioned here to adjust the EBG parameters to application requirements by decreasing the phase velocity of the propagating waves. This adjustment corresponds to decreasing the lower bound of the EBG spectra. The positions of the EBGs in frequency are determined through full-wave simulation, by solving the corresponding eigenmode equation and imposing the appropriate boundary conditions on all faces of the unit cell. The operation of a device relying on a finite surface is also demonstrated. The obtained results show that the proposed structure is well suited to signal-integrity applications, as verified also by comparing the transmission along a finite structure of an ideal signal line and one with an induced discontinuity.

The article is devoted to the investigation of a polaron system on the basis of a variational approach formulated in the language of continuum integration. A variational method generalizing Feynman's to the case of nonzero total momentum of the system is formulated. The polaron state is investigated at zero temperature. The problem of the bound state of two polarons exchanging quanta of a scalar field, as well as the problem of polaron scattering by an external field in the Born approximation, are considered. The thermodynamics of the polaron system is investigated; namely, high-temperature expansions for the mean energy and the effective polaron mass are studied.

This work stems from an industrial problem of validating numerical solutions of ordinary differential equations modeling power systems. The problem is solved using asymptotic estimators of the global error. Four techniques are studied: the Richardson estimator (RS), Zadunaisky's technique (ZD), integration of the variational equation (EV), and solving for the correction (SC). We give some precision on the relative order of SC with respect to the order of the numerical method. A new variant of ZD is proposed that uses the modified equation. In the case of variable step sizes, it is shown that, under suitable restrictions on the step-size selection, ZD and SC remain valid. Moreover, some Runge-Kutta methods are shown to need fewer hypotheses on the step sizes to exhibit a valid order of convergence for ZD and SC. Numerical tests conclude this analysis, and industrial cases are given. Finally, an algorithm to avoid the a priori specification of the integration path for complex-time differential equations is proposed. (author)
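The Richardson (RS) estimator is the simplest of the four techniques: integrate once with step h and once with h/2, then scale the difference by the method's order. A minimal sketch for explicit Euler (order p = 1) on the test problem y' = y, which stands in for the industrial systems studied in the work:

```python
import math

def euler_solve(f, y0, t_end, n):
    """Explicit Euler with n uniform steps; returns y(t_end)."""
    y, t = y0, 0.0
    h = t_end / n
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

f = lambda t, y: y                 # y' = y, exact solution e^t
n = 100
y_h  = euler_solve(f, 1.0, 1.0, n)       # step h
y_h2 = euler_solve(f, 1.0, 1.0, 2 * n)   # step h/2

# Richardson estimate of the global error of the coarse solution:
# for a method of order p, err(h) ~ (y_{h/2} - y_h) * 2^p / (2^p - 1).
p = 1
err_est  = (y_h2 - y_h) * 2**p / (2**p - 1)
err_true = math.e - y_h
assert abs(err_est - err_true) < 0.05 * abs(err_true)
```

The leading error term cancels in the difference, so the estimate tracks the true global error to within its own higher-order terms.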

This paper describes the design of an integrated BiCMOS circuit for high-temperature applications. The circuit contains Pierce oscillators with automatic gain control, and measurements show that it operates up to 266 °C. The relative frequency variation up to 200 °C is less than 60 ppm, caused mainly by the crystal element itself. 4 refs., 7 figs.

An in-depth examination of the cutting edge of biometrics. This book fills a gap in the literature by detailing the recent advances and emerging theories, methods, and applications of biometric systems in a variety of infrastructures. Edited by a panel of experts, it provides comprehensive coverage of: multilinear discriminant analysis for biometric signal recognition; biometric identity authentication techniques based on neural networks; multimodal biometrics and design of classifiers for biometric fusion; feature selection and facial aging modeling for face recognition; geometrical and

Power Integrated Circuits (PICs) are one of the most rapidly growing branches of semiconductor technology. The PIC market has been forecast to grow from 660 million dollars in 1990 to 1658 million dollars in 1994; it has even been forecast that by the end of the 1990s the PIC market would correspond to the value of the whole semiconductor production of 1990. Automotive electronics will play the leading role in the development of standard PICs. Integrated motor drivers (36 V/4 A), smart integrated switches (60 V/30 A), solenoid drivers, and integrated switch-mode power supplies and regulators are the latest standard devices from PIC manufacturers. ASIC (Application-Specific Integrated Circuit) PIC solutions are needed for the same reasons as other ASIC devices: there are no suitable standard devices; a company has a lot of application know-how that should be kept inside the company; the size of the product must be reduced; and assembly costs are to be cut by decreasing the number of discrete devices. During the next few years the most probable ASIC PIC applications in Finland will be integrated solenoid and motor drivers, an integrated electronic lamp-ballast circuit and various sensor interface circuits. Application of PIC technologies to machines and actuators will increase strongly all over the world. This means that various PICs, either standard or full-custom ASIC circuits, will appear in many products that compete with the corresponding Finnish products. Therefore the development of PIC technologies must be followed carefully in order to be able to apply the latest developments in smart power technologies and their design methods without delay.

In this book, the authors cover the basic methods and advances within distance sampling that are most valuable to practitioners and in ecology more broadly. This is the fourth book dedicated to distance sampling. In the decade since the last book published, there have been a number of new developments. The intervening years have also shown which advances are of most use. This self-contained book covers topics from the previous publications, while also including recent developments in method, software and application. Distance sampling refers to a suite of methods, including line and point transect sampling, in which animal density or abundance is estimated from a sample of distances to detected individuals. The book illustrates these methods through case studies; data sets and computer code are supplied to readers through the book’s accompanying website. Some of the case studies use the software Distance, while others use R code. The book is in three parts. The first part addresses basic methods, the ...

Integral transport theory is widely used in practical reactor design calculations; however, it is computationally expensive for two-dimensional calculations of large media. In the first part of this report a new treatment is presented, based on the Galerkin method: inside each region the total flux is expanded over a three-component basis. Numerical comparison shows that this method can considerably reduce the computing time. The second part of the report is devoted to homogenization theory: a straightforward calculation of the fundamental mode for a heterogeneous cell is presented. A general presentation of the problem is given first; it is then simplified to plane geometry, and numerical results are presented.

Anthropomorphous robotic hands at microscales have been developed to receive information and perform tasks for biological applications. To emulate a human hand's dexterity, the microhand requires a master-slave interface with a wearable controller, force sensors, and perception displays for tele-manipulation. Recognizing the constraints and complexity imposed on feedback interfaces during miniaturization, this project addresses the need by creating an integrated cyber environment, incorporating sensors with a microhand, haptic/visual display, and object model, that emulates the human hand's psychophysical perception at microscale.

The monograph is written with a view to provide basic tools for researchers working in Mathematical Analysis and Applications, concentrating on differential, integral and finite difference equations. It contains many inequalities which have only recently appeared in the literature and which can be used as powerful tools and will be a valuable source for a long time to come. It is self-contained and thus should be useful for those who are interested in learning or applying the inequalities with explicit estimates in their studies.- Contains a variety of inequalities discovered which find numero

Full Text Available With increasing diversity in American schools, teachers need to be able to collaborate in teaching. University courses are widely considered a stage on which to demonstrate or model ways of collaborating. To respond to this call, three authors team-taught an integrated methods course at an urban public university in New York City. Following a qualitative research design, this study explored both instructors' and pre-service teachers' experiences with this course. Study findings indicate that collaborative teaching of an integrated methods course is feasible and beneficial to both instructors and pre-service teachers. For instructors, this collaborative teaching was a reciprocal learning process in which they were engaged in thinking about teaching in a broader and more innovative way. For pre-service teachers, the collaborative course not only helped them understand how three different subjects could be related to each other, but also provided opportunities for them to actually see how collaboration could take place in teaching. Their understanding of collaborative teaching was enhanced after the course.

Monolithic microwave integrated circuits (MMIC), which incorporate all the elements of a microwave circuit on a single semiconductor substrate, offer the potential for drastic reductions in circuit weight and volume and increased reliability, all of which make many new concepts in electronic circuitry for space applications feasible, including phased array antennas. NASA has undertaken an extensive program aimed at development of MMICs for space applications. The first such circuits targeted for development were an extension of work in hybrid (discrete component) technology in support of the Advanced Communication Technology Satellite (ACTS). It focused on power amplifiers, receivers, and switches at ACTS frequencies. More recent work, however, focused on frequencies appropriate for other NASA programs and emphasizes advanced materials in an effort to enhance efficiency, power handling capability, and frequency of operation or noise figure to meet the requirements of space systems.

An analysis and evaluation are presented for a new method of time integration for fluid dynamics proposed by Dietrich. The method, called the filtered leapfrog-trapezoidal (FLT) scheme, is analyzed for the one-dimensional constant-coefficient advection equation and is shown to have some advantages for quasi-steady flows. A modification (FLTW) using a weighted combination of FLT and leapfrog is developed which retains the advantages for steady flows, increases accuracy for time-dependent flows, and involves little coding effort. Merits and applicability are discussed.
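The context for such schemes is that plain leapfrog carries an undamped computational mode that must be filtered. The following sketch shows the generic filtered-leapfrog idea on the same constant-coefficient advection equation, using a Robert-Asselin time filter (a standard choice for illustration; Dietrich's FLT scheme instead blends in a trapezoidal step, and its details are in the paper):

```python
import numpy as np

# 1D linear advection u_t + c u_x = 0 on a periodic domain:
# leapfrog in time, centered differences in space, with a
# Robert-Asselin filter to damp the leapfrog computational mode.
nx, c, dt, nu = 128, 1.0, 0.004, 0.05    # nu: filter strength
dx = 1.0 / nx
x = np.arange(nx) * dx

def ddx(u):
    """Centered difference with periodic wrap-around."""
    return (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)

u_old = np.sin(2 * np.pi * x)            # state at t = 0
u = u_old - c * dt * ddx(u_old)          # forward-Euler starter step

nsteps = 500
for _ in range(nsteps):
    u_new = u_old - 2.0 * c * dt * ddx(u)       # leapfrog step
    u_old = u + nu * (u_new - 2.0 * u + u_old)  # Robert-Asselin filter
    u = u_new

t_end = (nsteps + 1) * dt
u_exact = np.sin(2 * np.pi * (x - c * t_end))
err = float(np.max(np.abs(u - u_exact)))
assert err < 0.05            # wave advected with small phase/amp error
```

The filter slightly damps the physical mode as well, which is precisely the accuracy trade-off that schemes like FLT and FLTW are designed to improve.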

There is increasing need to solve large-scale complex optimization problems in a wide variety of science and engineering applications, including designing telecommunication networks for multimedia transmission, planning and scheduling problems in manufacturing and military operations, or designing nanoscale devices and systems. Advances in technology and information systems have made such optimization problems more and more complicated in terms of size and uncertainty. Nested Partitions Method, Theory and Applications provides a cutting-edge research tool to use for large-scale, complex systems optimization. The Nested Partitions (NP) framework is an innovative mix of traditional optimization methodology and probabilistic assumptions. An important feature of the NP framework is that it combines many well-known optimization techniques, including dynamic programming, mixed integer programming, genetic algorithms and tabu search, while also integrating many problem-specific local search heuristics. The book uses...

Full Text Available Design strategies for parallel iterative algorithms are presented. In order to further study different tradeoff strategies in design criteria for integrated circuits, a 10 × 10 Jacobi Brent-Luk EVD array with the simplified μ-CORDIC processor is used as an example. The experimental results show that using the μ-CORDIC processor is beneficial for the design criteria, as it yields a smaller area, faster overall computation time, and lower energy consumption than the regular CORDIC processor. It is worth noting that the proposed parallel EVD method can be applied to real-time and low-power array signal processing algorithms performing beamforming or DOA estimation.
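The algorithm underneath such an array is the cyclic Jacobi eigenvalue decomposition: repeatedly annihilate off-diagonal entries of a symmetric matrix with 2×2 plane rotations. A plain floating-point sketch follows (the rotation angle here is computed with `arctan2` directly; in the hardware array each rotation is instead evaluated by a μ-CORDIC processor in shift-add arithmetic, and the rotations are applied in parallel rather than sequentially):

```python
import numpy as np

def jacobi_evd(A, sweeps=10):
    """Cyclic Jacobi eigenvalue decomposition of a symmetric matrix.
    Returns (eigenvalues, eigenvectors)."""
    A = A.copy().astype(float)
    n = A.shape[0]
    V = np.eye(n)
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < 1e-15:
                    continue
                # Rotation angle that annihilates A[p, q]
                theta = 0.5 * np.arctan2(2 * A[p, q], A[p, p] - A[q, q])
                c, s = np.cos(theta), np.sin(theta)
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q], J[q, p] = -s, s
                A = J.T @ A @ J          # zero out A[p, q] (and A[q, p])
                V = V @ J                # accumulate eigenvectors
    return np.diag(A), V

rng = np.random.default_rng(1)
M = rng.normal(size=(6, 6))
S = (M + M.T) / 2.0                      # symmetric test matrix
w, V = jacobi_evd(S)
assert np.allclose(V @ np.diag(w) @ V.T, S, atol=1e-8)
```

Jacobi sweeps converge quadratically once the off-diagonal norm is small, which is why a fixed, small number of sweeps suffices in a systolic implementation.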

In this monograph the authors present Newton-type, Newton-like and other numerical methods which involve fractional derivatives and fractional integral operators, studied here for the first time in the literature, all with the purpose of numerically solving equations whose associated functions may be non-differentiable in the ordinary sense. Among other things, this extends classical Newton-method theory, which requires the usual differentiability of the function. Chapters are self-contained and can be read independently, and several advanced courses can be taught from this book. An extensive list of references is given per chapter. The book's results are expected to find applications in many areas of applied mathematics, stochastics, computer science and engineering. As such, this monograph is suitable for researchers, graduate students, and seminars in the above subjects, and belongs in all science and engineering libraries.

The city of San Juan, in the central-western region of Argentina, has been the target of very destructive shallow earthquakes, some of which have not been associated with a clear structural source to date. The city is constantly growing beyond the valley where it is located, towards the area of the Eastern Precordillera, which is currently seeing increased socio-cultural activity. This study therefore focuses on increasing geological knowledge of the latter by studying the eastern flank of Sierra Chica de Zonda (Eastern Precordillera), whose proven neotectonic activity represents a geohazard. On the basis of the general geological setting, the neotectonic structures in the study area are related to a major active synclinal fold located just under the western sector of San Juan city. Geophysical potential methods (gravimetric and magnetometric surveys) were used to recognize contacts by contrast of density and magnetic susceptibility. To reduce the ambiguity of these methods, the gravi-magnetometric results were constrained using seismic and electrical tomographies. These contacts, where geophysical properties change abruptly, were interpreted as faults despite many of them having no surface expression, which is of great importance in assessing the seismic hazard of the study area.

In this paper, Numerov iterative methods for second-order integro-differential equations and systems of equations are constructed. Numerical examples show that this method outperforms the direct method (Gauss elimination) in CPU time and memory requirements. It is therefore an efficient method for solving the integro-differential equations arising in nuclear physics.
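The Numerov discretization itself is standard; a minimal sketch for the purely differential linear case y'' = f(x)·y is below (the test problem is illustrative and not drawn from the paper, whose equations also carry integral terms):

```python
import math

def numerov(f, x0, y0, y1, h, n):
    """Numerov integration of y'' = f(x) * y with uniform step h.
    y0 = y(x0), y1 = y(x0 + h); returns y at x0 + n*h.
    Uses (1 - h^2 f/12) y_{k+1} = 2 (1 + 5 h^2 f/12) y_k
                                  - (1 - h^2 f/12) y_{k-1}."""
    def g(x):
        return 1.0 - h * h * f(x) / 12.0

    ya, yb = y0, y1            # y at the two most recent grid points
    xa = x0
    for i in range(1, n):
        xb = x0 + i * h
        xc = x0 + (i + 1) * h
        yc = (2.0 * (1.0 + 5.0 * h * h * f(xb) / 12.0) * yb
              - g(xa) * ya) / g(xc)
        ya, yb, xa = yb, yc, xb
    return yb

# Test: y'' = -y with y(0) = 0, whose exact solution is sin(x)
h, n = 0.01, 157               # integrate out to x ~ pi/2
y = numerov(lambda x: -1.0, 0.0, 0.0, math.sin(h), h, n)
assert abs(y - math.sin(n * h)) < 1e-8
```

For equations of this form the scheme is fourth-order accurate globally despite using only three grid points per step, which is the efficiency the paper exploits.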

Design exploration and optimization is a large part of the initial engineering and design process. To evaluate the aerodynamic performance of a design, viscous Navier-Stokes solvers can be used; however, this method can prove overwhelmingly time-consuming when performing an initial design sweep. Therefore, another evaluation method is needed to provide accurate results at a faster pace. To accomplish this goal, a coupled viscous-inviscid method is used. This thesis proposes an integrated method for analyzing, evaluating, and optimizing an airfoil using a coupled viscous-inviscid solver along with a genetic algorithm to find the optimal candidate. The method proposed differs from prior optimization efforts in that it greatly broadens the design space, while allowing the optimization to search for the best candidate that will meet multiple objectives over a characteristic mission profile rather than over a single condition and single optimization parameter. The increased design space is due to the use of multiple parametric airfoil families, namely the NACA 4 series, the CST family, and the PARSEC family. Almost all possible airfoil shapes can be created with these three families, allowing all possible configurations to be included. This inclusion of multiple airfoil families addresses a possible criticism of prior optimization attempts: by focusing on only one airfoil family, they inherently limited the number of possible airfoil configurations. By using multiple parametric airfoils, it can be assumed that all reasonable airfoil configurations are included in the analysis and optimization and that a global and not a local maximum is found. Additionally, the method used is amenable to customization to suit any specific needs, as well as to including the effects of other physical phenomena or design criteria and/or constraints. This thesis found that an airfoil configuration that met multiple objectives could be found for a given set of nominal

Innovative use of technology can improve the way mathematics is taught. It can enhance students' learning of concepts through visualization. Visualization in mathematics refers to the use of text, pictures, graphs, and animations to hold learners' attention so that they learn the concepts. This paper describes the use of a purpose-built multimedia courseware as an effective tool for visual learning of mathematics. The focus is on applications of integration, a topic in Engineering Mathematics 2, a course offered to foundation students at Universiti Teknologi PETRONAS. A questionnaire was distributed to obtain feedback on the visual representations and on students' attitudes towards using visual representation as a learning tool. The questionnaire consists of three sections: courseware design (Part A), courseware usability (Part B), and attitudes towards using the courseware (Part C). The results showed that the use of visual representation benefited students in learning the topic.

Full Text Available The purpose of this research work is to examine: (1) why formal methods are necessary for today's software systems; (2) high-integrity systems built with the C-by-C (Correctness-by-Construction) methodology; and (3) an affordable methodology for applying formal methods in software engineering. The research process included reviews of the literature on the Internet, in publications, and in presentations at events. Among the research results it was found that: (1) nations, companies, and people depend increasingly on software systems; (2) there is growing demand on software engineering to increase social trust in software systems; (3) methodologies exist, such as C-by-C, that can provide that level of trust; (4) formal methods constitute a principle of computer science that can be applied in software engineering to make the software development process reliable; (5) software users have the responsibility to demand reliable software products; and (6) software engineers have the responsibility to develop reliable software products. Furthermore, it is concluded that: (1) more research is needed to identify and analyze other methodologies and tools that provide processes for applying formal software engineering methods; (2) formal methods provide an unprecedented ability to increase trust in the correctness of software products; and (3) as new methodologies and tools are developed, cost is ceasing to be a disadvantage of applying formal methods.

This work utilized advanced engineering in several fields to find solutions to the challenges presented by the integration of MEMS/NEMS with optoelectronics to realize a compact sensor system, comprised of a microfabricated sensor, VCSEL, and photodiode. By utilizing microfabrication techniques in the realization of the MEMS/NEMS component, the VCSEL and the photodiode, the system would be small in size and require less power than a macro-sized component. The work focused on two technologies, accelerometers and microphones, leveraged from other LDRD programs. The first technology was the nano-g accelerometer using a nanophotonic motion detection system (67023). This accelerometer had a measured sensitivity of approximately 10 nano-g. The Integrated NEMS and optoelectronics LDRD supported the nano-g accelerometer LDRD by providing advanced designs for the accelerometers, packaging, and a detection scheme to encapsulate the accelerometer, furthering the testing capabilities beyond bench-top tests. A fully packaged and tested die was never realized, but significant packaging issues were addressed and many resolved. The second technology supported by this work was the ultrasensitive directional microphone arrays for military operations in urban terrain and future combat systems (93518). This application utilized a diffraction-based sensing technique with different optical component placement and a different detection scheme from the nano-g accelerometer. The Integrated NEMS LDRD supported the microphone array LDRD by providing custom designs, VCSELs, and measurement techniques to accelerometers that were fabricated from the same operational principles as the microphones, but contained proof masses for acceleration transduction. These devices were packaged at the end of the work.

In a recent paper it was pointed out that the weakly singular integral equations of neutron transport can be quite conveniently solved by a method based on subtraction of the singularity. That paper was devoted entirely to simple one-dimensional isotropic-scattering, one-group problems. The present paper extends the previous work: in addition to a typical two-group anisotropic-scattering albedo problem in slab geometry, the method is also applied to an isotropic-scattering problem in x-y geometry. These results are compared with discrete S_N (ANISN or TWOTRAN-II) results, and for the problems considered here the proposed method is found to be quite effective. Thus, the method appears to hold considerable potential for future applications. (auth)

Stress that can be induced in an elastic continuum can be determined directly through the simultaneous application of the equilibrium equations and the compatibility conditions. In the literature, this direct stress formulation is referred to as the integrated force method. This method, which uses forces as the primary unknowns, complements the popular equilibrium-based stiffness method, which considers displacements as the unknowns. The integrated force method produces accurate stress, displacement, and frequency results even for modest finite element models. This version of the force method should be developed as an alternative to the stiffness method because the latter method, which has been researched for the past several decades, may have entered its developmental plateau. Stress plays a primary role in the development of aerospace and other products, and its analysis is difficult. Therefore, it is advisable to use both methods to calculate stress and eliminate errors through comparison. This paper examines the role of the integrated force method in analysis, animation and design.

Full Text Available In the paper, the authors survey integral representations of the Catalan numbers and the Catalan–Qi function, discuss equivalent relations between these integral representations, supply alternative and new proofs of several integral representations, collect applications of some integral representations, and present sums of several power series whose coefficients involve the Catalan numbers.

Integral Equation Methods for Electromagnetic and Elastic Waves is an outgrowth of several years of work. There have been no recent books on integral equation methods: there are books written on integral equations, but either they have been around for a while or they were written by mathematicians. Much of the knowledge in integral equation methods still resides in journal papers. With this book, important relevant knowledge on integral equations is consolidated in one place, and researchers need only read the pertinent chapters to gain the knowledge needed for integral eq

essential integrability features of an integrable differential equation is a .... With this in mind we first write x3(t) as a cubic polynomial in (xn−1,xn,xn+1) and then ..... coefficients, the quadratic equation in xn+N has real and distinct roots which in ...

Human induced pluripotent stem cells (hiPSCs) are useful in disease modeling and drug discovery, and they promise to provide a new generation of cell-based therapeutics. To date there has been no systematic evaluation of the most widely used techniques for generating integration-free hiPSCs. Here we compare Sendai-viral (SeV), episomal (Epi) and mRNA transfection methods using a number of criteria. All methods generated high-quality hiPSCs, but significant differences existed in aneuploidy rates, reprogramming efficiency, reliability and workload. We discuss the advantages and shortcomings of each approach, and present and review the results of a survey of a large number of human reprogramming laboratories on their independent experiences and preferences. Our analysis provides a valuable resource to inform the use of specific reprogramming methods for different laboratories and different applications, including clinical translation. PMID:25437882

The quantification of key variables such as oxygen, pH, carbon dioxide, glucose, and temperature provides essential information for biological and biotechnological applications and their development. Microfluidic devices offer an opportunity to accelerate research and development in these areas due to their small scale, and the fine control over the microenvironment, provided that these key variables can be measured. Optical sensors are well-suited for this task. They offer non-invasive and non-destructive monitoring of the mentioned variables, and the establishment of time-course profiles without the need for sampling from the microfluidic devices. They can also be implemented in larger systems, facilitating cross-scale comparison of analytical data. This tutorial review presents an overview of the optical sensors and their technology, with a view to support current and potential new users in microfluidics and biotechnology in the implementation of such sensors. It introduces the benefits and challenges of sensor integration, including their application for microbioreactors. Sensor formats, integration methods, device bonding options, and monitoring options are explained. Luminescent sensors for oxygen, pH, carbon dioxide, glucose and temperature are showcased. Areas where further development is needed are highlighted with the intent to guide future development efforts towards analytes for which reliable, stable, or easily integrated detection methods are not yet available.

Full Text Available In recent years the most popular subject in the information-systems area has been Enterprise Application Integration (EAI). It can be defined as the process of forming a standard connection between the different systems of an organization's information-system environment. Mergers, acquisitions, and partnerships among corporations are the major reasons for the popularity of Enterprise Application Integration. The main purpose is to solve application integration problems so that the similar systems of such corporations can continue working together over a longer period. With the help of XML technology, it is possible to find solutions to application integration problems both within a corporation and between corporations.

We give a short and elementary introduction to Lie group methods. A selection of applications of Lie group integrators is discussed. Finally, a family of symplectic integrators on cotangent bundles of Lie groups is presented, and the notion of discrete gradient methods is generalised to Lie groups.
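The basic idea can be illustrated on the simplest possible Lie group, U(1) (the unit complex numbers): a Lie group integrator updates by the group exponential and therefore never leaves the manifold, while explicit Euler drifts off it. This toy sketch is mine, not a scheme from the text:

```python
import cmath

# y' = i*omega*y evolves on the unit circle, the Lie group U(1).
# Lie-Euler update: y_{n+1} = exp(h*i*omega) * y_n  -- stays on the group.
# Explicit Euler:   y_{n+1} = (1 + h*i*omega) * y_n -- drifts off it.
omega, h, steps = 1.0, 0.1, 200
y_lie, y_euler = 1 + 0j, 1 + 0j
for _ in range(steps):
    y_lie *= cmath.exp(1j * omega * h)
    y_euler *= (1 + 1j * omega * h)
print(abs(y_lie), abs(y_euler))  # modulus stays 1 vs. grows past 2
```

The same exponential-update principle carries over to matrix groups such as SO(3), where staying exactly on the group is the whole point of the method.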

The modern state of approximate integral methods used in applications where the processes of heat conduction and heat and mass transfer are of first importance is considered. Integral methods have found wide utility in different fields of knowledge: problems of heat conduction with different heat-exchange conditions, simulation of thermal protection, Stefan-type problems, microwave heating of a substance, boundary-layer problems, simulation of fluid flow in a channel, thermal explosion, laser and plasma treatment of materials, simulation of the formation and melting of ice, inverse heat problems, determination of the temperature and thermal properties of nanoparticles and nanoliquids, and others. Moreover, polynomial solutions are of interest because the determination of a temperature (concentration) field is an intermediate stage in the mathematical description of any other process. The following main methods were investigated on the basis of the error norms: the Tsoi and Postol’nik methods, the method of integral relations, the Goodman integral method of heat balance, the improved Volkov integral method, the matched integral method, the modified Hristov method, the Mayer integral method, the Kudinov method of additional boundary conditions, the Fedorov boundary method, the method of weighted temperature function, and the integral method of boundary characteristics. It was established that the two last-mentioned methods are characterized by high convergence and frequently give solutions whose accuracy is not worse than that of numerical solutions.
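As a concrete instance of the heat-balance integral idea surveyed above, here is a minimal sketch of Goodman's method for a semi-infinite solid whose surface temperature is suddenly raised, compared with the exact erfc solution (the parameter values and names are my own):

```python
import math

# Goodman's heat-balance integral method: assume a quadratic profile
# T = Ts*(1 - x/delta)^2 within a penetration depth delta(t); integrating
# the heat equation over [0, delta] then gives delta(t) = sqrt(12*alpha*t).
def hbim_temperature(x, t, alpha, Ts):
    delta = math.sqrt(12 * alpha * t)
    return Ts * (1 - x / delta) ** 2 if x < delta else 0.0

def exact_temperature(x, t, alpha, Ts):
    # classical similarity solution for a step change in surface temperature
    return Ts * math.erfc(x / (2 * math.sqrt(alpha * t)))

alpha, Ts, t = 1e-5, 1.0, 50.0
for x in (0.0, 0.02, 0.04):
    print(hbim_temperature(x, t, alpha, Ts), exact_temperature(x, t, alpha, Ts))
```

The polynomial profile reproduces the exact field to within a few percent at a fraction of the cost, which is the trade-off the surveyed methods refine.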

This work describes a method for estimating the effluent concentrations of radioactive tracers in production wells, considering well-to-well injection tests and piston-like displacement of fluids in the reservoir. The model for tracer transport takes into account the effects of convection and hydrodynamic dispersion. (author)

This thesis deals with the integration of system design, identification, modeling and control. In particular, six interdisciplinary engineering problems are addressed and investigated. Theoretical results are established and applied to structural vibration reduction and engine control problems. First, the data-based LQG control problem is formulated and solved. It is shown that a state space model is not necessary to solve this problem; rather a finite sequence from the impulse response is the only model data required to synthesize an optimal controller. The new theory avoids unnecessary reliance on a model, required in the conventional design procedure. The infinite horizon model predictive control problem is addressed for multivariable systems. The basic properties of the receding-horizon implementation strategy are investigated and the complete framework for solving the problem is established. The new theory allows the accommodation of hard input constraints and time delays. The developed control algorithms guarantee closed-loop stability. A closed loop identification and infinite horizon model predictive control design procedure is established for engine speed regulation. The developed algorithms are tested on the Cummins Engine Simulator and desired results are obtained. A finite signal-to-noise ratio model is considered for noise signals. An information quality index is introduced which measures the essential information precision required for stabilization. The problems of minimum variance control and covariance control are formulated and investigated. Convergent algorithms are developed for solving the problems of interest. The problem of the integrated passive and active control design is addressed in order to improve the overall system performance. A design algorithm is developed, which simultaneously finds: (i) the optimal values of the stiffness and damping ratios for the structure, and (ii) an optimal output variance constrained stabilizing

The basic ideas of Monte Carlo techniques are presented. Random numbers, and their generation by the congruential methods that underlie Monte Carlo calculations, are shown. Monte Carlo techniques for solving integrals are discussed. The evaluation of a simple one-dimensional integral with a known answer by means of two different Monte Carlo approaches is discussed. The basic principles of simulating photon histories on a computer and reducing variance, and the current applications in medical physics, are commented on. (Author)
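The kind of one-dimensional example described above can be sketched as follows, estimating the integral of x^2 over [0, 1] (exactly 1/3) by two different Monte Carlo approaches; the specific integrand is my choice, not the paper's:

```python
import random

random.seed(0)
N = 200_000
f = lambda x: x * x  # integral of x^2 over [0, 1] is exactly 1/3

# Crude (mean-value) Monte Carlo: average f at uniform random points
crude = sum(f(random.random()) for _ in range(N)) / N

# Hit-or-miss: fraction of points in the unit square lying under the curve
hits = sum(1 for _ in range(N) if random.random() < f(random.random()))
hit_or_miss = hits / N

print(crude, hit_or_miss)  # both close to 0.3333...
```

For this integrand the crude estimator has the smaller variance, which is one reason mean-value sampling is usually preferred over hit-or-miss.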

A unique guide to the state of the art of tracking, classification, and sensor management. This book addresses the tremendous progress made over the last few decades in algorithm development and mathematical analysis for filtering, multi-target multi-sensor tracking, sensor management and control, and target classification. It provides for the first time an integrated treatment of these advanced topics, complete with careful mathematical formulation, clear description of the theory, and real-world applications. Written by experts in the field, Integrated Tracking, Classification, and Sensor Management provides readers with easy access to key Bayesian modeling and filtering methods, multi-target tracking approaches, target classification procedures, and large scale sensor management problem-solving techniques.

The invention relates to an integrated circuit and to a method of arbitration in a network on an integrated circuit. According to the invention, a method of arbitration in a network on an integrated circuit is provided, the network comprising a router unit, the router unit comprising a first input

Full Text Available The method of brackets is a collection of heuristic rules, some of which have been made rigorous, that provide a flexible, direct method for the evaluation of definite integrals. The present work uses this method to establish classical formulas due to Frullani which provide the values of a specific family of integrals. Some generalizations are established.
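Frullani's classical formula states that the integral of (f(ax) - f(bx))/x over (0, ∞) equals (f(0) - f(∞)) ln(b/a). A quick numerical sanity check with f(x) = exp(-x), my choice of test function, for which the value should be ln 2:

```python
import math

# Frullani: ∫_0^∞ (f(ax) - f(bx))/x dx = (f(0) - f(∞)) ln(b/a).
# With f(x) = exp(-x), a = 1, b = 2 the integral equals ln 2.
def frullani_quad(a, b, upper=50.0, n=50_000):
    h = upper / n
    total = 0.0
    for k in range(n):
        x = (k + 0.5) * h  # midpoint rule avoids the x = 0 endpoint
        total += (math.exp(-a * x) - math.exp(-b * x)) / x * h
    return total

print(frullani_quad(1.0, 2.0), math.log(2))  # ≈ 0.6931 for both
```

The integrand is finite at the origin (its limit is b - a), so an ordinary midpoint rule suffices once the tail beyond x = 50 is dropped as negligible.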

Different viewpoints on the asymptotic expansion of Feynman diagrams are reviewed. The relations between the field theoretic and diagrammatic approaches are sketched. The focus is on problems with large masses or large external momenta. Several recent applications also for other limiting cases are touched upon. Finally, the pros and cons of the different approaches are briefly discussed. (author)

Theoretical researches relating to the excitation spectrum of furan have been carried out for many years, and they reveal the problems that should be solved in order to predict highly reliable excitation energies. In general, it is difficult to obtain uniformly reliable calculation results for all excitation states, since different excitation states show different electron-correlation effects. Obtaining the electronic states of the ground state and the excited state and calculating their energy difference is the mainstream approach to the theoretical calculation of excitation energies. CASSCF/CASPT2, developed by Roos et al., is a typical method excellent in quantitative description. Recently, a comparison between direct CCLR and CASSCF/CASPT2 for calculating the excitation spectrum of furan was carried out using the same reference function. For Rydberg excitations, CC3, CAS, and CASPT2 show good agreement with each other. (NEDO)

In this paper we present an approach for designing interaction behaviour in service-oriented enterprise application integration. The approach enables business analysts to actively participate in the design of an integration solution. In this way, we expect that the solution meets its integration

The notion of integration with respect to the Euler characteristic and its generalizations are discussed: integration over the infinite-dimensional spaces of arcs and functions, motivic integration. The author describes applications of these notions to the computation of monodromy zeta functions, Poincare series of multi-index filtrations, generating series of classes of certain moduli spaces, and so on. Bibliography: 70 titles.

The present development of modern integrated circuits (ICs) is characterized by a number of critical factors that make their design and verification considerably more difficult than before. This dissertation addresses the important questions of modeling all electromagnetic behavior of features on

With the size reduction of nanoscale electronic devices, the heat generated per unit area in integrated circuits increases exponentially, and consequently thermal management in these devices is a very important issue. In addition, the heat generated by electronic devices mostly diffuses into the air as waste heat, which also makes thermoelectric energy conversion an important issue nowadays. In recent years, the thermal transport properties of nanoscale systems have attracted increasing attention in both experiments and theoretical calculations. In this review, we discuss various theoretical simulation methods for investigating thermal transport properties and take a glance at several interesting thermal transport phenomena in nanoscale systems. Our emphasis lies on the advantages and limitations of each calculation method, and on applications of nanoscale thermal transport and thermoelectric properties. Project supported by the National Key Research and Development Program of China (Grant No. 2017YFB0701602) and the National Natural Science Foundation of China (Grant No. 11674092).

In this paper, an adaptive multilevel algorithm for integral equations is described that has been developed with the Chandrasekhar H equation and its generalizations in mind. The algorithm maintains good performance when the Frechet derivative of the nonlinear map is singular at the solution, as happens in radiative transfer with conservative scattering and in critical neutron transport. Numerical examples that demonstrate the algorithm's effectiveness are presented
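For orientation, a plain single-level Picard iteration for the H-equation on a midpoint grid is sketched below; this is a simplification for illustration, not the adaptive multilevel algorithm of the abstract, and it is run away from the conservative case (albedo w < 1), where the iteration converges straightforwardly:

```python
# Chandrasekhar H-equation: H(mu) = 1 + (w/2)*mu*H(mu)*∫_0^1 H(nu)/(mu+nu) dnu.
# Solving algebraically for H(mu) gives the Picard update used below.
n, w = 120, 0.9
h = 1.0 / n
mu = [(k + 0.5) * h for k in range(n)]  # midpoint quadrature nodes
H = [1.0] * n
for _ in range(150):
    H = [1.0 / (1.0 - (w / 2) * mu[i] *
                sum(H[j] / (mu[i] + mu[j]) * h for j in range(n)))
         for i in range(n)]
moment = sum(H) * h
# sanity check against the exact identity ∫_0^1 H dmu = (2/w)(1 - sqrt(1-w))
print(moment)  # ≈ 1.5195 for w = 0.9
```

At the conservative limit w = 1 the Frechet derivative becomes singular at the solution, and simple iterations of this kind degrade; that is precisely the regime the multilevel algorithm of the abstract is designed to handle.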

Elastic rods are a ubiquitous coarse-grained model of semi-flexible biopolymers such as DNA, actin, and microtubules. The Worm-Like Chain (WLC) is the standard numerical model for semi-flexible polymers, but it is only a linearized approximation to the dynamics of an elastic rod, valid for small deflections; typically the torsional motion is neglected as well. In the standard finite-difference and finite-element formulations of an elastic rod, the continuum equations of motion are discretized in space and time, but it is then difficult to ensure that the Hamiltonian structure of the exact equations is preserved. Here we discretize the Hamiltonian itself, expressed as a line integral over the contour of the filament. This discrete representation of the continuum filament can then be integrated by one of the explicit symplectic integrators frequently used in molecular dynamics. The model systematically approximates the continuum partial differential equations, but has the same level of computational complexity as molecular dynamics and is constraint free. Numerical tests show that the algorithm is much more stable than a finite-difference formulation and can be used for high aspect ratio filaments, such as actin. We present numerical results for the deterministic and stochastic motion of single filaments.
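The explicit symplectic integration referred to above can be illustrated with velocity-Verlet (leapfrog) on the simplest Hamiltonian, a unit harmonic oscillator; the point is the bounded long-time energy error. This is a generic sketch, not the filament code itself:

```python
# Velocity-Verlet (leapfrog), a symplectic integrator of the kind used to
# evolve a discretized Hamiltonian; on H = p^2/2 + q^2/2 the energy error
# stays bounded over arbitrarily long runs instead of drifting.
def leapfrog(q, p, force, h, steps):
    for _ in range(steps):
        p += 0.5 * h * force(q)  # half kick
        q += h * p               # drift
        p += 0.5 * h * force(q)  # half kick
    return q, p

q0, p0 = 1.0, 0.0
q, p = leapfrog(q0, p0, lambda q: -q, 0.05, 10_000)
energy = 0.5 * (p * p + q * q)
print(energy)  # stays near the initial value 0.5
```

A non-symplectic scheme such as explicit Euler would show secular energy growth over the same 10,000 steps, which is why symplectic integrators are the method of choice for long filament simulations.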

Some of the exclusive features of the book are: every concept has been explained with the help of solved examples; working rules showing the various steps for the application of formulae have also been given; the diagrams and graphs have been neatly and correctly drawn in such a way that students gain a complete understanding of the problem simply by looking at them. Efforts have been made to make the subject thoroughly exhaustive, and nothing important has been omitted. Answers to all the problems have been thoroughly checked. It is a user-friendly book containing many solved problems and

Recent developments in the evaluation of direct time integration methods for the transient response analysis of nonlinear structures are presented. These developments, which are based on local stability considerations for an integrator, show that the interaction between temporal step size and the nonlinearities of structural systems has a pronounced effect on both the accuracy and the stability of a given time integration method. The resulting evaluation technique is applied to a model nonlinear problem in order to: 1) demonstrate that it eliminates the presently costly process of evaluating time integrators for nonlinear structural systems via extensive numerical experiments; 2) identify the desirable characteristics of time integration methods for nonlinear structural problems; 3) develop improved stiffly stable methods for application to nonlinear structures. Extension of the methodology to examine the interaction between a time integrator and the approximate treatment of nonlinearities (such as due to pseudo-force or incremental solution procedures) is also discussed. (Auth.)

Matrix completion has attracted significant recent attention in many fields including statistics, applied mathematics and electrical engineering. Current literature on matrix completion focuses primarily on independent sampling models under which the individual observed entries are sampled independently. Motivated by applications in genomic data integration, we propose a new framework of structured matrix completion (SMC) to treat structured missingness by design. Specifically, our proposed method aims at efficient matrix recovery when a subset of the rows and columns of an approximately low-rank matrix are observed. We provide theoretical justification for the proposed SMC method and derive lower bounds for the estimation errors, which together establish the optimal rate of recovery over certain classes of approximately low-rank matrices. Simulation studies show that the method performs well in finite samples under a variety of configurations. The method is applied to integrate several ovarian cancer genomic studies with different extents of genomic measurement, which enables us to construct more accurate prediction rules for ovarian cancer survival.

Highlights: • A new LCA integrated thermoeconomic approach is presented. • The new unit fuel cost is found 4.8 times higher than the classic method. • The newly defined parameter increased the sustainability index by 67.1%. • The case studies are performed for countries with different CO2 prices. - Abstract: Life cycle assessment (LCA) based thermoeconomic modelling has been applied to the evaluation of energy conversion systems since it provides more comprehensive and applicable assessment criteria. This study proposes an improved thermoeconomic method, named life cycle integrated thermoeconomic assessment (LCiTA), which combines the LCA based enviroeconomic parameters in the production steps of the system components and fuel with the conventional thermoeconomic method for energy conversion systems. A micro-cogeneration system is investigated and analyzed with the LCiTA method; the comparative studies show that the unit cost of fuel obtained with the LCiTA method is 3.8 times higher than with the conventional thermoeconomic model. It is also realized that the enviroeconomic parameters during the operation of the system components do not have significant impacts on the system streams, since the exergetic parameters are dominant in the thermoeconomic calculations. Moreover, the improved sustainability index is found to be roughly 67.2% higher than the previously defined sustainability index, suggesting that the enviroeconomic and thermoeconomic parameters decrease the impact of the exergy destruction in the sustainability index definition. To find feasible operating conditions for the micro-cogeneration system, different assessment strategies are presented. Furthermore, a case study for Singapore is conducted to see the impact of forecasted carbon dioxide prices on the thermoeconomic performance of the micro-cogeneration system.

With today's information explosion, many organizations are now able to access a wealth of valuable data. Unfortunately, most of these organizations find they are ill-equipped to organize this information, let alone put it to work for them. Gain a competitive advantage: employ data mining in research and forecasting; build models with data management tools and methodology optimization; gain sophisticated breakdowns and complex analysis through multivariate, evolutionary, and neural net methods; learn how to classify data and maintain quality; transform data into business acumen. Data Mining Methods and

In this paper, the authors present a digital system requirements specification method that has demonstrated a potential for improving the completeness of requirements while reducing ambiguity. It assists with making proper digital system design decisions, including the defense against specific digital system failure modes. It also helps define the technical rationale for all of the component and interface requirements. This approach is a procedural method that abstracts key features, which are then expanded in a partitioning that identifies and characterizes hazards and safety-system function requirements. The key system features are subjected to a hierarchy that progressively defines their detailed characteristics and components. This process produces a set of requirements specifications for the system and all of its components. Based on application to nuclear power plants, the approach described here uses two ordered domains: plant safety followed by safety system integrity. Plant safety refers to those systems defined to meet the safety goals for the protection of the public. Safety system integrity refers to systems defined to ensure that the system can meet the safety goals. Within each domain, a systematic process is used to identify hazards and define the corresponding means of defense and mitigation. In both domains, the approach and structure are focused on the completeness of information and on eliminating ambiguities in the generation of safety system requirements that will achieve the plant safety goals.

A new algorithm, based on systems of identical equalities with integral and differential boundary characteristics, is proposed for solving boundary-value problems of heat conduction in bodies of canonical shape with a Neumann boundary condition. Results of a numerical analysis of the accuracy of solving heat-conduction problems with variable boundary conditions using this algorithm are presented. The solutions obtained with it can be considered exact, because their errors comprise hundredths to ten-thousandths of a percent over a wide range of the problem parameters.

This contributed volume contains the research results of the Cluster of Excellence “Integrative Production Technology for High-Wage Countries”, funded by the German Research Society (DFG). The approach to the topic is genuinely interdisciplinary, covering insights from fields such as engineering, material sciences, economics and social sciences. The book contains coherent deterministic models for integrative product creation chains as well as harmonized cybernetic models of production systems. The content is structured into five sections: Integrative Production Technology, Individualized Production, Virtual Production Systems, Integrated Technologies, Self-Optimizing Production Systems and Collaboration Productivity. The target audience primarily comprises research experts and practitioners in the field of production engineering, but the book may also be beneficial for graduate students.

Full Text Available We establish new multiple iterated Volterra-Fredholm type integral inequalities, where the composite function w(u(s)) of the unknown function u with a nonlinear function w in the integrals of [Ma, QH, Pečarić, J: Estimates on solutions of some new nonlinear retarded Volterra-Fredholm type integral inequalities. Nonlinear Anal. 69 (2008) 393–407] is replaced by the composite functions w1(u(s)), w2(u(s)), …, wn(u(s)) of the unknown function u with different nonlinear functions w1, w2, …, wn, respectively. By adopting novel analysis techniques, the upper bounds of the embedded unknown functions are estimated explicitly. The derived results can be applied in the study of solutions of ordinary differential equations and integral equations.

The paper deals with stress integration in the material-point method. In order to avoid parasitic shear in bending, a formulation is proposed based on selective integration in the background grid that is used to solve the governing equations. The suggested integration scheme is compared to a traditional material-point-method computation in which the stresses are evaluated at the material points. The deformation of a cantilever beam is analysed, assuming elastic or elastoplastic material behaviour.

As the first part of numerical research on singular problems, a numerical method is proposed for singular integrals. It is shown that the procedure is quite powerful for physics calculations with singularities, such as the plasma dispersion function. Useful quadrature formulas for some classes of singular integrals are derived. In general, integrals with more complex singularities can also be dealt with easily by this method
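One standard device for such integrals is subtraction of the singularity: split off the singular part, integrate it analytically, and apply an ordinary quadrature rule to the bounded remainder. A minimal sketch for an inverse-square-root singularity (the test integrand is my choice; the paper's own formulas may differ):

```python
import math

# Singularity subtraction for I = ∫_0^1 f(x)/sqrt(x) dx:
# write f(x) = f(0) + (f(x) - f(0)); the first piece integrates exactly
# to 2*f(0), and the remainder is bounded near x = 0.
def midpoint(g, n):
    h = 1.0 / n
    return sum(g((k + 0.5) * h) for k in range(n)) * h

f = math.cos
n = 2_000
naive = midpoint(lambda x: f(x) / math.sqrt(x), n)
subtracted = 2 * f(0.0) + midpoint(lambda x: (f(x) - f(0.0)) / math.sqrt(x), n)
print(naive, subtracted)  # exact value ≈ 1.809048; 'subtracted' is far closer
```

Applying the quadrature rule directly to the singular integrand converges very slowly, while the subtracted form recovers the rule's full accuracy at identical cost.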

Electrodeposition has been around for a very long time. It is a process of coating a thin layer of one metal on top of a different metal to modify its surface properties, by donating electrons to ions in a solution. This bottom-up fabrication technique is versatile and can be applied to a wide range of potential applications. Electrodeposition has gained popularity in recent years due to its capability of fabricating one-dimensional nanostructures such as nanorods, nanowires, and nanotubes. In this paper, we present an overview of the fabrication and characterization of high-aspect-ratio nanostructures prepared using the nano-electrochemical deposition system set up in our laboratory. (author)

An apparatus and method for defect and failure-mechanism testing of integrated circuits (ICs) is disclosed. The apparatus provides an operating voltage, V_DD, to an IC under test and measures a transient voltage component, V_DDT, produced in response to switching transients that occur as test vectors are provided as inputs to the IC. The amplitude or time delay of the V_DDT signal can be used to distinguish between defective and defect-free (i.e., known good) ICs. The V_DDT signal is measured with a transient digitizer, a digital oscilloscope, or with an IC tester that is also used to input the test vectors to the IC. The present invention has applications for IC process development, for the testing of ICs during manufacture, and for qualifying ICs for reliability.

This textbook introduces readers to the basic concepts of quasi-Monte Carlo methods for numerical integration and to the theory behind them. The comprehensive treatment of the subject with detailed explanations comprises, for example, lattice rules, digital nets and sequences and discrepancy theory. It also presents methods currently used in research and discusses practical applications with an emphasis on finance-related problems. Each chapter closes with suggestions for further reading and with exercises which help students to arrive at a deeper understanding of the material presented. The book is based on a one-semester, two-hour undergraduate course and is well-suited for readers with a basic grasp of algebra, calculus, linear algebra and basic probability theory. It provides an accessible introduction for undergraduate students in mathematics or computer science.
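
The core idea of quasi-Monte Carlo, low-discrepancy point sets in place of pseudo-random ones, can be illustrated briefly. The sketch below is a generic example, not taken from the book: it estimates the integral of xy over the unit square (true value 1/4) with a 2-D Halton sequence built from van der Corput sequences in bases 2 and 3, alongside plain Monte Carlo for comparison.

```python
import random

def van_der_corput(i, base):
    """i-th element of the van der Corput sequence in the given base."""
    q, denom = 0.0, 1.0
    while i > 0:
        denom *= base
        i, r = divmod(i, base)
        q += r / denom
    return q

def halton_2d(n):
    """First n points of the 2-D Halton sequence (bases 2 and 3)."""
    return [(van_der_corput(i, 2), van_der_corput(i, 3)) for i in range(1, n + 1)]

f = lambda x, y: x * y           # true integral over [0,1]^2 is 0.25
n = 10000
qmc = sum(f(x, y) for x, y in halton_2d(n)) / n

random.seed(0)                   # plain Monte Carlo for comparison
mc = sum(f(random.random(), random.random()) for _ in range(n)) / n
print(qmc, mc)
```

For smooth integrands the quasi-Monte Carlo error decays roughly like (log N)^d / N, versus N^(-1/2) for plain Monte Carlo, which is the practical payoff the book develops rigorously.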

Vibration analysis has been used for years to provide a determination of the proper functioning of different types of machinery, including rotating machinery and rocket engines. A determination of a malfunction, if detected at a relatively early stage in its development, will allow changes in operating mode or a sequenced shutdown of the machinery prior to a total failure. Such preventative measures result in less extensive and/or less expensive repairs, and can also prevent a sometimes catastrophic failure of equipment. Standard vibration analyzers are generally rather complex, expensive, and of limited portability. They also usually result in displays and controls being located remotely from the machinery being monitored. Consequently, a need exists for improvements in accelerometer electronic display and control functions which are more suitable for operation directly on machines and which are not so expensive and complex. The invention includes methods and apparatus for detecting mechanical vibrations and outputting a signal in response thereto. The apparatus includes an accelerometer package having integral display and control functions. The accelerometer package is suitable for mounting upon the machinery to be monitored. Display circuitry provides signals to a bar graph display which may be used to monitor machine condition over a period of time. Control switches may be set which correspond to elements in the bar graph to provide an alert if vibration signals increase over the selected trip point. The circuitry is shock mounted within the accelerometer housing. The method provides for outputting a broadband analog accelerometer signal, integrating this signal to produce a velocity signal, integrating and calibrating the velocity signal before application to a display driver, and selecting a trip point at which a digitally compatible output signal is generated. The benefits of a vibration recording and monitoring system with controls and displays readily
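
The signal chain described, integrating a broadband acceleration signal to velocity and comparing it against a selected trip point, can be sketched numerically. The example below is an illustrative reconstruction, not the patented circuit; the amplitude, frequency, and trip-point values are invented. It trapezoid-integrates a synthetic sinusoidal acceleration and flags when the velocity amplitude exceeds the trip point.

```python
import math

# Synthetic vibration: velocity amplitude V = 5 mm/s at 50 Hz,
# so acceleration is a(t) = V*w*cos(w*t) with w = 2*pi*50.
V, freq = 5.0, 50.0
w = 2.0 * math.pi * freq
dt, steps = 1.0e-5, 2000          # one 20 ms period at 10 us sampling

accel = [V * w * math.cos(w * i * dt) for i in range(steps + 1)]

# Trapezoidal integration of acceleration -> velocity signal
vel, v = [0.0], 0.0
for i in range(steps):
    v += 0.5 * (accel[i] + accel[i + 1]) * dt
    vel.append(v)

peak = max(abs(x) for x in vel)   # should recover V = 5 mm/s
TRIP_POINT = 4.0                  # invented alert threshold, mm/s
alert = peak > TRIP_POINT
print(peak, alert)
```

In the apparatus this integration and comparison happen in analog circuitry inside the accelerometer housing; the sketch only mimics the processing steps in software.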

Full Text Available Nitrogenous atmospheric bases are thought to play a key role in the global nitrogen cycle, but their sources, transport, and sinks remain poorly understood. Of the many methods available to measure such compounds in ambient air, few meet the current need of being applicable to the complete range of potential analytes, and fewer still are convenient to implement using instrumentation that is standard to most laboratories. In this work, an integrated approach to measuring trace atmospheric gaseous nitrogenous bases has been developed and validated. The method uses a simple acid scrubbing step to capture and concentrate the bases as their phosphite salts, which are then derivatized and analyzed using GC/MS and/or LC/MS. The advantages of both techniques in the context of the present measurements are discussed. The approach is sensitive, selective, reproducible, and convenient to implement, and has been validated for different sampling strategies. The limits of detection for the families of tested compounds are suitable for ambient measurement applications (e.g., methylamine, 1 pptv; ethylamine, 2 pptv; morpholine, 1 pptv; aniline, 1 pptv; hydrazine, 0.1 pptv; methylhydrazine, 2 pptv), as supported by field measurements in an urban park and in the exhaust of on-road vehicles.

Introduction: This circular provides an overview of selected activities that were conducted within the U.S. Geological Survey (USGS) Integrated Methods Development Project, an interdisciplinary project designed to develop new tools and conduct innovative research requiring integration of geologic, geophysical, geochemical, and remote-sensing expertise. The project was supported by the USGS Mineral Resources Program, and its products and acquired capabilities have broad applications to missions throughout the USGS and beyond. In addressing challenges associated with understanding the location, quantity, and quality of mineral resources, and in investigating the potential environmental consequences of resource development, a number of field and laboratory capabilities and interpretative methodologies evolved from the project that have applications to traditional resource studies as well as to studies related to ecosystem health, human health, disaster and hazard assessment, and planetary science. New or improved tools and research findings developed within the project have been applied to other projects and activities. Specifically, geophysical equipment and techniques have been applied to a variety of traditional and nontraditional mineral- and energy-resource studies, military applications, environmental investigations, and applied research activities that involve climate change, mapping techniques, and monitoring capabilities. Diverse applied geochemistry activities provide a process-level understanding of the mobility, chemical speciation, and bioavailability of elements, particularly metals and metalloids, in a variety of environmental settings. Imaging spectroscopy capabilities maintained and developed within the project have been applied to traditional resource studies as well as to studies related to ecosystem health, human health, disaster assessment, and planetary science. Brief descriptions of capabilities and laboratory facilities and summaries of some

A renormalizable model of quantum field theory involving several independent coupling parameters, λ0, …, λn, and a normalization mass K is considered. If the model involves massive particles, a formulation of the renormalization group should be used in which the β-functions are independent of the masses. The aim of the reduction method is to reduce the model to a description in terms of a single coupling parameter. Although the reduction method does not work for the gauge couplings, it leads to reasonable mass constraints if applied to the Yukawa and the Higgs couplings. The underlying idea is that, whatever the fundamental interaction is going to be, eventually there is only one coupling which determines all parameters of the standard model. However, one should be skeptical about numerical results in the standard model: since the standard model is only an effective theory, its β-functions are only approximate, and changes in their lowest-order coefficients may have large effects on the reduction solutions.

A boundary integral equation (BIE) is developed for the application of the boundary element method to the multigroup neutron diffusion equations. The developed BIE contains no explicit scattering term; the scattering effects are taken into account by redefining the unknowns. Boundary elements of the linear and constant variety are utilised for validation of the developed boundary integral formulation

An analytical method has been developed and applied to the solution of the two-phase flow conservation equations. Test results from applying the model to the simulation of BWR transients are presented and compared with results obtained from explicit integration of the conservation equations. The tests show that analytical integration of the conservation equations eliminates the Courant limitation associated with the explicit Euler method. Results obtained with the analytical method using large time steps agreed well with those from the explicit method using time steps smaller than the Courant limit. This demonstrates that the analytical approach significantly improves numerical stability and computational efficiency.
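
The Courant limitation referred to here can be demonstrated on the simplest transport problem. The sketch below is a generic 1-D advection illustration, not the paper's two-phase model: it advances an explicit upwind scheme at Courant numbers below and above the stability limit C = c·Δt/Δx ≤ 1, and above the limit the solution blows up, which is exactly the time-step restriction an analytical integration scheme avoids.

```python
def upwind_advection(courant, nx=64, steps=30):
    """Explicit upwind step u_i <- (1-C)*u_i + C*u_{i-1} on a periodic grid.

    For 0 <= C <= 1 the update is a convex combination (max principle holds);
    beyond C = 1 it amplifies short-wavelength modes.
    """
    u = [0.0] * nx
    u[nx // 2] = 1.0                      # sharp initial pulse
    for _ in range(steps):
        # u[i - 1] wraps around at i = 0, giving periodic boundaries
        u = [(1.0 - courant) * u[i] + courant * u[i - 1] for i in range(nx)]
    return max(abs(x) for x in u)

print(upwind_advection(0.9))   # bounded: never exceeds the initial amplitude
print(upwind_advection(1.5))   # explodes: grows by orders of magnitude
```

An implicit or analytical update has no such restriction, at the cost of more work per step, which is the trade-off the paper's results quantify.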

This monograph on perturbation theory is based on various courses and lectures held by the authors at ETH Zurich and at the University of Texas, Austin. Its principal intention is to inform application-minded mathematicians, physicists and engineers about recent developments in this field. The reader is not assumed to have mathematical knowledge beyond what is presented in standard courses on analysis and linear algebra. Chapter I treats the transformations of systems of differential equations and the integration of perturbed systems in a formal way. These tools are applied in Chapter II to celestial mechanics and to the theory of tops and gyroscopic motion. Chapter III is devoted to the discussion of Hamiltonian systems of differential equations and exposes the algebraic aspects of perturbation theory, showing also the necessary modifications of the theory in case of singularities. The last chapter gives the mathematical justification for the methods developed in the previous chapters and investigates important questions such as error estimates for the solutions and asymptotic stability. Each chapter ends with useful comments and an extensive reference to the original literature.

A new momentum integral network method has been developed and tested in the MINET computer code. The method was developed in order to facilitate the transient analysis of complex fluid flow and heat transfer networks, such as those found in the balance of plant of power generating facilities. The method employed in the MINET code is a major extension of a momentum integral method reported by Meyer. Meyer integrated the momentum equation over several linked nodes, called a segment, and used a segment-average pressure, evaluated from the pressures at both ends. Nodal mass and energy conservation determine nodal flows and enthalpies, accounting for fluid compression and thermal expansion.

This book presents a variety of techniques for solving ordinary differential equations analytically and features a wealth of examples. Focusing on the modeling of real-world phenomena, it begins with a basic introduction to differential equations, followed by linear and nonlinear first-order equations and a detailed treatment of second-order linear equations. After presenting solution methods for the Laplace transform and power series, it concludes with systems of equations and an introduction to stability theory. To help readers practice the theory covered, two types of exercises are provided: those that illustrate the general theory, and others designed to expand on the text material. Detailed solutions to all the exercises are included. The book is excellently suited for use as a textbook for an undergraduate class (of all disciplines) in ordinary differential equations.

We introduce a class of iterated integrals that generalize multiple polylogarithms to elliptic curves. These elliptic multiple polylogarithms are closely related to similar functions defined in pure mathematics and string theory. We then focus on the equal-mass and non-equal-mass sunrise integrals, and we develop a formalism that enables us to compute these Feynman integrals in terms of our iterated integrals on elliptic curves. The key idea is to use integration-by-parts identities to identify a set of integral kernels, whose precise form is determined by the branch points of the integral in question; all iterated integrals on an elliptic curve can then be expressed in terms of these kernels. The flexibility of our approach leads us to expect that it will be applicable to a large variety of integrals in high-energy physics.

Many applications in materials science involve surface diffusion of elastically stressed solids. The study of singularity formation and the long-time behavior of such solid surfaces requires accurate simulations in both space and time. Here we present a high-order boundary integral method for an elastically stressed solid with axisymmetry due to surface diffusion. In this method, the boundary integrals for isotropic elasticity in axisymmetric geometry are approximated through modified alternating quadratu...

Full Text Available In this paper, a novel scheme is proposed to solve the first-kind Cauchy integral equation over a finite interval. For this purpose, the regularization method is considered. Then, the collocation method with Fibonacci basis functions is applied to solve the resulting second-kind singular integral equation. The error estimate of the proposed scheme is also discussed. Finally, some sample Cauchy integral equations stemming from the theory of airfoils in fluid mechanics are presented and solved to illustrate the importance and applicability of the given algorithm. The tables in the examples show the efficiency of the method.

The efficiency of the phase-integral method developed by the present authors has been shown both analytically and numerically in many publications. With the inclusion of supplementary quantities, closely related to new Stokes constants and obtained with the aid of comparison equation technique, important classes of problems in which transition points may approach each other become accessible to accurate analytical treatment. The exposition in this monograph is of a mathematical nature but has important physical applications, some examples of which are found in the adjoined papers. Thus, we would like to emphasize that, although we aim at mathematical rigor, our treatment is made primarily with physical needs in mind. To introduce the reader to the background of this book, we start by describing the phase-integral approximation of arbitrary order generated from an unspecified base function. This is done in Chapter 1, which is reprinted, after minor changes, from a review article. Chapter 2 is the re...

Recent developments in and around the SIESTA method of first-principles simulation of condensed matter are described and reviewed, with emphasis on (i) the applicability of the method for large and varied systems, (ii) efficient basis sets for the standards of accuracy of density-functional methods, (iii) new implementations, and (iv) extensions beyond ground-state calculations.

This paper summarizes the mathematical basis of the finite element method. Attention is drawn to the natural development of the method from an engineering analysis tool into a general numerical analysis tool. A particular application to the stress analysis of rubber materials is presented. Special advantages and issues associated with the method are mentioned. (author). 4 refs., 3 figs

Mixed methods research offers powerful tools for investigating complex processes and systems in health and health care. This article describes integration principles and practices at three levels in mixed methods research and provides illustrative examples. Integration at the study design level occurs through three basic mixed method designs—exploratory sequential, explanatory sequential, and convergent—and through four advanced frameworks—multistage, intervention, case study, and participato...

Full Text Available In this paper, we obtain some inequalities of Simpson's type for functions whose derivatives are quasi-preinvex in absolute value. Applications to some special means are considered.

A thesis is one of the major requirements for students pursuing a bachelor's degree. Finishing a thesis involves a long process that includes consultation, writing the manuscript, conducting the chosen method, seminar scheduling, searching for references, and appraisal by the board of mentors and examiners. Unfortunately, most students find it hard to match all the lecturers' free time so that they can sit together in a seminar room to examine the thesis. Therefore, seminar scheduling should be a top priority to solve. A manual mechanism for this task no longer fulfills the need. People on campus, including students, staff, and lecturers, demand a system in which all stakeholders can interact with each other and manage the thesis process without timetable conflicts. A branch of computer science named Management Information Systems (MIS) could provide a breakthrough in dealing with thesis management. This research applies a clustering method to distinguish certain categories using mathematical formulas. A system is then developed along with the method to create a well-managed tool providing key facilities such as seminar scheduling, consultation and review, thesis approval, assessment, and a reliable database of theses. The database plays an important role for present and future purposes.

An integrated, easy to use, economical package of microcomputer programs has been developed which can be used by small hydro developers to evaluate potential sites for small scale hydroelectric plants in British Columbia. The programs enable evaluation of sites located far from the nearest stream gauging station, for which streamflow data are not available. For each of the province's 6 hydrologic regions, a streamflow record for one small watershed is provided in the data base. The program can then be used to generate synthetic streamflow records and to compare results obtained by the modelling procedure with the actual data. The program can also be used to explore the significance of modelling parameters and to develop a detailed appreciation for the accuracy which can be obtained under various circumstances. The components of the program are an atmospheric model of precipitation; a watershed model that will generate a continuous series of streamflow data, based on information from the atmospheric model; a flood frequency analysis system that uses site-specific topographic data plus information from the atmospheric model to generate a flood frequency curve; a hydroelectric power simulation program which determines daily energy output for a run-of-river or reservoir storage site based on selected generation facilities and the time series generated in the watershed model; and a graphic analysis package that provides direct visualization of data and modelling results. This report contains a description of the programs, a user guide, the theory behind the model, the modelling methodology, and results from a workshop that reviewed the program package. 32 refs., 16 figs., 18 tabs.

An accessible introduction to the fundamentals of calculus needed to solve current problems in engineering and the physical sciences. Integration is an important function of calculus, and Introduction to Integral Calculus combines fundamental concepts with scientific problems to develop intuition and skills for solving mathematical problems related to engineering and the physical sciences. The authors provide a solid introduction to integral calculus and feature applications of integration, solutions of differential equations, and evaluation methods. With logical organization coupled with cle...

We present a synthesis of the methods used to solve the integral transport equation in neutronics. This formulation is primarily used to compute solutions in 2D for heterogeneous assemblies. Three kinds of methods are described: the collision probability method, the interface current method, and the current-coupling collision probability method. These methods do not seem to be the most effective in 3D. (author). 9 figs

Micro-resonators (MRs) have become a key element for integrated optical sensors due to their integration capability and their easy fabrication with low-cost polymer materials. Nowadays, there is a growing need for MRs as highly sensitive and selective functions, especially in the areas of food and health. The context of this work is to implement and study integrated micro-ring resonators devoted to sensing applications. They are fabricated by processing SU8 polymer as cor...

In this paper we propose a new approach for service-oriented enterprise application integration (EAI). Unlike current EAI solutions, which mainly focus on technological aspects, our approach allows business domain experts to get more involved in the integration process. First, we provide a technique

This book presents the theory and methods of GNSS remote sensing as well as its applications in the atmosphere, oceans, land and hydrology. It contains detailed theory and study cases to help the reader put the material into practice.

The combination of different intelligent methods is a very active research area in Artificial Intelligence (AI). The aim is to create integrated or hybrid methods that benefit from each of their components. The 3rd Workshop on “Combinations of Intelligent Methods and Applications” (CIMA 2012) was intended to become a forum for exchanging experience and ideas among researchers and practitioners who are dealing with combining intelligent methods either based on first principles or in the context of specific applications. CIMA 2012 was held in conjunction with the 22nd European Conference on Artificial Intelligence (ECAI 2012). This volume includes revised versions of the papers presented at CIMA 2012.

During the last five years, Fuzzy Logic has gained enormous popularity in both the academic and industrial worlds, breaking up the traditional resistance against change thanks to its innovative approach to problem formalization. The success of this new methodology is pushing the creation of a brand new class of devices, called Fuzzy Machines, to overcome the limitations of traditional computing systems when acting as Fuzzy Systems, and of adequate Software Tools to efficiently develop new applications. This paper presents a complete development environment for the definition of fuzzy-logic-based applications. The environment is coupled with a sophisticated software tool for semiautomatic synthesis and optimization of the rules, with stability verification. The architecture of WARP, a dedicated VLSI programmable chip that computes a fuzzy control process in real time, is then presented. The article is completed with two application examples, which have been carried out exploiting the aforementioned tools and devices.

A method for calculation of the integral characteristics of thermal plumes is proposed. The method allows for determination of the integral parameters of plumes based on speed measurements performed with omnidirectional low-velocity thermoanemometers, and includes a procedure for calculation of the directional velocity (upward component of the mean velocity). The method is applied for determination of the characteristics of an asymmetric thermal plume generated by a sitting person, and was validated in full-scale experiments in a climatic chamber with a thermal manikin as a simulator of a sitting...

Full Text Available Data fusion techniques have been widely researched and applied in the remote sensing field. In this paper, an integrated fusion method for remotely sensed images is presented. Unlike existing methods, the proposed method is able to integrate the complementary information in multiple temporal-spatial-spectral images. In order to represent and process the images in one unified framework, two general image observation models are first presented, and the maximum a posteriori (MAP) framework is then used to set up the fusion model. The gradient descent method is employed to solve for the fused image. The efficacy of the proposed method is validated using simulated images.

Holistic integrative medicine (HIM) is a new medical knowledge system, which is formed based on the theory of HIM. HIM treats people as a whole by combining the results of basic medical research, clinical practice and clinical research during the treatment process. The concept of HIM runs through the education and treatment of orthodontics. HIM is the trending norm of both modern medicine and orthodontics. This review is about the concept of HIM and the advantages and disadvantages of specialization. Moreover, this review also discusses the vital role of HIM in orthodontic treatment and development.

a model (Hasan and Raffensperger, 2006) to solve this problem: the integrated ... planning and labour allocation for that processing firm, but did not consider any fleet- .... the DBONP method actually finds such price information, and uses it.

A simple flow-concentration modelling method for integrating water quality and ... flow requirements are assessed for maintenance low flow, drought low flow ... the instream concentrations of chemical constituents that will arise from different ...

Full Text Available Integral foam has long been used in the production of polymer materials. Metal integral foam casting systems are obtained by transferring and adapting polymer injection technology. Metal integral foam produced by casting has a solid skin at the surface and a foam core. Producing near-net-shape parts reduces production expenses. Insurance companies nowadays want the automotive industry to use metallic foam parts because of their higher impact-energy absorption. In this paper, manufacturing processes for aluminum integral foam using casting methods are discussed.

With its balanced coverage of theory and applications along with standards and regulations, Risk Assessment: Theory, Methods, and Applications serves as a comprehensive introduction to the topic. The book serves as a practical guide to current risk analysis and risk assessment, emphasizing the possibility of sudden, major accidents across various areas of practice from machinery and manufacturing processes to nuclear power plants and transportation systems. The author applies a uniform framework to the discussion of each method, setting forth clear objectives and descriptions, while also shedding light on applications, essential resources, and advantages and disadvantages. Following an introduction that provides an overview of risk assessment, the book is organized into two sections that outline key theory, methods, and applications. * Introduction to Risk Assessment defines key concepts and details the steps of a thorough risk assessment along with the necessary quantitative risk measures. Chapters outline...

The Tau method is applied to obtain expansions, in terms of Chebyshev polynomials, which approximate the Hubbell rectangular source integral: I(a,b) = ∫_0^b (1/√(1+x²)) arctan(a/√(1+x²)) dx. This integral corresponds to the response of an omni-directional radiation detector situated over a corner of a plane isotropic rectangular source. A discussion of the error in the Tau method approximation follows.
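
A Chebyshev-based approximation of this integral is easy to prototype. The sketch below is not the Tau method itself but the closely related Clenshaw-Curtis approach, which interpolates the integrand at Chebyshev nodes and integrates the resulting series term by term; it evaluates I(a,b) for a = b = 1.

```python
import math

def chebyshev_integrate(f, a, b, n=16):
    """Integrate f over [a, b] via a degree-n Chebyshev interpolant
    (Clenshaw-Curtis quadrature). Exact for polynomials of degree <= n."""
    # Sample f at Chebyshev points mapped from [-1, 1] to [a, b]
    fx = [f(0.5 * (b - a) * math.cos(k * math.pi / n) + 0.5 * (b + a))
          for k in range(n + 1)]
    # Chebyshev coefficients via a discrete cosine transform
    # (trapezoid rule in the angle variable; endpoints get weight 1/2)
    c = []
    for j in range(n + 1):
        s = 0.5 * (fx[0] + fx[n] * math.cos(j * math.pi))
        s += sum(fx[k] * math.cos(j * k * math.pi / n) for k in range(1, n))
        c.append(2.0 * s / n)
    # ∫_{-1}^{1} T_j(x) dx = 2/(1-j²) for even j, 0 for odd j;
    # the first and last coefficients enter the series with weight 1/2.
    total = c[0]                          # (1/2)*c0 times ∫T_0 = 2
    for j in range(2, n + 1, 2):
        cj = 0.5 * c[j] if j == n else c[j]
        total += cj * 2.0 / (1 - j * j)
    return 0.5 * (b - a) * total          # affine map back to [a, b]

# Hubbell rectangular source integrand for a = 1
hubbell = lambda x: math.atan(1.0 / math.sqrt(1 + x * x)) / math.sqrt(1 + x * x)
print(chebyshev_integrate(hubbell, 0.0, 1.0))
```

Because the integrand is smooth on [0, b], the error of such Chebyshev approximations decays rapidly with the degree, which is the behavior the Tau-method error discussion quantifies.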

The age of young asteroid collisional families can sometimes be determined by using backwards n-body integrations of the solar system. This method is not used for discovering young asteroid families and is limited by the unpredictable influence of the Yarkovsky effect on individual specific asteroids over time. Since these limitations are not as important for objects in the Kuiper belt, Marcus et al. 2011 suggested that backwards integration could be used to discover and characterize collisional families in the outer solar system. But various challenges present themselves when running precise and accurate 4+ Gyr integrations of Kuiper Belt objects. We have created simulated families of Kuiper Belt Objects with identical starting locations and velocity distributions, based on the Haumea Family. We then ran several long-term test integrations to observe the effect of various simulation parameters on integration results. These integrations were then used to investigate which parameters are of enough significance to require inclusion in the integration. Thereby we determined how to construct long-term integrations that both yield significant results and require manageable processing power. Additionally, we have tested the use of backwards integration as a method of discovery of potential young families in the Kuiper Belt.

A brief review is presented of basic physical characteristics of laboratory, field and operating gamma methods, of their classifications and principles. The measuring instrumentation used and the current state of applications of nuclear gamma methods in coal and ore mining and related branches are described in detail. Principles and practical recommendations are given for safety at work when handling gamma sources. (B.S.)

In two preceding papers (Guidry et al 2013 Comput. Sci. Disc. 6 015001 and Guidry and Harris 2013 Comput. Sci. Disc. 6 015002), we have shown that when reaction networks are well removed from equilibrium, explicit asymptotic and quasi-steady-state approximations can give algebraically stabilized integration schemes that rival standard implicit methods in accuracy and speed for extremely stiff systems. However, we also showed that these explicit methods remain accurate but are no longer competitive in speed as the network approaches equilibrium. In this paper, we analyze this failure and show that it is associated with the presence of fast equilibration timescales that neither asymptotic nor quasi-steady-state approximations are able to remove efficiently from the numerical integration. Based on this understanding, we develop a partial equilibrium method to deal effectively with the approach to equilibrium and show that explicit asymptotic methods, combined with the new partial equilibrium methods, give an integration scheme that can plausibly deal with the stiffest networks, even in the approach to equilibrium, with accuracy and speed competitive with that of implicit methods. Thus we demonstrate that such explicit methods may offer alternatives to implicit integration of even extremely stiff systems and that these methods may permit integration of much larger networks than have been possible before in a number of fields. (paper)
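
The flavor of these algebraically stabilized explicit schemes can be shown on a single stiff equation. The sketch below is a minimal illustration of an explicit asymptotic update, not the networks or code of the cited papers: it integrates dy/dt = F - ky with k·dt >> 1, where the plain explicit Euler update diverges while the asymptotic update remains stable and relaxes to the equilibrium y = F/k.

```python
# Stiff linear kinetics dy/dt = F - k*y with equilibrium y_eq = F/k.
F, k = 1.0, 1.0e4
dt, steps = 1.0e-2, 50          # k*dt = 100: far beyond Euler stability
y_euler = y_asym = 1.0

for _ in range(steps):
    # Explicit Euler: stable only for k*dt < 2, so this diverges.
    y_euler = y_euler + dt * (F - k * y_euler)
    # Explicit asymptotic update: same right-hand side, but the stiff
    # decay is algebraically damped by the factor 1/(1 + k*dt).
    y_asym = y_asym + dt * (F - k * y_asym) / (1.0 + k * dt)

print(y_euler)   # astronomically large: the update is unstable
print(y_asym)    # close to the equilibrium F/k = 1e-4
```

The point of the partial equilibrium method is that near equilibrium even this damping is not enough on its own, because fast forward and reverse fluxes nearly cancel; removing the equilibrated pairs algebraically restores large stable time steps.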

Exponential integral (also known as the well function) is often used in hydrogeology to solve the Theis and Hantush equations. Many methods have been developed to approximate the exponential integral. Most of these methods are based on numerical approximations and are valid for a certain range of the argument value. This paper presents a new approach to approximate the exponential integral. The new approach is based on sampling methods. Three different sampling methods, Latin Hypercube Sampling (LHS), Orthogonal Array (OA), and Orthogonal Array-based Latin Hypercube (OA-LH), have been used to approximate the function. Different argument values, covering a wide range, have been used. The results of the sampling methods were compared with results obtained by Mathematica software, which was used as a benchmark. All three sampling methods converge to the result obtained by Mathematica, at different rates. It was found that the orthogonal array (OA) method has the fastest convergence rate compared with LHS and OA-LH. The root mean square error (RMSE) of OA was on the order of 1E-08. This method can be used with any argument value, and can be used to solve other integrals in hydrogeology such as the leaky aquifer integral.
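
A sampling-based estimate of the well function can be sketched in a few lines. This is an illustrative version (not the paper's exact scheme): the substitution t = u/x maps E1(u) = ∫_u^∞ e^(−t)/t dt onto the unit interval as ∫_0^1 e^(−u/x)/x dx, which is then integrated with Latin Hypercube Sampling, i.e. one stratified draw per equal-width bin of (0, 1).

```python
import math, random

def expint_lhs(u, n=200_000, seed=1):
    """Estimate E1(u) by LHS over the transformed integrand on (0, 1)."""
    rng = random.Random(seed)
    total = 0.0
    for i in range(n):
        x = (i + rng.random()) / n      # LHS in 1-D: one point per stratum
        total += math.exp(-u / x) / x   # exp underflows harmlessly to 0 near x=0
    return total / n

estimate = expint_lhs(1.0)
# Reference value: E1(1) = 0.219383934... (Abramowitz & Stegun, Table 5.1)
```

Stratification makes the estimate converge much faster than plain Monte Carlo on this smooth integrand.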

In this paper we address several topics relating to the development and implementation of volume integral and hybrid finite element methods for electromagnetic modeling. Comparisons of volume integral equation formulations with the finite element-boundary integral method are given in terms of accuracy... of vanishing divergence within the element but non-zero curl. In addition, a new domain decomposition is introduced for solving array problems involving several million degrees of freedom. A three-orders-of-magnitude CPU reduction is demonstrated for such applications.

Demands on numerical integration algorithms for astrodynamics applications continue to increase. Common methods, like explicit Runge-Kutta, meet the orbit propagation needs of most scenarios, but more specialized scenarios require new techniques to meet both computational efficiency and accuracy needs. This paper provides an extensive survey on the application of symplectic and collocation methods to astrodynamics. Both of these methods benefit from relatively recent theoretical developments, which improve their applicability to artificial satellite orbit propagation. This paper also details their implementation, with several tests demonstrating their advantages and disadvantages.
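
The key advantage of symplectic methods for orbit propagation can be demonstrated directly. The sketch below (toy Kepler problem in units with GM = 1, not a production propagator) compares the energy behaviour of velocity Verlet, a symplectic method, against explicit Euler: the symplectic error stays bounded while Euler's drifts secularly.

```python
import math

GM = 1.0

def accel(x, y):
    r3 = (x * x + y * y) ** 1.5
    return -GM * x / r3, -GM * y / r3

def energy(x, y, vx, vy):
    return 0.5 * (vx * vx + vy * vy) - GM / math.hypot(x, y)

def run(method, dt=0.01, nsteps=20_000):
    x, y, vx, vy = 1.0, 0.0, 0.0, 1.0      # circular orbit, E0 = -0.5
    e0 = energy(x, y, vx, vy)
    for _ in range(nsteps):
        ax, ay = accel(x, y)
        if method == "verlet":              # symplectic velocity Verlet
            vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
            x += dt * vx;        y += dt * vy
            ax, ay = accel(x, y)
            vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
        else:                               # explicit Euler (non-symplectic)
            x, y = x + dt * vx, y + dt * vy
            vx, vy = vx + dt * ax, vy + dt * ay
    return abs(energy(x, y, vx, vy) - e0)

err_verlet = run("verlet")   # bounded oscillation, O(dt^2)
err_euler = run("euler")     # secular growth over ~32 orbits
```

This bounded-energy property is what makes symplectic integrators attractive for the long propagation spans discussed in the survey.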

Lean production methods for eliminating waste and improving processes in manufacturing are now being applied in healthcare. As the author shows, the methods are appropriate for redesigning hospital facilities. When used in an integrated manner and employing teams of mostly clinicians, the methods produce facility designs that are custom-fit to patient needs and caregiver work processes, and reduce operational costs. The author reviews lean methods and an approach for integrating them in the redesign of hospital facilities. A case example of the redesign of an emergency department shows the feasibility and benefits of the approach.

Volume integral methods for solving nonlinear magnetostatics problems are considered in this paper. The integral method is discretized by a Galerkin technique. Estimates are given which show that the linearized problems are well conditioned and hence easily solved using iterative techniques. Comparisons of iterative algorithms with the elimination method of GFUN3D show that the iterative method gives an order of magnitude improvement in computational time as well as memory requirements for large problems. Computational experiments for a test problem as well as a double-layer dipole magnet are given. Error estimates for the linearized problem are also derived.

Ships, with their high consumption of fossil fuels to power their engines, are significant air polluters. Emission reduction methods therefore need to be implemented, and the aim of this paper is to assess the advantages and disadvantages of each emission reduction method. The benefits of the different methods are compared with their disadvantages and requirements to determine the applicability of such solutions. The methods studied herein are direct water injection, humid air motor, sea water scrubbing, diesel particulate filter, selective catalytic reduction, design of engine components, exhaust gas recirculation and engine replacement. The results of the study showed that the usefulness of each emission reduction method depends on the particular case and that an evaluation should be carried out for each ship. This study pointed out that methods to reduce ship emissions are available but that their applicability depends on each case.

A principal goal of the Project Integration Architecture (PIA) is to facilitate the meaningful inter-application transfer of application-value-added information. Such exchanging applications may be largely unrelated to each other except through their applicability to an overall project; however, the PIA effort recognizes as fundamental the need to make such applications cooperate despite wide disparities in either the fidelity of the analyses carried out or even the disciplines of the analysis. This paper discusses the approach and techniques applied and anticipated by the PIA project in treating this need.

The concept of geodiversity has rapidly gained the approval of scientists around the world (Wiedenbein 1993, Sharples 1993, Kiernan 1995, 1996, Dixon 1996, Eberhard 1997, Kostrzewski 1998, 2011, Gray 2004, 2008, 2013, Zwoliński 2004, Serrano, Ruiz-Flano 2007, Gordon et al. 2012). However, recognition of the problem is still at an early stage, and the concept is in effect not explicitly understood and defined (Najwer, Zwoliński 2014). Despite widespread use of the concept, little progress has been made in its assessment and mapping. Only within roughly the last decade have methods for geodiversity assessment and its visualisation been investigated, even though many have acknowledged the importance of geodiversity evaluation (Kozłowski 2004, Gray 2004, Reynard, Panizza 2005, Zouros 2007, Pereira et al. 2007, Hjort et al. 2015). Hitherto, only a few authors have taken up such methodological issues. Geodiversity maps are created for a variety of purposes and their methods are therefore quite manifold. The literature contains some examples of geodiversity maps applied for geotourism purposes, based mainly on geological diversity, in order to indicate the scale of an area's tourist attractiveness (Zwoliński 2010, Serrano and Gonzalez Trueba 2011, Zwoliński and Stachowiak 2012). In some studies, geodiversity maps were created and applied to investigate spatial or genetic relationships with the richness of particular natural environmental components (Burnett et al. 1998, Silva 2004, Jačková, Romportl 2008, Hjort et al. 2012, 2015, Mazurek et al. 2015, Najwer et al. 2014). There are also a few examples of geodiversity assessment for geoconservation and the efficient management and planning of natural protected areas (Serrano and Gonzalez Trueba 2011, Pellitero et al. 2011, 2014, Jaskulska et al. 2013, Melelli 2014, Martinez-Grana et al. 2015). The most popular method of assessing the diversity of abiotic components of the natural

Integrated omics is becoming a new channel for investigating the complex molecular system in modern biological science and sets a foundation for systematic learning for precision medicine. The statistical/machine learning methods that have emerged in the past decade for integrated omics are not only innovative but also multidisciplinary, with integrated knowledge in biology, medicine, statistics, machine learning, and artificial intelligence. Here, we review the nontrivial classes of learning methods from the statistical aspects and streamline these learning methods within the statistical learning framework. The intriguing findings from the review are that the methods used are generalizable to other disciplines with complex systematic structure, and that integrated omics is part of an integrated information science which has collated and integrated different types of information for inference and decision making. We review the statistical learning methods of exploratory and supervised learning from 42 publications. We also discuss the strengths and limitations of the extended principal component analysis, cluster analysis, network analysis, and regression methods. Statistical techniques such as penalization for sparsity induction when there are fewer observations than features, and the use of a Bayesian approach when there is prior knowledge to be integrated, are also included in the commentary. For completeness of the review, a table of currently available software and packages from 23 publications for omics is summarized in the appendix.
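
One of the simplest integration strategies the review covers, principal component analysis on a concatenated multi-omics matrix, can be sketched as follows. The data and block names here are synthetic and illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50                                               # samples
signal = rng.normal(size=(n, 1))                     # one shared latent factor

# Two synthetic "omics" blocks driven by the same factor plus noise.
gene_expr = signal @ rng.normal(size=(1, 30)) + 0.3 * rng.normal(size=(n, 30))
methylation = signal @ rng.normal(size=(1, 20)) + 0.3 * rng.normal(size=(n, 20))

X = np.hstack([gene_expr, methylation])              # integrate by concatenation
X = X - X.mean(axis=0)                               # centre each feature
U, s, Vt = np.linalg.svd(X, full_matrices=False)     # PCA via SVD

explained = s**2 / np.sum(s**2)                      # variance explained per PC
scores = U * s                                       # sample coordinates on PCs
```

Because both blocks share one latent factor, the first component captures most of the joint variance; in real applications the sparsity-inducing penalties mentioned above are layered on top of such decompositions.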

In order to diagnose transformer faults efficiently and accurately, a dynamic integrated fault diagnosis method based on a Bayesian network is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationships among abnormal working conditions, failure modes, and failure symptoms of transformers, aimed at obtaining the most probable failure mode. Then, considering that the evidence input into the diagnosis model is acquired gradually and that fault diagnosis in practice is a multistep process, a dynamic fault diagnosis mechanism is proposed based on the integrated fault diagnosis model. Unlike the existing one-step diagnosis mechanism, it includes a multistep evidence-selection process, which gives the most effective diagnostic test to be performed in the next step. It can therefore reduce unnecessary diagnostic tests and improve the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and the validity of the method is verified.
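
The Bayesian core of such a diagnosis model can be sketched with a toy example. All priors, failure modes and likelihoods below are illustrative numbers, not the paper's model; symptoms are assumed conditionally independent given the failure mode.

```python
# Prior probabilities of each failure mode (toy values).
priors = {"winding_fault": 0.3, "insulation_aging": 0.5, "core_fault": 0.2}

# P(symptom present | failure mode) -- illustrative values only.
likelihood = {
    "winding_fault":    {"gas_rise": 0.9, "temp_rise": 0.7},
    "insulation_aging": {"gas_rise": 0.6, "temp_rise": 0.2},
    "core_fault":       {"gas_rise": 0.3, "temp_rise": 0.8},
}

observed = {"gas_rise": True, "temp_rise": False}    # evidence gathered so far

def posterior(priors, likelihood, observed):
    """Bayes' rule with conditionally independent symptoms."""
    unnorm = {}
    for mode, p in priors.items():
        for sym, present in observed.items():
            p_sym = likelihood[mode][sym]
            p *= p_sym if present else (1.0 - p_sym)
        unnorm[mode] = p
    z = sum(unnorm.values())
    return {mode: p / z for mode, p in unnorm.items()}

post = posterior(priors, likelihood, observed)
best = max(post, key=post.get)       # most probable failure mode given evidence
```

In the dynamic mechanism described above, this posterior would be recomputed after each new diagnostic test, and the next test chosen to be maximally informative.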

Mixed methods research offers powerful tools for investigating complex processes and systems in health and health care. This article describes integration principles and practices at three levels in mixed methods research and provides illustrative examples. Integration at the study design level occurs through three basic mixed methods designs—exploratory sequential, explanatory sequential, and convergent—and through four advanced frameworks—multistage, intervention, case study, and participatory. Integration at the methods level occurs through four approaches. In connecting, one database links to the other through sampling. With building, one database informs the data collection approach of the other. When merging, the two databases are brought together for analysis. With embedding, data collection and analysis link at multiple points. Integration at the interpretation and reporting level occurs through narrative, data transformation, and joint display. The fit of integration describes the extent to which the qualitative and quantitative findings cohere. Understanding these principles and practices of integration can help health services researchers leverage the strengths of mixed methods. PMID:24279835

A new safety culture model is constructed and is applied to analyze the correlations between safety culture and SMS. On the basis of previous typical definitions, models and theories of safety culture, an in-depth analysis of safety culture's structure, composing elements and their correlations was conducted. A new definition of safety culture was proposed from the perspective of sub-culture. Seven types of safety sub-culture were then defined: safety priority culture, standardizing culture, flexible culture, learning culture, teamwork culture, reporting culture and justice culture. An integrated safety culture model (ISCM) was then put forward based on the definition. The model divides safety culture into an intrinsic latency level and an extrinsic indication level and explains the potential relationship between safety sub-culture and all safety culture dimensions. Finally, in analyzing safety culture and SMS, it is concluded that a positive safety culture is the basis of implementing SMS effectively and that an advanced SMS will improve safety culture all around.

A coarse-mesh discrete nodal integral transport theory method has been developed for the efficient numerical solution of multidimensional transport problems of interest in reactor physics and shielding applications. The method, which is the discrete transport theory analogue and logical extension of the nodal Green's function method previously developed for multidimensional neutron diffusion problems, utilizes the same transverse integration procedure to reduce the multidimensional equations to coupled one-dimensional equations. This is followed by the conversion of the differential equations to local, one-dimensional, in-node integral equations by integrating back along neutron flight paths. One-dimensional and two-dimensional transport theory test problems have been systematically studied to verify the superior computational efficiency of the new method.

Hubbell et al. (J. Res. Nat. Bureau Standards 64C (1960) 121) obtained a series expansion for the calculation of the radiation field generated by a plane isotropic rectangular source (plaque), in which the leading term is the integral H(a,b). In this paper another integral I(a,b), which is related to the integral H(a,b), has been solved by the rational parameter polynomial method. From I(a,b), we compute H(a,b). Using this method the integral I(a,b) is expressed in the form of a polynomial of a rational parameter. Generally, a function f(x) is expressed in terms of x; in this method it is expressed in terms of x/(1+x). In this way, the accuracy of the expression is good over a wide range of x compared to the earlier approach. The results for I(a,b) and H(a,b) are given for a sixth-degree polynomial and are found to be in good agreement with the results obtained by numerically integrating the integral. Accuracy could be increased either by increasing the degree of the polynomial or by dividing the range of integration. The results of H(a,b) and I(a,b) are given for values of b and a up to 2.0 and 20.0, respectively.
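
The benefit of the rational parameter can be demonstrated on a stand-in function (arctan here, purely illustrative, not the paper's I(a,b)): a degree-6 least-squares polynomial in t = x/(1+x) stays accurate across the whole range [0, 20], whereas the same-degree polynomial in x itself does not.

```python
import numpy as np

f = np.arctan                        # stand-in smooth function on [0, 20]
x = np.linspace(0.0, 20.0, 2001)
t = x / (1.0 + x)                    # rational parameter maps [0, 20] -> [0, 20/21]

coef_t = np.polyfit(t, f(x), 6)      # degree-6 polynomial in the rational parameter
coef_x = np.polyfit(x, f(x), 6)      # ordinary degree-6 polynomial for comparison

err_t = np.max(np.abs(np.polyval(coef_t, t) - f(x)))   # uniform error in t
err_x = np.max(np.abs(np.polyval(coef_x, x) - f(x)))   # uniform error in x
```

Because t compresses the long tail of x into a bounded interval, the polynomial in t approximates the function's large-x behaviour far better at the same degree.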

Helium mass-spectrometer leak testing is the most sensitive leak test method. It gives very reliable and sensitive test results. In the last few years the application of helium leak testing has gained more importance due to increased public awareness of safety and of environmental pollution caused by a number of growing chemical and other such industries. Helium leak testing is carried out and specified in most critical-area applications such as the nuclear, space, chemical and petrochemical industries.

A comprehensive source on mixed data analysis, Analysis of Mixed Data: Methods & Applications summarizes the fundamental developments in the field. Case studies are used extensively throughout the book to illustrate interesting applications from economics, medicine and health, marketing, and genetics. Carefully edited for smooth readability and seamless transitions between chapters, all chapters follow a common structure, with an introduction and a concluding summary, and include illustrative examples from real-life case studies in developmental toxicology.

The numerical version of the Laplace asymptotics has been used to evaluate the coordinates of extrema of multivariate continuous and discontinuous test functions. The computer experiments performed demonstrate the high efficiency of the proposed integration method. The saturating dependence of the extremum coordinates on parameters such as the number of integration subregions and the value of K, which theoretically goes to infinity, has been studied in detail for the limitand, a ratio of two Laplace integrals with exponentiated K. The given method is an integral equivalent of the method of weighted means. As opposed to standard optimization methods of zeroth, first and second order, the proposed method can also be successfully applied to optimize discontinuous objective functions. The integration method can also be applied in cases where conventional techniques fail due to poor analytical properties of the objective functions near extremal points. The proposed method is efficient in searching for both local and global extrema of multimodal objective functions. 12 refs.; 4 tabs.
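
The weighted-means idea can be sketched in one dimension: the ratio of two Laplace integrals, ∫ x·e^{K·f(x)} dx / ∫ e^{K·f(x)} dx, concentrates on the maximizer of f as K grows, even when f is discontinuous. The objective and parameter values below are illustrative only.

```python
import math

def f(x):
    # discontinuous objective with its global maximum at x = 1.5
    return -(x - 1.5) ** 2 if x < 2.5 else -5.0

def laplace_argmax(f, a, b, K=200.0, n=20_000):
    """Estimate argmax f on [a, b] as a ratio of two Laplace integrals."""
    h = (b - a) / n
    num = den = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h            # midpoint rule over a uniform grid
        w = math.exp(K * f(x))           # exponentiated objective as weight
        num += x * w
        den += w
    return num / den

x_star = laplace_argmax(f, 0.0, 4.0)     # converges to 1.5 as K grows
```

Because only function evaluations are needed, no derivatives, the discontinuity at x = 2.5 causes no difficulty, in line with the comparison to zeroth/first/second-order methods above.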

In order to improve the safety, economy and reliability of the operation of a nuclear power plant (NPP), a novel integrated management method is proposed based on the 'integration' concept of computer and contemporary integrated manufacturing systems (CIMS). The design of an integrated management system for NPPs is studied. In the design of this system, an information integration method based on database and product data management (PDM) technology is adopted. In order to design an integrated management system satisfying the needs of NPP management, all activities of an NPP are divided into different categories according to their characteristics. Subsystems under the general management system conduct the management work of the different categories. All subsystems are interrelated in the CIMS environment, but relatively independent. The application of CIMS to NPPs provides a new way for the scientific management of NPPs, and makes the best of human, material and information resources. (authors)

For one-dimensional geometries, the transport equation with linearly anisotropic scattering can be reduced to a single integral equation; this is a singular-kernel FREDHOLM equation of the second kind. When a conventional projective method, that of GALERKIN, is applied to the solution of this equation, the well-known collision probability algorithm is obtained. Piecewise polynomial expansions are used to represent the flux. In the ANILINE code, the flux is assumed to be linear in plane geometry and parabolic in both cylindrical and spherical geometries. An integral relationship was found between the one-dimensional isotropic and anisotropic kernels; this allows the new matrix elements (issuing from the anisotropic kernel) to be reduced to classic collision probabilities of the isotropic scattering equation. For cylindrical and spherical geometries an approximate representation of the current was used to avoid an additional numerical integration. Reflective boundary conditions were considered; in plane geometry the reflection is assumed specular, while for the other geometries the isotropic reflection hypothesis has been adopted. Furthermore, the ANILINE code can deal with an incoming isotropic current. Numerous checks were performed in monokinetic theory. Critical radii and albedos were calculated for homogeneous slabs, cylinders and spheres. For heterogeneous media, the thermal utilization factor obtained by this method was compared with the theoretical result based upon a formula by BENOIST. Finally, ANILINE was incorporated into the multigroup APOLLO code, which made it possible to analyse the MINERVA experimental reactor in transport theory with 99 groups. The ANILINE method is particularly suited to the treatment of strongly anisotropic media with considerable flux gradients. It is also well adapted to the calculation of reflectors and, in general, to the exact analysis of anisotropic effects in large-sized media. [fr]

Higher-order integral equation methods have been investigated. The study has focused on improving the accuracy and efficiency of the Method of Moments (MoM) applied to electromagnetic problems. A new set of hierarchical Legendre basis functions of arbitrary order is developed. The new basis...

In this paper, we develop two novel pricing methods for solving an integer program. We demonstrate the methods by solving an integrated commercial fishery planning model (IFPM). In this problem, a fishery manager must schedule fishing trawlers (determine when and where the trawlers should go fishing, and when the ...

This paper describes a method used to integrate a train of fast, nanosecond-wide pulses. The pulses come from current transformers in an RF LINAC beamline. Because they are ac signals and have no dc component, true mathematical integration would yield zero over the pulse train period, or an equally erroneous value because of a dc baseline shift. The circuit used to integrate the pulse train first stretches the pulses to 35 ns FWHM. The signals are then fed into a high-speed, precision rectifier which restores a true dc baseline for the following stage - a fast, gated integrator. The rectifier is linear over 55 dB in excess of 25 MHz, and the gated integrator is linear over a 60 dB range with input pulse widths as short as 16 ns. The assembled system is linear over 30 dB with a 6 MHz input signal.

Multigrid methods are ideal for solving the increasingly large-scale problems that arise in numerical simulations of physical phenomena because of their potential for computational costs and memory requirements that scale linearly with the degrees of freedom. Unfortunately, they have historically been limited by their applicability to elliptic-type problems and the need for special handling in their implementation. In this paper, we present an overview of several recent theoretical and algorithmic advances made by the TOPS multigrid partners and their collaborators in extending the applicability of multigrid methods. Specific examples presented include quantum chromodynamics, radiation transport, and electromagnetics.
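
The classical elliptic setting these advances start from can be illustrated with a minimal geometric multigrid V-cycle for the 1-D Poisson problem −u'' = f on (0,1) with homogeneous Dirichlet data. This is a textbook sketch, not TOPS code: a few cycles of weighted-Jacobi smoothing plus coarse-grid correction reduce the residual by many orders of magnitude at O(N) cost.

```python
import numpy as np

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def jacobi(u, f, h, sweeps, omega=2/3):
    # weighted Jacobi smoother (vectorized: RHS uses old values only)
    for _ in range(sweeps):
        u[1:-1] += omega * 0.5 * (u[:-2] + u[2:] + h**2 * f[1:-1] - 2 * u[1:-1])
    return u

def v_cycle(u, f, h):
    n = len(u) - 1
    if n <= 2:                                # coarsest grid: one unknown, solve exactly
        u[1] = h**2 * f[1] / 2
        return u
    u = jacobi(u, f, h, 3)                    # pre-smooth
    r = residual(u, f, h)
    rc = np.zeros(n // 2 + 1)
    rc[1:-1] = 0.25 * (r[1:-2:2] + 2 * r[2:-1:2] + r[3::2])   # full-weighting restriction
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h)                # coarse-grid correction
    e = np.zeros_like(u)
    e[::2] = ec                               # prolongation: coincident points...
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])        # ...plus linear interpolation
    u += e
    return jacobi(u, f, h, 3)                 # post-smooth

n = 256
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi**2 * np.sin(np.pi * x)              # exact solution u = sin(pi x)

u = np.zeros(n + 1)
for _ in range(10):
    u = v_cycle(u, f, h)

res_norm = np.max(np.abs(residual(u, f, h)))          # algebraic residual
sol_error = np.max(np.abs(u - np.sin(np.pi * x)))     # discretization-level error
```

The per-cycle contraction factor is essentially independent of n, which is the linear-scaling property the paragraph above refers to.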

Digital cameras have been around for many years, but a new breed of consumer-market cameras is hitting the mainstream. By using these devices, system designers and integrators will be well positioned to take advantage of technological advances developed to support multimedia and imaging applications on the PC platform. Having these new cameras on the consumer market means lower cost, but it does not necessarily guarantee ease of integration. There are many issues that need to be accounted for, such as image quality, maintainable frame rates, image size and resolution, supported operating systems, and ease of software integration. This paper will briefly describe a couple of the consumer digital standards, and then discuss some of the advantages and pitfalls of integrating both USB and Firewire cameras into computer/machine vision applications.

This contributed volume contains a collection of articles on state-of-the-art developments on the construction of theoretical integral techniques and their application to specific problems in science and engineering. Written by internationally recognized researchers, the chapters in this book are based on talks given at the Thirteenth International Conference on Integral Methods in Science and Engineering, held July 21–25, 2014, in Karlsruhe, Germany. A broad range of topics is addressed, from problems of existence and uniqueness for singular integral equations on domain boundaries to numerical integration via finite and boundary elements, conservation laws, hybrid methods, and other quadrature-related approaches. This collection will be of interest to researchers in applied mathematics, physics, and mechanical and electrical engineering, as well as graduate students in these disciplines and other professionals for whom integration is an essential tool.

Calculated doses for comparison with limits resulting from discharges into the environment should be summed across all relevant pathways and food groups to ensure adequate protection. The current methodology for assessments used in the Radioactivity in Food and the Environment (R.I.F.E.) reports separates doses from pathways related to liquid discharges of radioactivity to the environment from those due to gaseous releases. Surveys of local inhabitant food consumption and occupancy rates are conducted in the vicinity of nuclear sites. Information has been recorded in an integrated way, such that the data for each individual are recorded for all pathways of interest. These can include consumption of foods such as fish, crustaceans, molluscs, fruit and vegetables, milk and meats. Occupancy times over beach sediments and time spent in close proximity to the site are also recorded for inclusion of external and inhalation radiation dose pathways. The integrated habits survey data may be combined with monitored environmental radionuclide concentrations to calculate total dose. The criteria for successful adoption of a method for this calculation were: Reproducibility (can others easily use the approach and reassess doses?); Rigour and realism (how good is the match with reality?); Transparency (a measure of the ease with which others can understand how the calculations are performed and what they mean); Homogeneity (is the group receiving the dose relatively homogeneous with respect to age, diet and those aspects that affect the dose received?). Five methods of total dose calculation were compared and ranked according to their suitability. Each method was labelled (A to E) and given a short, relevant name for identification. The methods are described below: A) Individual: doses to individuals are calculated and critical group selection is dependent on dose received. B) Individual Plus: as in A, but consumption and occupancy rates for high dose is used to derive rates for application in
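
Summing a total dose across pathways from integrated habits data amounts to a simple weighted sum. The sketch below uses entirely hypothetical intake rates, concentrations and dose coefficients (not R.I.F.E. data) to show the shape of the calculation for one surveyed individual.

```python
# Hypothetical habits-survey record: consumption (kg/y) and occupancy (h/y).
habits = {"fish": 40.0, "crustaceans": 5.0, "molluscs": 2.0}   # kg/y
occupancy_h = 300.0                                            # h/y over beach sediment

# Hypothetical monitored concentrations (Bq/kg) and dose factors.
conc = {"fish": 12.0, "crustaceans": 30.0, "molluscs": 25.0}   # Bq/kg
dose_per_bq = 1.3e-8                                           # Sv per Bq ingested (illustrative)
ext_dose_rate = 0.05e-6                                        # Sv/h over sediment (illustrative)

# Total dose = sum over ingestion pathways + external pathway.
ingestion_dose = sum(habits[food] * conc[food] * dose_per_bq for food in habits)
external_dose = occupancy_h * ext_dose_rate
total_dose_sv = ingestion_dose + external_dose
```

Method A above would repeat this per individual and then select the critical group from the resulting dose distribution.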

A review of methods for the integration of reliability and design engineering was carried out to establish a reliability program philosophy, an initial set of methods, and procedures to be used by both the designer and reliability analyst. The report outlines a set of procedures which implements a philosophy that requires increased involvement by the designer in reliability analysis. Discussions of each method reviewed include examples of its application

WO15090426A1 Sensor evaluation device and method for operating said device Integrated sensor evaluation circuit for evaluating a sensor signal (14) received from a sensor (12), having a first connection (28a) for connection to the sensor and a second connection (28b) for connection to the sensor. The integrated sensor evaluation circuit comprises a configuration data memory (16) for storing configuration data which describe signal properties of a plurality of sensor control signals (26a-c). T...

We give an introduction to the calculation of path integrals on a lattice, with the quantum harmonic oscillator as an example. In addition to providing an explicit computational setup and corresponding pseudocode, we pay particular attention to the existence of autocorrelations and the calculation of reliable errors. The over-relaxation technique is presented as a way to counter strong autocorrelations. The simulation methods can be extended to compute observables for path integrals in other settings.
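
The lattice setup described above can be made concrete with a short Metropolis simulation of the Euclidean harmonic oscillator (ħ = m = ω = 1), measuring ⟨x²⟩, which should approach 1/2 for a long lattice with small spacing. This is a minimal sketch: the over-relaxation and error-analysis techniques discussed in the paper are omitted here.

```python
import math, random

a, N = 0.1, 100                      # lattice spacing and number of sites (periodic)
rng = random.Random(42)
x = [0.0] * N                        # cold start

def action_diff(i, new):
    """Change in the Euclidean action S = sum[(x_{i+1}-x_i)^2/(2a) + a*x_i^2/2]
    when site i is set to `new` (only the three affected terms matter)."""
    xm, xp, old = x[(i - 1) % N], x[(i + 1) % N], x[i]
    s = lambda v: (xm - v) ** 2 / (2 * a) + (v - xp) ** 2 / (2 * a) + a * v * v / 2
    return s(new) - s(old)

def sweep(step=0.5):
    # one Metropolis update per site
    for i in range(N):
        new = x[i] + rng.uniform(-step, step)
        dS = action_diff(i, new)
        if dS <= 0 or rng.random() < math.exp(-dS):
            x[i] = new

for _ in range(500):                 # thermalisation
    sweep()

x2_samples = []
for _ in range(5000):                # measurement sweeps
    sweep()
    x2_samples.append(sum(v * v for v in x) / N)

x2_mean = sum(x2_samples) / len(x2_samples)   # estimate of <x^2>, ~0.5
```

In practice, as the paper stresses, one would bin the samples to account for autocorrelations before quoting an error bar.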

Full Text Available Wireless Integrated Information Network (WMN) consists of integrated information that can get data from its surroundings, such as images and voice. To transmit information, large resources are required, which decreases the service time of the network. In this paper we present a Classification Approach based on Vector Image Comparison (VIC) for WMN that improves the service time of the network. The available methods for sub-region selection and conversion are also proposed.

The Lagrangian in the path integral solution of the master equation of a stationary Markov process is derived by applying the Ehrenfest-type theorem of quantum mechanics and the Cauchy method of finding inverse functions. Applied to the non-linear Fokker-Planck equation, the authors reproduce the result obtained by integrating over Fourier series coefficients and by other methods.

Introduction: The Integrated Medical Model (IMM) Project represents one aspect of NASA's Human Research Program (HRP) to quantitatively assess medical risks to astronauts for existing operational missions as well as missions associated with future exploration and commercial space flight ventures. The IMM takes a probabilistic approach to assessing the likelihood and specific outcomes of one hundred medical conditions within the envelope of accepted space flight standards of care over a selectable range of mission capabilities. A specially developed Integrated Medical Evidence Database (iMED) maintains evidence-based, organizational knowledge across a variety of data sources. Since becoming operational in 2011, version 3.0 of the IMM, the supporting iMED, and the expertise of the IMM project team have contributed to a wide range of decision and informational processes for the space medical and human research community. This presentation provides an overview of the IMM conceptual architecture and range of application through examples of actual space flight community questions posed to the IMM project. Methods: Figure 1 [see document] illustrates the IMM modeling system and scenario process. As illustrated, the IMM computational architecture is based on Probabilistic Risk Assessment techniques. Nineteen assumptions and limitations define the IMM application domain. Scenario definitions include crew medical attributes and mission specific details. The IMM forecasts probabilities of loss of crew life (LOCL), evacuation (EVAC), quality time lost during the mission, number of medical resources utilized and the number and type of medical events by combining scenario information with in-flight, analog, and terrestrial medical information stored in the iMED. In addition, the metrics provide the integrated information necessary to estimate optimized in-flight medical kit contents under constraints of mass and volume or acceptable level of mission risk. Results and Conclusions

Some engineering applications of heuristic multilevel optimization methods are presented and the discussion focuses on the dependency matrix that indicates the relationship between problem functions and variables. Coordination of the subproblem optimizations is shown to be typically achieved through the use of exact or approximate sensitivity analysis. Areas for further development are identified.

An integral formulation of the variational nodal method is presented and applied to a series of benchmark criticality problems. The method combines an integral transport treatment of the even-parity flux within the spatial node with an odd-parity spherical harmonics expansion of the Lagrange multipliers at the node interfaces. The response matrices that result from this formulation are compatible with those in the VARIANT code at Argonne National Laboratory. Either homogeneous or heterogeneous nodes may be employed. In general, for calculations requiring higher-order angular approximations, the integral method yields solutions with comparable accuracy while requiring substantially less CPU time and memory than the standard spherical harmonics expansion using the same spatial approximations. (author)

Differences in the conceptualization and operationalization of health-related concepts may exist across cultures. Such differences underscore the importance of examining conceptual equivalence when adapting and translating instruments. In this article, we describe an integrated method for exploring conceptual equivalence within the process of adapting and translating measures. The integrated method involves five phases including selection of instruments for cultural adaptation and translation; assessment of conceptual equivalence, leading to the generation of a set of items deemed to be culturally and linguistically appropriate to assess the concept of interest in the target community; forward translation; back translation (optional); and pre-testing of the set of items. Strengths and limitations of the proposed integrated method are discussed. (c) 2010 Wiley Periodicals, Inc.

Effective use of the Fourier series boundary element method (FBEM) for everyday applications is hindered by the significant numerical problems that have to be overcome for its implementation. In the FBEM formulation for acoustics, some integrals over the angle of revolution arise, which need to be

Data integration is a crucial element in mixed methods analysis and conceptualization. It has three principal purposes: illustration, convergent validation (triangulation), and the development of analytic density or "richness." This article discusses such applications in relation to new technologies for social research, looking at three…

particularly indicated for construction projects with a high commitment to sustainability in general and to energy performance in particular. The literature review also reveals that the key factor in the process efficiency of all project delivery methods is collaboration between the actors involved in the project. Partnering methods can have a substantial positive influence on process performance. The study of the legal limitations imposed by the currently applicable public procurement Directive 2004/18/EC shows that even though a limited number of tender options are available, it is possible to tender projects that apply integrated project delivery methods using the competitive dialogue procedure. Moreover, the recently approved but not yet enacted public procurement Directive 2014/24/EU facilitates even further the use of competitive dialogue tenders for social housing energy renovations.
Project delivery methods in European social housing energy renovations
This study is based on five case studies, 36 questionnaires and 14 expert interviews, and identified four main project delivery methods for the energy renovation of social housing, namely:
• Step-by-Step (SBS)
• Design-Bid-Build (DBB)
• Design-Build (DB)
• Design-Build-Maintain (DBM)
SBS can be considered a major renovation when the replacement of a series of building components eventually produces the same final result as a renovation project. In order to optimise the service lives of building components, an SHO might choose to split a major renovation project into a series of minor renovations. Cost-efficiency is achieved by procuring a large number of replacements only when a particular component has reached the end of its service life. This project delivery method will not usually include a design phase because these interventions usually involve replacing building products and systems. DBB, DB and DBM take place all at once and involve design companies, construction companies and maintenance companies. The

A quantitative description of nuclear backscattering and reaction processes is made. Various formulas pertinent to nuclear microanalysis are assembled in a manner useful for experimental application. Convolution integrals relating profiles of atoms in a metal substrate to the nuclear reaction spectra obtained in the laboratory are described and computed. Energy straggling and multiple scattering are explicitly included and shown to be important. Examples of the application of the method to simple backscattering, oxide films, and implanted gas are discussed. 7 figures, 1 table

... These are processes such as thermo-forming, gas-assisted injection moulding and all kinds of simultaneous multi-component polymer processing operations. In all such polymer processing operations, free surfaces (or interfaces) are present and the dynamics of these surfaces are of interest. In the "3D Lagrangian Integral Method" used to simulate viscoelastic flow, the governing equations are solved for the particle positions (Lagrangian kinematics). Therefore, the transient motion of surfaces can be followed in a particularly simple fashion even in 3D viscoelastic flow. The "3D Lagrangian Integral Method" is described...

Full Text Available Advances in manufacturing process technology are key ensembles for the production of integrated circuits in the sub-micrometer region. It is of paramount importance to assess the effects of tolerances in the manufacturing process on the performance of modern integrated circuits. The polynomial chaos expansion has emerged as a suitable alternative to standard Monte Carlo-based methods that are accurate, but computationally cumbersome. This paper provides an overview of the most recent developments and challenges in the application of polynomial chaos-based techniques for uncertainty quantification in integrated circuits, with particular focus on high-dimensional problems.
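The idea behind polynomial chaos can be illustrated on a one-dimensional toy problem. The sketch below is not taken from the techniques surveyed in the abstract above; it assumes a hypothetical "circuit performance" Y = exp(X) of a single Gaussian process parameter X ~ N(0, 1), expands it in probabilists' Hermite polynomials via Gauss-Hermite quadrature, and reads the mean and variance directly from the spectral coefficients:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial

# Toy "circuit performance" as a function of one Gaussian parameter
# (illustrative choice; closed-form moments exist for comparison).
def performance(x):
    return np.exp(x)  # Y = exp(X), X ~ N(0, 1)

P = 8                               # truncation order of the expansion
nodes, weights = hermegauss(40)     # quadrature for weight exp(-x^2/2)
weights = weights / np.sqrt(2 * np.pi)  # normalize to the standard normal pdf

# Spectral coefficients c_k = E[Y * He_k(X)] / k!
coeffs = []
for k in range(P + 1):
    he_k = hermeval(nodes, [0] * k + [1])   # He_k evaluated at the nodes
    coeffs.append(np.sum(weights * performance(nodes) * he_k) / factorial(k))

mean = coeffs[0]                    # first coefficient is the mean
variance = sum(c * c * factorial(k) for k, c in enumerate(coeffs) if k > 0)
print(mean, variance)               # exact values: e^(1/2) and e^2 - e
```

A single set of quadrature evaluations yields both moments, which is the efficiency gain over plain Monte Carlo that the abstract alludes to.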

Methods for 3D measurement are required for very varied applications in the industrial field. This includes tasks of quality assurance and plant monitoring, among others. It should be possible to apply the process flexibly, it should require interruptions of production that are as short as possible, and it should meet the required accuracies. These requirements can be met by photogrammetric methods of measurement. The article introduces these methods and shows their capabilities with various selected examples (e.g. the replacement of large components in a pressurized water reactor, and aircraft measurements). (orig./DG) [de

Ontario Hydro has established a reliability program in support of its substantial nuclear program. Application of the reliability program to achieve both production and safety goals is described. The value of such a reliability program is evident in the record of Ontario Hydro's operating nuclear stations. The factors which have contributed to the success of the reliability program are identified as line management's commitment to reliability; selective and judicious application of reliability methods; establishing performance goals and monitoring the in-service performance; and collection, distribution, review and utilization of performance information to facilitate cost-effective achievement of goals and improvements. (orig.)

A reference guide for applications of SEM using Mplus Structural Equation Modeling: Applications Using Mplus is intended as both a teaching resource and a reference guide. Written in non-mathematical terms, this book focuses on the conceptual and practical aspects of Structural Equation Modeling (SEM). Basic concepts and examples of various SEM models are demonstrated along with recently developed advanced methods, such as mixture modeling and model-based power analysis and sample size estimate for SEM. The statistical modeling program, Mplus, is also featured and provides researchers with a

Basic aspects of the subject and methodology for a new and rapidly developing area of research that has emerged at the intersection of physics and control theory (cybernetics) and emphasizes the application of cybernetic methods to the study of physical systems are reviewed. Speed-gradient and Hamiltonian solutions for energy control problems in conservative and dissipative systems are presented. Application examples such as the Kapitza pendulum, controlled overcoming of a potential barrier, and controlling coupled oscillators and molecular systems are presented. A speed-gradient approach to modeling the dynamics of physical systems is discussed. (reviews of topical problems)

Full Text Available This article considers some questions of integrating methods of teaching foreign languages with the means of education informatization. Attention is focused on the fact that the application of information technologies in teaching foreign languages integrally supplements and expands the possibilities of effectively solving didactic tasks in the creation of modern pedagogical models, and is a definite factor in the integration of methods and forms of education.

This paper highlights the general trend towards further monolithic integration in power applications by enabling power management and interfacing solutions in advanced CMOS nodes. The need to combine high-density digital circuits, power-management circuits, and robust interfaces in a single

This paper structures the summary of the panel held at the 9th International Conference on Enterprise Information Systems, Funchal, Madeira, 12-16 June 2007 that addressed the following question: "Are you still working on Inter-Enterprise System and Application Integration?" The panel aggregated

We discuss the applicability of schema integration techniques developed for tightly-coupled database interoperation to interoperation of databases stemming from different modelling contexts. We illustrate that in such an environment, it is typically quite difficult to infer the real-world semantics

Thermionic triode and integrated circuit technology is in its infancy but emerging. Thermionic triodes can operate at relatively high voltages (up to 2000 V) and currents of at least tens of amperes. These devices, including their use in integrated circuitry, operate at high temperatures (800 °C) and are very tolerant to nuclear and other radiations. These properties can be very useful in large space power applications such as that represented by the SP-100 system, which uses a nuclear reactor. This paper presents an assessment of the application of thermionic integrated circuitry with space nuclear power system technology. A comparison is made with conventional semiconductor circuitry considering a dissipative shunt regulator for an SP-100 type nuclear power system rated at 100 kW. The particular advantages of thermionic circuitry are significant reductions in the size and mass of the heat dissipation and radiation shield subsystems.

Discusses the application of computer-assisted learning methods to the interpretation of infrared, nuclear magnetic resonance, and mass spectra; and outlines extensions into the area of integrated spectroscopy. (Author/CMV)

Compared to other fields of engineering, in mechanical engineering, the Discrete Element Method (DEM) is not yet a well known method. Nevertheless, there is a variety of simulation problems where the method has obvious advantages due to its meshless nature. For problems where several free bodies can collide and break after having been largely deformed, the DEM is the method of choice. Neighborhood search and collision detection between bodies as well as the separation of large solids into smaller particles are naturally incorporated in the method. The main DEM algorithm consists of a relatively simple loop that basically contains the three substeps contact detection, force computation and integration. However, there exists a large variety of different algorithms to choose the substeps to compose the optimal method for a given problem. In this contribution, we describe the dynamics of particle systems together with appropriate numerical integration schemes and give an overview over different types of particle interactions that can be composed to adapt the method to fit to a given simulation problem. Surface triangulations are used to model complicated, non-convex bodies in contact with particle systems. The capabilities of the method are finally demonstrated by means of application examples
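The three-substep loop described above (contact detection, force computation, integration) can be sketched in a minimal form. The example below is a one-dimensional two-particle soft-sphere collision with an illustrative linear contact spring and symplectic Euler integration; all parameter values are placeholders, not taken from any particular DEM code:

```python
import numpy as np

# Minimal 1D soft-sphere DEM sketch: two equal particles approach head-on,
# collide via a linear contact spring, and separate.
radius, mass, k_contact = 0.5, 1.0, 1e4   # illustrative values
x = np.array([0.0, 3.0])                  # positions
v = np.array([1.0, -1.0])                 # velocities (head-on)
dt = 1e-4

for _ in range(20000):
    # 1) contact detection: overlap of the two spheres
    overlap = 2 * radius - (x[1] - x[0])
    # 2) force computation: linear repulsive spring when overlapping
    f = k_contact * overlap if overlap > 0 else 0.0
    forces = np.array([-f, f])            # equal and opposite
    # 3) explicit time integration (symplectic Euler)
    v += forces / mass * dt
    x += v * dt

print(v)  # after the elastic collision the velocities are exchanged
```

Because the contact forces are equal and opposite at every step, momentum is conserved exactly; the energy error of the explicit integrator stays bounded as long as the time step resolves the contact stiffness.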

This paper proposes an integrated Markovian and back-propagation neural network approach to compute the reliability of a system. Since states of failure occurrence are significant elements for accurate reliability computation, a Markovian-based reliability assessment method is designed. Due to the drawbacks shown by the Markovian model for steady-state reliability computations and by the neural network for the initial training pattern, an integration called Markov-neural is developed and evaluated. To show the efficiency of the proposed approach, comparative analyses are performed. Also, for managerial implication purposes, an application case for multiple automated guided vehicles (AGVs) in manufacturing networks is conducted.
- Highlights:
• Integrated Markovian and back-propagation neural network approach to compute reliability.
• Markovian-based reliability assessment method.
• Managerial implication is shown in an application case for multiple automated guided vehicles (AGVs) in manufacturing networks.
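The Markovian side of such an assessment can be sketched for the simplest case: a single machine that fails and is repaired at constant rates. This is not the paper's Markov-neural integration, only the standard steady-state balance computation it builds on; the rates are illustrative placeholders:

```python
import numpy as np

# Two-state Markov availability sketch: fail at rate lam, repair at rate mu.
# Steady-state probabilities pi solve pi Q = 0 with sum(pi) = 1.
lam, mu = 0.01, 0.5            # failures/hour, repairs/hour (illustrative)
Q = np.array([[-lam,  lam],    # generator matrix: state 0 = up, state 1 = down
              [  mu,  -mu]])

# Replace one redundant balance equation with the normalization condition.
A = np.vstack([Q.T[:-1], np.ones(2)])
b = np.array([0.0, 1.0])
pi = np.linalg.solve(A, b)

availability = pi[0]
print(availability)            # closed form: mu / (lam + mu)
```

For larger state spaces the same linear solve applies, which is where the numerical drawbacks mentioned in the abstract begin to matter.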

A novel integration-based yield estimation method is developed for yield optimization of integrated circuits. This method tries to integrate the joint probability density function on the acceptability region directly. To achieve this goal, the simulated performance data of unknown distribution should be converted to follow a multivariate normal distribution by using Box-Cox transformation (BCT). In order to reduce the estimation variances of the model parameters of the density function, orthogonal array-based modified Latin hypercube sampling (OA-MLHS) is presented to generate samples in the disturbance space during simulations. The principle of variance reduction of model parameters estimation through OA-MLHS together with BCT is also discussed. Two yield estimation examples, a fourth-order OTA-C filter and a three-dimensional (3D) quadratic function are used for comparison of our method with Monte Carlo based methods including Latin hypercube sampling and importance sampling under several combinations of sample sizes and yield values. Extensive simulations show that our method is superior to other methods with respect to accuracy and efficiency under all of the given cases. Therefore, our method is more suitable for parametric yield optimization. (semiconductor integrated circuits)
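The stratified-sampling ingredient can be illustrated with plain Latin hypercube sampling on a toy yield problem. The sketch below does not reproduce the paper's OA-MLHS or Box-Cox steps; the performance function and spec limit are invented for illustration (a quadratic metric of two standard-normal disturbances, whose exact yield is known from the chi-squared distribution):

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
inv_cdf = np.vectorize(NormalDist().inv_cdf)

def latin_hypercube(n, dim):
    """n points in [0,1)^dim with exactly one point per stratum on each axis."""
    u = (rng.random((n, dim)) + np.arange(n)[:, None]) / n  # stratified
    for j in range(dim):
        rng.shuffle(u[:, j])      # decouple the strata across dimensions
    return u

# Toy yield estimate: disturbances X ~ N(0, I); the design "passes" when
# x1^2 + x2^2 < spec (spec limit is an illustrative placeholder).
n, spec = 1000, 4.0
x = inv_cdf(latin_hypercube(n, 2))   # map strata to standard normals
perf = (x ** 2).sum(axis=1)
yield_estimate = np.mean(perf < spec)
print(yield_estimate)  # exact yield: P(chi2_2 < 4) = 1 - exp(-2) ≈ 0.865
```

The stratification reduces the estimator's variance relative to crude Monte Carlo at the same sample size, which is the effect the paper's OA-MLHS scheme amplifies.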

This paper proposes a real-time hybrid simulation method that will allow complex systems to be tested within the hybrid test framework by employing the convolution integral (CI) method. The proposed CI method is potentially transformative for real-time hybrid simulation. The CI method can allow real-time hybrid simulation to be conducted regardless of the size and complexity of the numerical model and for numerical stability to be ensured in the presence of high frequency responses in the simulation. This paper presents the general theory behind the proposed CI method and provides experimental verification of the proposed method by comparing the CI method to the current integration time-stepping (ITS) method. Real-time hybrid simulation is conducted in the Advanced Hazard Mitigation Laboratory at the University of Connecticut. A seismically excited two-story shear frame building with a magneto-rheological (MR) fluid damper is selected as the test structure to experimentally validate the proposed method. The building structure is numerically modeled and simulated, while the MR damper is physically tested. Real-time hybrid simulation using the proposed CI method is shown to provide accurate results
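The core of a convolution-integral formulation is that the numerical substructure's response is y(t) = ∫ h(t − τ) f(τ) dτ with h the unit-impulse response. The sketch below is a generic discrete version for an undamped single-degree-of-freedom oscillator with illustrative parameters, not the paper's real-time implementation:

```python
import numpy as np

# Discrete convolution (Duhamel) sketch: precompute the unit-impulse
# response h, then obtain the response to any force history by convolution.
m, wn = 1.0, 2 * np.pi           # illustrative SDOF: mass, natural frequency
dt = 1e-3
t = np.arange(0, 2, dt)
h = np.sin(wn * t) / (m * wn)    # unit-impulse response (undamped)

f = np.zeros_like(t)
f[0] = 1.0 / dt                  # discrete approximation of a unit impulse

y = dt * np.convolve(f, h)[: len(t)]  # convolution integral, rectangle rule
print(np.max(np.abs(y - h)))     # impulse input must reproduce h itself
```

Since h is precomputed, the per-step cost of the convolution does not depend on the internal size of the numerical model, which is the property that makes the approach attractive for real-time hybrid testing.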

Presents a series of analytic and numerical methods of solution constructed for important problems arising in science and engineering, based on the powerful operation of integration. This volume is meant for researchers and practitioners in applied mathematics, physics, and mechanical and electrical engineering, as well as graduate students.

The solution of a nonlinear integral equation of Hammerstein type in Hilbert spaces is approximated by means of a fixed point iteration method. Explicit error estimates are given and, in some cases, convergence is shown to be at least as fast as a geometric progression. (author). 25 refs
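A minimal numerical version of such a fixed-point iteration can be written for an equation of the form x(t) = f(t) + ∫₀¹ k(t, s) g(x(s)) ds. The kernel, forcing term, and nonlinearity below are illustrative choices that make the integral operator a contraction (so convergence is geometric, as the abstract states):

```python
import numpy as np

# Fixed-point iteration sketch for a Hammerstein-type integral equation.
n = 201
s = np.linspace(0, 1, n)
w = np.full(n, 1 / (n - 1))
w[0] = w[-1] = 0.5 / (n - 1)      # trapezoid quadrature weights

f = 1 + s                         # forcing term (illustrative)
K = 0.2 * np.outer(s, s)          # kernel k(t, s) = 0.2 t s -> contraction
g = np.sin                        # bounded, Lipschitz-1 nonlinearity

x = f.copy()
for it in range(100):
    x_new = f + K @ (w * g(x))    # apply the integral operator
    if np.max(np.abs(x_new - x)) < 1e-12:
        break
    x = x_new

residual = np.max(np.abs(x - (f + K @ (w * g(x)))))
print(it, residual)
```

With the contraction constant here about 0.1, the error shrinks roughly tenfold per iteration, matching the geometric-progression rate mentioned above.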

This paper titled “Philosophy and Method of Integrative Humanism and Religious Crises in Nigeria: Picking the Essentials”, acknowledges the damaging effects of religious bigotry, fanaticism and creed differences on the social, political and economic development of the country. The need for the cessation of religious ...

In 1991, the AACU issued a report on improving undergraduate education suggesting, in part, that a curriculum should be both comprehensive and cohesive. Since 2008, we have systematically integrated our research methods course with our capstone course in an attempt to accomplish the twin goals of comprehensiveness and cohesion. By taking this…

Confluent education is presented as a method to bridge the gap between cognitive and affective learning. Attention is focused on three main characteristics of confluent education: (a) the integration of four overlapping domains in a learning process (readiness, the cognitive domain, the affective

Full Text Available Therapeutic Involvement is an integral part of all effective psychotherapy. This article is written to illustrate the concept of Therapeutic Involvement in working within a therapeutic relationship -- within the transference -- and with active expressive and experiential methods to resolve traumatic experiences, relational disturbances and life-shaping decisions.

This book serves as a text for one- or two-semester courses for upper-level undergraduates and beginning graduate students and as a professional reference for people who want to solve partial differential equations (PDEs) using finite element methods. The author has attempted to introduce every concept in the simplest possible setting and maintain a level of treatment that is as rigorous as possible without being unnecessarily abstract. Quite a lot of attention is given to discontinuous finite elements, characteristic finite elements, and to the applications in fluid and solid mechanics including applications to porous media flow, and applications to semiconductor modeling. An extensive set of exercises and references in each chapter are provided.

An algorithm for the numerical solution of eddy current problems is described, based on the direct solution of the integral equation for the potentials. In this method only the conducting and iron regions need to be divided into elements, and there are no boundary conditions. Results from two computer programs using this method for iron-free problems for various two-dimensional geometries are presented and compared with analytic solutions. (author)

The path integral representation has been successfully applied to the study of equilibrium properties of quantum systems for a long time. In particular, such a representation allowed Ginibre to prove the convergence of the low-fugacity expansions for systems with short-range interactions. First, I will show that the crucial trick underlying Ginibre's proof is the introduction of an equivalent classical system made with loops. Within the Feynman-Kac formula for the density matrix, such loops naturally emerge by collecting together the paths followed by particles exchanged in a given cyclic permutation. Two loops interact via an average of two-body genuine interactions between particles belonging to different loops, while the interactions between particles inside a given loop are accounted for in a loop fugacity. It turns out that the grand-partition function of the genuine quantum system exactly reduces to its classical counterpart for the gas of loops. The corresponding so-called magic formula can be combined with standard Mayer diagrammatics for the classical gas of loops. This provides low-density representations for the quantum correlations or thermodynamical functions, which are quite useful when collective effects must be taken into account properly. Indeed, resummations and/or reorganizations of Mayer graphs can be performed by exploiting their remarkable topological and combinatorial properties, while statistical weights and bonds are purely c-numbers. The interest of that method will be illustrated through a brief description of its application to two long-standing problems, namely recombination in Coulomb systems and condensation in the interacting Bose gas.

The use of orthonormal wavelet basis functions for solving singular integral scattering equations is investigated. It is shown that these basis functions lead to sparse matrix equations which can be solved by iterative techniques. The scaling properties of wavelets are used to derive an efficient method for evaluating the singular integrals. The accuracy and efficiency of the wavelet transforms are demonstrated by solving the two-body T-matrix equation without partial wave projection. The resulting matrix equation which is characteristic of multiparticle integral scattering equations is found to provide an efficient method for obtaining accurate approximate solutions to the integral equation. These results indicate that wavelet transforms may provide a useful tool for studying few-body systems

Computational methods have been used with considerable success to complement radiation measurements in solving a wide range of industrial problems. The almost exponential growth of computer capability and applications in the last few years leads to a "black box" mentality for radiation measurement applications. If a black box is defined as any radiation measurement device that is capable of measuring the parameters of interest when a wide range of operating and sample conditions may occur, then the development of computational methods for industrial radiation measurement applications should now be focused on the black box approach and the deduction of properties of interest from the response with acceptable accuracy and reasonable efficiency. Nowadays, increasingly better understanding of radiation physical processes, more accurate and complete fundamental physical data, and more advanced modeling and software/hardware techniques have made it possible to make giant strides in that direction with new ideas implemented with computer software. The Center for Engineering Applications of Radioisotopes (CEAR) at North Carolina State University has been working on a variety of projects in the area of radiation analyzers and gauges for accomplishing this for quite some time, and they are discussed here with emphasis on current accomplishments

Written by a team of pioneering scientists from around the world, Low Temperature Plasma Technology: Methods and Applications brings together recent technological advances and research in the rapidly growing field of low temperature plasmas. The book provides a comprehensive overview of related phenomena such as plasma bullets, plasma penetration into biofilms, discharge-mode transition of atmospheric pressure plasmas, and self-organization of microdischarges. It describes relevant technology and diagnostics, including nanosecond pulsed discharge, cavity ringdown spectroscopy, and laser-induce

A survey of microautoradiographic methods and of their application in biology is given. The current state of biological microautoradiography is shown, focusing on the efficiency of techniques and on special problems proceeding in autoradiographic investigations in biology. Four more or less independent fields of autoradiography are considered. In describing autoradiographic techniques two methodological tasks are emphasized: The further development of the labelling technique in all metabolic studies and of instrumentation and automation of autoradiograph evaluation. (author)

Most of the problems arising in science and engineering are nonlinear. They are inherently difficult to solve. Traditional analytical approximations are valid only for weakly nonlinear problems, and often break down for problems with strong nonlinearity. This book presents the current theoretical developments and applications of the Keller-Box method to nonlinear problems. The first half of the book addresses basic concepts to understand the theoretical framework for the method. In the second half of the book, the authors give a number of examples of coupled nonlinear problems that have been solved

Full Text Available The Harmony Search (HS) method is an emerging metaheuristic optimization algorithm, which has been employed to cope with numerous challenging tasks during the past decade. In this paper, the essential theory and applications of the HS algorithm are first described and reviewed. Several typical variants of the original HS are next briefly explained. As an example of case study, a modified HS method inspired by the idea of Pareto-dominance-based ranking is also presented. It is further applied to handle a practical wind generator optimal design problem.
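The original HS algorithm is compact enough to sketch in full. The version below minimizes the 2-D sphere function; parameter names follow common HS conventions (harmony memory size, HMCR, PAR, bandwidth), but the specific values are illustrative, not those of the paper's modified variant:

```python
import numpy as np

# Minimal Harmony Search sketch on the 2-D sphere function.
rng = np.random.default_rng(42)
dim, lo, hi = 2, -5.0, 5.0
hms, hmcr, par, bw = 10, 0.9, 0.3, 0.1   # memory size, accept/adjust rates

def cost(x):
    return float(np.sum(x ** 2))

memory = rng.uniform(lo, hi, (hms, dim))          # harmony memory
fitness = np.array([cost(x) for x in memory])

for _ in range(3000):
    new = np.empty(dim)
    for j in range(dim):
        if rng.random() < hmcr:                   # draw from harmony memory
            new[j] = memory[rng.integers(hms), j]
            if rng.random() < par:                # pitch adjustment
                new[j] += bw * rng.uniform(-1, 1)
        else:                                     # random re-initialization
            new[j] = rng.uniform(lo, hi)
    new = np.clip(new, lo, hi)
    worst = int(np.argmax(fitness))
    if cost(new) < fitness[worst]:                # replace the worst harmony
        memory[worst], fitness[worst] = new, cost(new)

print(fitness.min())  # approaches 0, the sphere function's minimum
```

The memory-plus-pitch-adjustment structure is what the paper's variants modify, e.g. by Pareto-dominance-based ranking of the harmony memory.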

Effective formalisms play an important role in analyzing phenomena above some given length scale when complete theories are not accessible. In diverse exotic but physically important cases, the usual path-integral techniques used in a standard Quantum Field Theory approach seldom serve as adequate tools. This thesis exposes a new effective method for quantum systems, called the Canonical Effective Method, which has particularly wide applicability in background-independent theories, as in the case of gravitational phenomena. The central purpose of this work is to employ these techniques to obtain semi-classical dynamics from canonical quantum gravity theories. Application to non-associative quantum mechanics is developed and testable results are obtained. Types of non-associative algebras relevant for magnetic-monopole systems are discussed. Possible modifications of hypersurface deformation algebra and the emergence of effective space-times are presented.

Full Text Available In this paper we prove the existence as well as approximations of the solutions for a certain nonlinear generalized quadratic functional integral equation. An algorithm for the solutions is developed and it is shown that the sequence of successive approximations starting at a lower or upper solution converges monotonically to the solutions of the related quadratic functional integral equation under some suitable mixed hybrid conditions. Our main result relies on the Dhage iteration method embodied in a recent hybrid fixed point theorem of Dhage (2014) in partially ordered normed linear spaces. An example is also provided to illustrate the abstract theory developed in the paper.

We have extended the entropic sampling Monte Carlo method to the case of path integral representation of a quantum system. A two-dimensional density of states is introduced into path integral form of the quantum canonical partition function. Entropic sampling technique within the algorithm suggested recently by Wang and Landau (Wang F and Landau D P 2001 Phys. Rev. Lett. 86 2050) is then applied to calculate the corresponding entropy distribution. A three-dimensional quantum oscillator is considered as an example. Canonical distributions for a wide range of temperatures are obtained in a single simulation run, and exact data for the energy are reproduced
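The Wang-Landau ingredient can be illustrated on a small classical stand-in rather than the path-integral system of the abstract. The sketch below estimates the density of states g(E) of a 4-spin periodic Ising chain, whose exact degeneracies are g(-4) = 2, g(0) = 12, g(4) = 2; all control parameters are illustrative:

```python
import numpy as np

# Wang-Landau entropic sampling sketch for a 4-spin periodic Ising chain.
rng = np.random.default_rng(1)
N = 4
levels = {-4: 0, 0: 1, 4: 2}          # map energy -> histogram bin

def energy(s):
    return -int(np.sum(s * np.roll(s, 1)))

s = rng.choice([-1, 1], N)
log_g = np.zeros(3)                    # running estimate of ln g(E)
hist = np.zeros(3)
f = 1.0                                # modification factor (ln f)

while f > 1e-6:
    for _ in range(5000):
        i = rng.integers(N)
        e_old = energy(s)
        s[i] *= -1                     # propose a single spin flip
        e_new = energy(s)
        # accept with probability min(1, g(E_old)/g(E_new))
        if rng.random() >= np.exp(log_g[levels[e_old]] - log_g[levels[e_new]]):
            s[i] *= -1                 # reject: undo the flip
            e_new = e_old
        log_g[levels[e_new]] += f      # penalize the visited level
        hist[levels[e_new]] += 1
    if hist.min() > 0.8 * hist.mean(): # flat-histogram criterion
        f /= 2
        hist[:] = 0

ratio = np.exp(log_g[1] - log_g[0])    # estimates g(0)/g(-4), exactly 6
print(ratio)
```

As in the abstract, a single run yields the whole density of states, from which canonical averages at any temperature follow by reweighting.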

A highly sensitive quench detection method which works even in the presence of an external perturbing magnetic field is reported. The quench signal is obtained from the difference in voltages at the superconducting winding terminals and at the terminals of a secondary winding strongly coupled to the primary. The secondary winding could consist of a "zero-current strand" of the superconducting cable not connected to one of the winding terminals or an integrated normal test wire inside the superconducting cable. Experimental results on quench detection obtained by this method are described. It is shown that the integrated test wire method leads to efficient and sensitive quench detection, especially in the presence of an external perturbing magnetic field

Full Text Available There are many methods which are currently used for assessment of urban public transport system development and operation, e.g. economic analysis, mostly Cost-Benefit Analysis (CBA) and Cost-Effectiveness Analysis (CEA); hybrid methods; measurement methods (surveys, e.g. among passengers, and measurement of traffic volume, vehicle capacity, etc.); and multicriteria decision aiding methods (multicriteria analysis). The main aim of multicriteria analysis is the choice of the most desirable solution from among alternative variants according to different criteria which are difficult to compare against one another. There are several multicriteria methods for assessment of urban public transport system development and operation, e.g. AHP, ANP, Electre, Promethee, Oreste. The paper presents an application of one of the most popular variant ranking methods, the Electre III method. The algorithm of Electre III method usage is presented in detail and then its application for assessment of variants of urban public transport system integration in Cracow is shown. The final ranking of eight variants of integration of the urban public transport system in Cracow (from the best to the worst variant) was drawn up with the application of the Electre III method. For assessment purposes 10 criteria were adopted: economical, technical, environmental, and social; they form a consistent criteria family. The problem was analyzed taking into account different points of view: city authorities, public transport operators, city units responsible for transport management, passengers and other users. Separate models of preferences for all stakeholders were created.

Laplace type integral transformation (LIT) has been applied to wavefunctions. The effect of the inverse transform is also discussed. LIT wavefunctions are tested in the calculation of the ground-state energy of H2+, where the untransformed functions were 1s, 12s, 123s and 1234s-STO. The results presented here show that LIT wavefunctions are applicable in molecular computations. The analytical formulae for two-centre one-electron integrals over LIT wavefunctions are derived by use of a Barnett-Coulson-like expansion of r_b^N (r_b + p)^(-ν). (orig.)

Within the past twenty years, new techniques and methods have emerged in response to new technologies that are based upon the performance of high-purity and well-characterized materials. The National Bureau of Standards, through its Standard Reference Materials (SRMs) Program, provides standards in the form of many of these materials to ensure the accuracy and compatibility of measurements throughout the US and the world. These SRMs are developed using state-of-the-art methods and procedures for both preparation and analysis. Nuclear methods, i.e. activation analysis, constitute an integral part of that analysis process.

In this chapter some of the work developed at the Instituto de Investigaciones Electricas in the area of probabilistic risk analysis is presented. Work in this area has focused basically on two directions: the development and implementation of methods, and applications to real systems. The first part of this paper describes methods development and implementation, presenting an integrated package of computer programs for fault tree analysis. The second part presents some of the most important applications developed for real systems. (author)

A new analysis method specially suited to the inherent difficulties of fusion neutronics was developed to provide detailed studies of fusion neutron transport physics. These studies should provide a better understanding of the limitations and accuracies of typical fusion neutronics calculations. The new analysis method is based on the direct integration of the integral form of the neutron transport equation and employs a continuous-energy formulation with exact treatment of the energy-angle kinematics of the scattering process. In addition, the overall solution is analyzed in terms of uncollided, once-collided, and multi-collided solution components based on a multiple-collision treatment. Furthermore, the numerical evaluations of integrals use quadrature schemes that are based on the actual dependencies exhibited in the integrands. The new DITRAN computer code was developed on the Cyber 205 vector supercomputer to implement this direct-integration multiple-collision fusion neutronics analysis. Three representative fusion reactor models were devised, and the solutions to these problems were studied to provide suitable choices for the numerical quadrature orders as well as the discretized solution grid and to understand the limitations of the new analysis method. As further verification, and as a first step in assessing the accuracy of existing fusion-neutronics calculations, solutions obtained using the new analysis method were compared to typical multigroup discrete ordinates calculations.
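The multiple-collision decomposition begins with the uncollided component, which for a point source in a homogeneous medium is plain exponential attenuation with the total cross section; the first-collision rate then drives the once-collided component. Below is a one-group, single-medium sketch with illustrative cross sections (DITRAN itself uses a continuous-energy formulation and exact scattering kinematics).

```python
import math

# One-group sketch of the uncollided component in a multiple-collision
# decomposition: for an isotropic point source in a homogeneous medium the
# uncollided flux is exponential attenuation with the total cross section,
# and the first-collision (scattering) rate feeds the once-collided term.
# Cross sections below are illustrative only.

def uncollided_flux(source, sigma_t, r):
    """Uncollided scalar flux at distance r (cm) from a point source."""
    return source * math.exp(-sigma_t * r) / (4.0 * math.pi * r**2)

def first_collision_rate(source, sigma_t, sigma_s, r):
    """Scattering collision density that drives the once-collided flux."""
    return sigma_s * uncollided_flux(source, sigma_t, r)

sigma_t, sigma_s = 0.1, 0.08           # macroscopic cross sections, 1/cm
for r in (10.0, 50.0, 100.0):
    print(r, uncollided_flux(1.0, sigma_t, r),
          first_collision_rate(1.0, sigma_t, sigma_s, r))
```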

Background and aim: As a result of New Public Management, a number of industrial models of quality management have been implemented in health care, mainly in hospitals. At the same time, the concept of integrated care has been developed within other parts of the health sector. The aim of the article is to discuss the relevance of integrated care for hospitals. Theory and methods: The discussion is based on application of a conceptual framework outlining a number of organizational models of integrated care. These models are illustrated in a case study of a Danish university hospital implementing a new organization for improving the patient flows of the hospital. The study of the reorganization is based mainly on qualitative data from individual and focus group interviews. Results: The new organization of the university hospital can be regarded as a matrix structure combining a vertical integration of clinical departments with a horizontal integration of patient flows. This structure has elements of both interprofessional and interorganizational integration. A strong focus on teamwork, meetings and information exchange is combined with elements of case management and co-location. Conclusions: It seems that integrated care can be a relevant concept for a hospital. Although the organizational models may challenge established professional boundaries and financial control systems, this concept can be a more promising way to improve the quality of care than the industrial models that have been imported into health care. This application of the concept may also contribute to widening the field of integrated care. PMID:24966806

This report presents the final outcomes and products of the project as performed both at the Massachusetts Institute of Technology and subsequently at Pennsylvania State University. The research project can be divided into three main components: methodology development for decision-making under uncertainty, improving the resolution of the electricity sector to improve integrated assessment, and application of these methods to integrated assessment.

This report presents the final outcomes and products of the project as performed at the Massachusetts Institute of Technology. The research project consists of three main components: methodology development for decision-making under uncertainty, improving the resolution of the electricity sector to improve integrated assessment, and application of these methods to integrated assessment. Results in each area are described in the report.

This book has a collection of articles written by Big Data experts to describe some of the cutting-edge methods and applications from their respective areas of interest, and provides the reader with a detailed overview of the field of Big Data Analytics as it is practiced today. The chapters cover technical aspects of key areas that generate and use Big Data such as management and finance; medicine and healthcare; genome, cytome and microbiome; graphs and networks; Internet of Things; Big Data standards; benchmarking of systems; and others. In addition to different applications, key algorithmic approaches such as graph partitioning, clustering and finite mixture modelling of high-dimensional data are also covered. The varied collection of themes in this volume introduces the reader to the richness of the emerging field of Big Data Analytics.

The Internet of Things (IoT) has influenced human life by extending internet connectivity from human-to-human to human-to-machine and machine-to-machine communication. This research field creates technologies and concepts that allow humans to communicate with machines for specific purposes. This research aimed to integrate the Telegram message-sending service with the e-complaint application of a college. With this integration, users do not need to visit the URL of the e-complaint application; they can simply submit a complaint via Telegram, and the complaint is then forwarded to the e-complaint application. The test results show that the e-complaint integration with the Telegram Bot runs in accordance with the design. The Telegram Bot makes it easy for members of the academic community to submit a complaint, and it offers users the familiar interface that people use every day on their smartphones. Thus, with this system, the work unit complained about can immediately make improvements, since the whole complaint process is delivered rapidly.

Full Text Available Supply chain vulnerability identification and evaluation are extremely important for mitigating supply chain risk. We present an integrated method to assess supply chain vulnerability. The potential failure modes of supply chain vulnerability are analyzed through the SCOR model. Combining fuzzy theory and gray theory, the correlation degree of each vulnerability indicator can be calculated and targeted improvements carried out. In order to verify the effectiveness of the proposed method, we use Kendall's tau coefficient to measure the agreement between different methods. The results show that the presented method has the highest consistency in the assessment compared with the other two methods.
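Kendall's tau, used here to compare the consistency of rankings produced by different assessment methods, can be computed directly from pairs of rank positions. A small stdlib-only sketch with hypothetical vulnerability rankings (the abstract does not give its actual data):

```python
from itertools import combinations

def kendall_tau(rank_x, rank_y):
    """Kendall's tau-a between two rankings of the same items (no ties)."""
    n = len(rank_x)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        s = (rank_x[i] - rank_x[j]) * (rank_y[i] - rank_y[j])
        if s > 0:
            concordant += 1      # pair ordered the same way in both rankings
        elif s < 0:
            discordant += 1      # pair ordered oppositely
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical vulnerability rankings of five supply-chain failure modes
# produced by two assessment methods (1 = most vulnerable).
method_a = [1, 2, 3, 4, 5]
method_b = [1, 3, 2, 4, 5]
print(kendall_tau(method_a, method_b))
```

A value near +1 indicates the two methods rank the failure modes almost identically; near -1, in reverse order.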

An important modern method in analytical mechanics for finding integrals, called the field method, is used to study the solution of first-order differential equations. First, by introducing an intermediate variable, a more complicated first-order differential equation can be expressed as two simpler first-order differential equations; the field method of analytical mechanics is then introduced to solve these two equations. The conclusion shows that the field method of analytical mechanics can be fully used to find the solutions of a first-order differential equation, thus providing a new method for finding such solutions.

We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation between the momentum and coordinate parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.

The overall objective of this research was to contribute data and methods to support the future development of new emissions scenarios for integrated assessment of climate change. Specifically, this research had two main objectives: 1. Use historical data on economic growth and energy efficiency changes, and develop probability density functions (PDFs) for the appropriate parameters for two or three commonly used integrated assessment models. 2. Using the parameter distributions developed through the first task and previous work, we will develop methods of designing multi-gas emission scenarios that usefully span the joint uncertainty space in a small number of scenarios. Results on the autonomous energy efficiency improvement (AEEI) parameter are summarized, an uncertainty analysis of elasticities of substitution is described, and the probabilistic emissions scenario approach is presented.
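One simple way to make a small number of scenarios span a parameter's uncertainty range is to evaluate the fitted PDF at fixed quantiles. Below is a hedged sketch assuming a lognormal distribution for the AEEI rate with illustrative parameters (median 1.0 %/yr, log-standard deviation 0.3); these are not the study's fitted values, and the study's joint multi-gas design is far richer than this one-parameter example.

```python
import math

# Pick scenario values of an uncertain parameter at fixed quantiles of an
# assumed lognormal distribution (illustrative parameters, not fitted PDFs).

def lognormal_quantile(median, log_sd, q):
    """Inverse CDF of a lognormal, via bisection on the normal CDF (erf)."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < q:
            lo = mid
        else:
            hi = mid
    z = (lo + hi) / 2.0                 # standard normal quantile of q
    return median * math.exp(log_sd * z)

quantiles = [0.05, 0.5, 0.95]           # low / central / high scenarios
scenarios = [round(lognormal_quantile(1.0, 0.3, q), 3) for q in quantiles]
print(scenarios)
```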

…is fragmented both conceptually and methodologically. Findings suggest that the methods applied in entrepreneurship education research cluster in two groups: 1. quantitative studies of the extent and effect of entrepreneurship education, and 2. qualitative single case studies of different courses and programmes. … It integrates qualitative and quantitative techniques, the use of research teams consisting of insiders (teachers studying their own teaching) and outsiders (research collaborators studying the education), as well as multiple types of data. To gain both in-depth and analytically generalizable studies … a variety of helpful methods, explore the potential relation between insiders and outsiders in the research process, and discuss how different types of data can be combined. The integrated framework urges researchers to extend investments in methodological efforts and to enhance the in-depth understanding…

Increasingly, organisations are using a Service-Oriented Architecture (SOA) as an approach to Enterprise Application Integration (EAI), which is required for the automation of business processes. This paper presents an architecture development process which guides the transition from business models to a service-based software architecture. The process is supported by business reference models and patterns. Firstly, the business process models are enhanced with domain model elements, applicat...

The purpose of Enterprise Application Integration (EAI) is to enable interoperability between two or more enterprise software systems. These systems can be, for example, an Enterprise Resource Planning (ERP) system, an Enterprise Asset Management (EAM) system or a Condition Monitoring system. The traditional EAI approach, based on point-to-point connections, is expensive and vendor specific, with limited modules and restricted interoperability with other ERPs and applications. To overcome these drawbacks, Web-service-based EAI has emerged. It allows integration without point-to-point linking and at lower cost. Many Web-service-based EAI approaches use ORACLE, SAP, PeopleSoft, WebSphere, SIEBEL, etc. as the system integration platform. The approach still has the restriction that only predefined clients can access the services: clients must know exactly the protocol for calling the services, and without this access information they can never obtain the services. This is because these Web services are based on syntactic service descriptions. In this paper, a semantics-based EAI approach that allows uninformed clients to access the services is introduced. The semantic EAI is designed with Web services that have semantic service descriptions. The Semantic Web Services (SWS) are described in the Web Ontology Language for Services (OWL-S), a semantic service ontology language, and advertised in Universal Description, Discovery and Integration (UDDI). Clients find desired services through the UDDI and obtain services from service providers through the Web Service Description Language (WSDL).

Full Text Available This paper deals with the application of acoustic emission (AE), a non-destructive testing method that currently has extensive application. The method is used for detecting internal defects of materials, and it has high potential for further research and development extending its use even to the field of process engineering, beginning with detailed acoustic emission monitoring in laboratory conditions under external stimuli. The aim of the project is to apply acoustic emission to recording the activity of bees in different seasons. The mission is to gain a new perspective on the behavior of colonies by means of acoustic emission, which captures sound propagation in a material. Vibration is an integral part of communication in the bee community. Sensing colonies with the support of this method is used for understanding the biological behavior of colonies in response to stimuli, clutches, colony development, etc. Simulated conditions supported by an acoustic emission monitoring system illustrate colony activity. The collected information will be used to give a comprehensive view of the life cycle and behavior of honey bees (Apis mellifera). Information about the activities of bees gives a comprehensive perspective on the use of acoustic emission in the field of biological research.

In this paper an overview and comparison of the basic concepts and methods behind different system integration implementations is given, including the DHE (Distributed Healthcare Environment), which is based on the coming Healthcare Information Systems Architecture pre-standard HISA, developed by CEN TC251. This standard and the DHE not only provide highly relevant standards, but also provide an efficient and well-structured platform for healthcare IT systems.

We develop a method, based on Darboux's and Liouville's works, to find first integrals and/or invariant manifolds for a physically relevant class of dynamical systems, without making any assumption on the forms of these elements. We apply it to three dynamical systems: Lotka–Volterra, Lorenz, and Rikitake. Copyright 1996 American Institute of Physics

In statistics, the Kalman filter is a mathematical method that uses a series of measurements observed over time, containing random variations and other inaccuracies, to produce estimates that tend to be closer to the true unknown values than estimates based on a single measurement alone. This Brief offers developments on Kalman filtering subject to general linear constraints. There are essentially three types of contributions: new proofs for results already established; new results within the subject; and applications in investment analysis and macroeconomics, where th...
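A standard technique in the constrained-Kalman-filtering literature is estimate projection: run the ordinary filter, then project the updated estimate onto the constraint surface D x = d. A minimal sketch for a single linear constraint with hypothetical numbers follows; it shows only the projection step, not the full filter, and uses the identity weighting matrix for simplicity.

```python
# Sketch of the estimate-projection approach to linearly constrained Kalman
# filtering: after a standard update gives the estimate x, enforce D x = d
# by projecting x onto the constraint surface (single-row D, W = I).

def project_estimate(x, D, d):
    """Project x onto {z : D z = d} in the Euclidean metric.
    With a single constraint row D, the correction has a closed form."""
    Dx = sum(di * xi for di, xi in zip(D, x))
    DDt = sum(di * di for di in D)
    lam = (Dx - d) / DDt
    return [xi - lam * di for xi, di in zip(x, D)]

# Unconstrained estimate of a 2-state system; suppose theory says the two
# states must sum to 1 (a purely hypothetical constraint).
x_hat = [0.7, 0.5]
D, d = [1.0, 1.0], 1.0
x_c = project_estimate(x_hat, D, d)
print(x_c)          # constrained estimate
print(sum(x_c))     # should satisfy D x = d up to rounding
```

The projected estimate is the closest point to the unconstrained one that satisfies the constraint; weighting the projection by the inverse error covariance instead of the identity gives the minimum-variance variant.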

The exergy method makes it possible to detect and quantify the possibilities of improving thermal and chemical processes and systems. The introduction of the concept of "thermo-ecological cost" (cumulative consumption of non-renewable natural exergy resources) has opened up broad possibilities for the application of exergy in ecology. This book contains a short presentation of the basic principles of exergy analysis and discusses new achievements in the field over the last 15 years. One of the most important issues considered by the distinguished author is the economy of non-renewable natural exergy.

Full Text Available The paper presents the application of a hybrid method (blended learning - linking traditional education with on-line education) to teach selected problems of mathematical statistics. This includes teaching the application of mathematical statistics to evaluate laboratory experimental results. An on-line statistics course was developed to form an integral part of the module 'methods of statistical evaluation of experimental results'. The course complies with the principles outlined in the Polish National Framework of Qualifications with respect to the scope of knowledge, skills and competencies that students should have acquired at course completion. The paper presents the structure of the course and the educational content provided through multimedia lessons made accessible on the Moodle platform. Test results of students following courses taught with the traditional method and courses taught with the hybrid method were compared and discussed to evaluate the effectiveness of the hybrid method of teaching relative to the traditional method.

''Total System Performance Assessment-License Application (TSPA-LA) Methods and Approach'' provides the top-level method and approach for conducting the TSPA-LA model development and analyses. The method and approach is responsive to the criteria set forth in Total System Performance Assessment Integration (TSPAI) Key Technical Issue (KTI) agreements, the ''Yucca Mountain Review Plan'' (CNWRA 2002 [158449]), and 10 CFR Part 63. This introductory section provides an overview of the TSPA-LA, the projected TSPA-LA documentation structure, and the goals of the document. It also provides a brief discussion of the regulatory framework, the approach to risk management of the development and analysis of the model, and the overall organization of the document. The section closes with some important conventions that are utilized in this document.

''Total System Performance Assessment-License Application (TSPA-LA) Methods and Approach'' provides the top-level method and approach for conducting the TSPA-LA model development and analyses. The method and approach is responsive to the criteria set forth in Total System Performance Assessment Integration (TSPAI) Key Technical Issues (KTIs) identified in agreements with the U.S. Nuclear Regulatory Commission, the ''Yucca Mountain Review Plan'' (YMRP), ''Final Report'' (NRC 2003 [163274]), and the NRC final rule 10 CFR Part 63 (NRC 2002 [156605]). This introductory section provides an overview of the TSPA-LA, the projected TSPA-LA documentation structure, and the goals of the document. It also provides a brief discussion of the regulatory framework, the approach to risk management of the development and analysis of the model, and the overall organization of the document. The section closes with some important conventions that are used in this document.

Structural reliability analysis under uncertainty receives wide attention from engineers and scholars because it reflects the structural characteristics and the actual bearing conditions. The direct integration method, which starts from the definition of reliability, is easy to understand, but mathematical difficulties remain in the calculation of the multiple integrals. Therefore, a dual-neural-network method is proposed for calculating multiple integrals in this paper. The dual neural network consists of two neural networks: neural network A is used to learn the integrand, and neural network B is used to simulate the original (antiderivative) function. According to the derivative relationship between the network output and the network input, neural network B is derived from neural network A. On this basis, a normalized performance function is employed in the proposed method to overcome the difficulty of the multiple integration and to improve the accuracy of the reliability calculation. Comparisons between the proposed method and the Monte Carlo simulation method, the Hasofer-Lind method, and the mean-value first-order second-moment method demonstrate that the proposed method is an efficient and accurate method for structural reliability problems.
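The direct integration method starts from the defining integral P_f = ∫_{g(x)<0} f_X(x) dx. For a toy linear limit state with independent normal variables this integral has a closed form, which makes a convenient check against Monte Carlo simulation. The sketch below uses that closed form, not the paper's dual-neural-network integrator, and all distribution parameters are illustrative.

```python
import math, random

# Toy reliability problem: limit state g = R - S with independent normals
# R ~ N(5, 0.5^2) (resistance) and S ~ N(3, 1.0^2) (load).  The failure
# probability is Phi(-beta) with beta = (mu_R - mu_S) / sqrt(sd_R^2 + sd_S^2),
# compared against a plain Monte Carlo estimate (illustrative numbers only).

mu_r, sd_r = 5.0, 0.5
mu_s, sd_s = 3.0, 1.0
beta = (mu_r - mu_s) / math.hypot(sd_r, sd_s)    # reliability index
pf_exact = 0.5 * math.erfc(beta / math.sqrt(2))  # Phi(-beta)

random.seed(1)
n = 200_000
fails = sum(random.gauss(mu_r, sd_r) - random.gauss(mu_s, sd_s) < 0
            for _ in range(n))
pf_mc = fails / n
print(beta, pf_exact, pf_mc)
```

For nonlinear limit states and non-normal variables no closed form exists, which is exactly where numerical schemes for the multiple integral (or surrogate models such as the paper's dual network) come in.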

Psychodrama as an action method of group psychotherapy is indicated in the treatment of different mental and behavioral disorders. Evaluation studies show very good results for the efficiency of psychodrama treatment, especially in work with the adolescent population in crisis situations. Psychodrama, as an action method of group psychotherapy with a large repertoire of techniques, enables adolescents to achieve better integration of the self. In the first pa...

Highlights: • Short overview of the models included in the ASTEC MCCI module. • MEDICIS/CPA coupled calculations for a generic CANDU6 reactor. • Two cases taking into account different pool/concrete interface models. - Abstract: In case of a hypothetical severe accident in a nuclear power plant, the corium consisting of the molten reactor core and internal structures may flow onto the concrete floor of containment building. This would cause an interaction between the molten corium and the concrete (MCCI), in which the heat transfer from the hot melt to the concrete would cause the decomposition and the ablation of the concrete. The potential hazard of this interaction is the loss of integrity of the containment building and the release of fission products into the environment due to the possibility of a concrete foundation melt-through or containment over-pressurization by the gases produced from the decomposition of the concrete or by the inflammation of combustible gases. In the safety assessment of nuclear power plants, it is necessary to know the consequences of such a phenomenon. The paper presents an example of application of the ASTECv2 code to a generic CANDU6 reactor. This concerns the thermal-hydraulic behaviour of the containment during molten core–concrete interaction in the reactor vault. The calculations were carried out with the help of the MEDICIS MCCI module and the CPA containment module of ASTEC code coupled through a specific prediction–correction method, which consists in describing the heat exchanges with the vault walls and partially absorbent gases. Moreover, the heat conduction inside the vault walls is described. Two cases are presented in this paper taking into account two different heat transfer models at the pool/concrete interface and siliceous concrete. The corium pool configuration corresponds to a homogeneous configuration with a detailed description of the upper crust.

Flexible photonic integrated circuit technology is an emerging field expanding the usage possibilities of photonics, particularly in sensor applications, by enabling the realization of conformable devices and introduction of new alternative production methods. Here, we demonstrate that disposable polymeric photonic integrated circuit devices can be produced in lengths of hundreds of meters by ultra-high volume roll-to-roll methods on a flexible carrier. Attenuation properties of hundreds of individual devices were measured confirming that waveguides with good and repeatable performance were fabricated. We also demonstrate the applicability of the devices for the evanescent wave sensing of ambient refractive index. The production of integrated photonic devices using ultra-high volume fabrication, in a similar manner as paper is produced, may inherently expand methods of manufacturing low-cost disposable photonic integrated circuits for a wide range of sensor applications.

The development of non-intrusive inspection methods for contraband consisting primarily of carbon, nitrogen, oxygen, and hydrogen requires the use of fast neutrons. While most elements can be sufficiently well detected by the thermal neutron capture process, some important ones, e.g., carbon and in particular oxygen, cannot be detected by this process. Fortunately, fast neutrons, with energies above the threshold for inelastic scattering, stimulate relatively strong and specific gamma-ray lines from these elements. The main lines are: 6.13 MeV for O, 4.43 MeV for C, and 5.11, 2.31 and 1.64 MeV for N. Accelerator-generated neutrons in the energy range of 7 to 15 MeV are being considered as interrogating radiation in a variety of non-intrusive inspection systems for contraband, from explosives to drugs and from coal to smuggled, dutiable goods. In some applications, mostly for inspection of small items such as luggage, the decision process involves rudimentary imaging, akin to emission tomography, to obtain the localized concentration of various elements. This technique is called FNA (Fast Neutron Analysis). While this approach offers improvements over TNA (Thermal Neutron Analysis), it is not applicable to large objects such as shipping containers and trucks. For these challenging applications, a collimated beam of neutrons is rastered along the height of the moving object. In addition, the neutrons are generated in very narrow nanosecond pulses. The point of their interaction inside the object is determined by the time-of-flight (TOF) method, that is, by measuring the time elapsed from the neutron generation to the detection of the stimulated gamma rays. This technique, called PFNA (Pulsed Fast Neutron Analysis), thus directly provides the elemental, and by inference the chemical, composition of the material at every volume element (voxel) of the object. The various neutron-based techniques are briefly described below. ((orig.))
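The TOF localization in PFNA reduces to simple kinematics: the neutron's speed follows from its kinetic energy, and the measured delay between the nanosecond source pulse and the gamma detection gives the depth of the interacting voxel. A hedged sketch with illustrative numbers follows; the return flight of the gamma ray and beam geometry corrections are ignored for simplicity.

```python
import math

# Time-of-flight localization sketch: relativistic speed of a neutron of
# known kinetic energy, then depth of the interaction voxel from the measured
# delay (gamma transit back to the detector ignored; illustrative numbers).

M_N = 939.565           # neutron rest mass energy, MeV
C = 2.998e8             # speed of light, m/s

def neutron_speed(e_kin_mev):
    """Relativistic speed of a neutron with the given kinetic energy."""
    gamma = 1.0 + e_kin_mev / M_N
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return beta * C

v = neutron_speed(14.0)             # roughly 5e7 m/s for a 14 MeV neutron
t_ns = 40.0                         # measured delay, nanoseconds
depth_m = v * t_ns * 1e-9           # distance travelled into the object
print(round(v / 1e7, 2), round(depth_m, 2))
```

The nanosecond pulse width matters because at these speeds a few nanoseconds of timing uncertainty already corresponds to of order ten centimetres of depth.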

Full Text Available This study assesses the application of the Cocktail method to the classification of large vegetation databases. For this purpose, a Buxus hyrcana dataset consisting of 442 relevés with 89 species was used, together with modified TWINSPAN. To run the Cocktail method, a preliminary classification was first produced with modified TWINSPAN; by performing phi analysis on the resulting groups, five species with the highest fidelity values were selected. Sociological species groups were then formed by examining the co-occurrence of these five species with the other species in the database. Twenty-one plant communities, belonging to 6 variants, 17 subassociations, 11 associations, 4 alliances, 1 order and 1 class, were recognized by assigning 379 relevés to the sociological species groups using logical formulas. The 63 relevés that were not assigned to any sociological species group by the logical formulas were assigned, using the FPFI index, to the group with the highest index value. Given the 91% agreement between the Braun-Blanquet classification and the Cocktail classification, we suggest the Cocktail method to vegetation scientists as an efficient alternative to the Braun-Blanquet method for classifying large vegetation databases.
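The fidelity value used to pick diagnostic species is typically the phi coefficient of association between a species and a group of relevés, computed from a 2×2 occurrence table. A sketch with hypothetical occurrence counts follows (the exact fidelity measure used in a given study may differ):

```python
import math

# Phi fidelity coefficient of a species for a relevé group, as used when
# building Cocktail sociological species groups.  Counts are hypothetical.

def phi_fidelity(n_in, N_group, n_all, N_all):
    """n_in:  occurrences of the species inside the target group
    N_group: number of relevés in the target group
    n_all:   occurrences of the species in the whole dataset
    N_all:   total number of relevés"""
    num = N_all * n_in - n_all * N_group
    den = math.sqrt(n_all * N_group * (N_all - n_all) * (N_all - N_group))
    return num / den

# Hypothetical species present in 30 of 40 target relevés but only
# 50 of 442 relevés overall: strongly concentrated in the group.
print(round(phi_fidelity(30, 40, 50, 442), 3))
```

Phi ranges from -1 to +1; values near +1 flag species concentrated in the group, which is what makes them useful in the logical formulas that define the sociological species groups.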

Recent decades have seen rapid advances in automatization processes, supported by modern machines and computers. The result is significant increases in system complexity and state changes, information sources, the need for faster data handling and the integration of environmental influences. Intelligent systems, equipped with a taxonomy of data-driven system identification and machine learning algorithms, can handle these problems partially. Conventional learning algorithms in a batch off-line setting fail whenever dynamic changes of the process appear due to non-stationary environments and external influences. Learning in Non-Stationary Environments: Methods and Applications offers a wide-ranging, comprehensive review of recent developments and important methodologies in the field. The coverage focuses on dynamic learning in unsupervised problems, dynamic learning in supervised classification and dynamic learning in supervised regression problems. A later section is dedicated to applications in which dyna...

The controls group of the SPS and LEP accelerators at CERN, Geneva, uses many different fieldbuses in the controls infrastructure, such as 1553, BITBUS, GPIB, RS232, JBUS, etc. A software package (SL-EQUIP) has been developed to give end users a standardized application program interface (API) to access any equipment connected to any fieldbus. This interface has now been integrated into LabView. We can offer a powerful graphical package, running on HP-UX workstations, which treats data from heterogeneous equipment using the great flexibility of LabView. This paper will present SL-EQUIP and LabView, and will then describe some applications using these tools.

Three different methods for pile integrity testing are compared on a cylindrical homogeneous polyamide specimen: low strain pile integrity testing, multichannel pile integrity testing, and testing with a shaker system. Since low strain pile integrity testing is a well-established and standardized method, its results are used as a reference for the other two methods.

The paper describes an LMFBR nuclear design methodology which has been strongly influenced by the availability of integral data, by the expansion of the differential nuclear data base, by improvements in large nuclear design computer codes, and by the specific reactor under consideration. The accuracy of the nuclear data base has been improved as the result of detailed differential measurements as well as extensive integral testing in the ZPR and ZPPR criticals. Due to the increased interest in radial parfait designs, the applicability of the design data and methods to the analysis of heterogeneous LMFBR systems has been explored. The ability of the design data and methods to predict integral parameters in ZPPR is also discussed.

Many applications in materials science involve surface diffusion of elastically stressed solids. Study of singularity formation and long-time behavior of such solid surfaces requires accurate simulations in both space and time. Here we present a high-order boundary integral method for an elastically stressed solid with axi-symmetry due to surface diffusion. In this method, the boundary integrals for isotropic elasticity in axi-symmetric geometry are approximated through modified alternating quadratures along with an extrapolation technique, leading to an arbitrarily high-order quadrature; in addition, a high-order (temporal) integration factor method, based on an explicit representation of the mean curvature, is used to reduce the stability constraint on the time-step. To apply this method to a periodic (in the axial direction) and axi-symmetric elastically stressed cylinder, we also present a fast and accurate summation method for the periodic Green's functions of isotropic elasticity. Using the high-order boundary integral method, we demonstrate that in the absence of elasticity the cylinder surface pinches in finite time at the axis of symmetry, and the universal cone angle of the pinching is found to be consistent with previous studies based on a self-similar assumption. In the presence of elastic stress, we show that a finite-time geometrical singularity occurs well before the cylindrical solid collapses onto the axis of symmetry, and the angle of the corner singularity on the cylinder surface is also estimated.
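
The integration factor idea invoked above is generic: the stiff linear part of the evolution is absorbed into an exponential factor and treated exactly, so the explicit step is constrained only by the mild remaining terms. The sketch below is not the authors' boundary-integral scheme; it merely illustrates the principle on a hypothetical stiff scalar ODE u' = λu + sin t:

```python
import math

def explicit_euler(lam, nonlin, u0, h, steps):
    # Plain explicit Euler: stable only if |1 + lam*h| < 1.
    u, t = u0, 0.0
    for _ in range(steps):
        u = u + h * (lam * u + nonlin(t))
        t += h
    return u

def if_euler(lam, nonlin, u0, h, steps):
    # Integrating-factor Euler: the stiff linear part is advanced exactly
    # via exp(lam*h); only the non-stiff term is treated explicitly.
    E = math.exp(lam * h)
    u, t = u0, 0.0
    for _ in range(steps):
        u = E * (u + h * nonlin(t))
        t += h
    return u

lam = -1000.0               # stiff linear decay rate
f = lambda t: math.sin(t)   # smooth non-stiff forcing
h, steps = 0.01, 200        # lam*h = -10, far beyond explicit Euler's limit

print(abs(explicit_euler(lam, f, 1.0, h, steps)))  # blows up
print(abs(if_euler(lam, f, 1.0, h, steps)))        # stays bounded
```

The same mechanism, applied in the temporal direction with the mean curvature represented explicitly, is what relaxes the severe fourth-order time-step restriction of surface diffusion.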

The perception of self-motion is crucial for navigation, spatial orientation and motor control. In particular, estimation of one's direction of translation, or heading, relies heavily on multisensory integration in most natural situations. Visual and nonvisual (e.g., vestibular) information can be used to judge heading, but each modality alone is often insufficient for accurate performance. It is not surprising, then, that visual and vestibular signals converge frequently in the nervous system, and that these signals interact in powerful ways at the level of behavior and perception. Early behavioral studies of visual-vestibular interactions consisted mainly of descriptive accounts of perceptual illusions and qualitative estimation tasks, often with conflicting results. In contrast, cue integration research in other modalities has benefited from the application of rigorous psychophysical techniques, guided by normative models that rest on the foundation of ideal-observer analysis and Bayesian decision theory. Here we review recent experiments that have attempted to harness these so-called optimal cue integration models for the study of self-motion perception. Some of these studies used nonhuman primate subjects, enabling direct comparisons between behavioral performance and simultaneously recorded neuronal activity. The results indicate that humans and monkeys can integrate visual and vestibular heading cues in a manner consistent with optimal integration theory, and that single neurons in the dorsal medial superior temporal area show striking correlates of the behavioral effects. This line of research and other applications of normative cue combination models should continue to shed light on mechanisms of self-motion perception and the neuronal basis of multisensory integration.
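
The normative prediction at the core of these optimal cue integration models is that heading estimates combine with weights proportional to each cue's reliability (inverse variance), so the combined variance is lower than that of either cue alone. A minimal illustration (the numbers are hypothetical, not from any of the reviewed experiments):

```python
def integrate_cues(mu_vis, var_vis, mu_vest, var_vest):
    """Maximum-likelihood (inverse-variance weighted) cue combination."""
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_vest)
    mu = w_vis * mu_vis + (1.0 - w_vis) * mu_vest
    var = 1.0 / (1.0 / var_vis + 1.0 / var_vest)
    return mu, var

# Heading estimates (degrees): the visual cue is twice as reliable
# as the vestibular cue, so the combined estimate lies nearer to it.
mu, var = integrate_cues(10.0, 4.0, 16.0, 8.0)
print(mu, var)
```

The hallmark experimental signature is the variance reduction: the combined estimate is more precise than either single-cue estimate, which is exactly what the reviewed psychophysical and neuronal data test.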

Adopting the most appropriate technology for developing applications on an integrated software system for enterprises may result in great savings in both cost and hours of work. This paper proposes a research study for determining a hierarchy among three SAP (System Applications and Products in Data Processing) technologies. The technologies Web Dynpro (WD), Floorplan Manager (FPM) and CRM WebClient UI (CRM WCUI) are evaluated against multiple criteria in terms of the performance obtained by implementing the same web business application. To establish the hierarchy, a multi-criteria analysis model that combines the AHP (Analytic Hierarchy Process) and TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) methods was proposed. This model was built with the help of the SuperDecision software, which is based on the AHP method and determines the weights for the selected sets of criteria. The TOPSIS method was used to obtain the final ranking and the technology hierarchy.
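
TOPSIS ranks alternatives by their closeness to an ideal solution: normalize the decision matrix, weight it (e.g., with AHP-derived weights), then score each alternative by its distances to the ideal and anti-ideal points. A hedged sketch with hypothetical scores, not the paper's actual evaluation data:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.

    matrix  -- rows = alternatives, columns = criteria scores
    weights -- criteria weights (e.g. from AHP), summing to 1
    benefit -- per criterion: True if higher is better
    """
    ncols = len(weights)
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncols)]
    v = [[row[j] / norms[j] * weights[j] for j in range(ncols)] for row in matrix]
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((a - b) ** 2 for a, b in zip(row, ideal)))
        d_neg = math.sqrt(sum((a - b) ** 2 for a, b in zip(row, anti)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Hypothetical scores for three technologies on (performance, dev effort):
# higher performance is better, lower effort is better.
m = [[80.0, 30.0], [70.0, 20.0], [60.0, 40.0]]
print(topsis(m, [0.6, 0.4], [True, False]))
```

The alternative with the highest closeness score heads the hierarchy; with these hypothetical numbers the second technology wins because its lower effort outweighs its slightly lower performance.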

There is a growing trend in engineering to develop methods for structural integrity monitoring and for characterization of the in-service mechanical behaviour of components. The fast growth in recent years of image processing techniques and image-based sensing for experimental mechanics has brought about a paradigm change in phenomena sensing. Hence, several widely applicable optical approaches are playing a significant role in support of experiment. The current review describes advanced image-based methods for structural integrity monitoring, focusing on Digital Image Correlation (DIC), Thermoelastic Stress Analysis (TSA), Electronic Speckle Pattern Interferometry (ESPI) and Speckle Pattern Shearing Interferometry (Shearography). These non-contact, full-field techniques rely on intensive image processing to measure mechanical behaviour, and they evolve even as reviews such as this are being written, which justifies a special effort to keep abreast of this progress.

This work is a quantitative analysis of the advantages of the Bulirsch-Stoer (1966) method, demonstrating that this method is certainly worth considering when working with small N dynamical systems. The results, qualitatively suspected by many users, are quantitatively confirmed as follows: (1) the Bulirsch-Stoer extrapolation method is very fast and moderately accurate; (2) regularization of the equations of motion stabilizes the error behavior of the method and is, of course, essential during close approaches; and (3) when applicable, a manifold-correction algorithm reduces numerical errors to the limits of machine accuracy. In addition, for the specific case of the restricted three-body problem, even a small eccentricity for the orbit of the primaries drastically affects the accuracy of integrations, whether regularized or not; the circular restricted problem integrates much more accurately.
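
The speed of the Bulirsch-Stoer approach comes from combining the modified midpoint rule, whose error expansion contains only even powers of the substep size, with Richardson extrapolation. A minimal sketch of that core idea (a single extrapolation stage over n and 2n substeps; the full method extrapolates a whole sequence of substep counts and adapts the step size):

```python
import math

def modified_midpoint(f, t, y, H, n):
    # Gragg's modified midpoint rule: n substeps across the big step H.
    h = H / n
    z0, z1 = y, y + h * f(t, y)
    for m in range(1, n):
        z0, z1 = z1, z0 + 2.0 * h * f(t + m * h, z1)
    return 0.5 * (z0 + z1 + h * f(t + H, z1))

def bs_step(f, t, y, H, n=4):
    # Richardson extrapolation: the error expansion is in h^2, so
    # combining the n- and 2n-substep results cancels the leading term.
    a1 = modified_midpoint(f, t, y, H, n)
    a2 = modified_midpoint(f, t, y, H, 2 * n)
    return (4.0 * a2 - a1) / 3.0

f = lambda t, y: y            # y' = y, y(0) = 1  =>  y(1) = e
approx = bs_step(f, 0.0, 1.0, 1.0)
print(abs(approx - math.e))   # far smaller than either raw estimate's error
```

With only 12 function evaluations over the whole unit step, the extrapolated value is already an order of magnitude more accurate than the finer of the two raw estimates, which is why the method is "very fast and moderately accurate" for smooth few-body problems.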

IEC 61508 requires safety integrity verification for safety-related systems as a necessary procedure in the safety life cycle. PFDavg must be calculated to verify the safety integrity level (SIL). Since IEC 61508-6 does not give detailed explanations of the definitions and PFDavg calculations for its examples, it is difficult for reliability or safety engineers to understand when they use the standard as guidance in practice. A method using reliability block diagrams (RBDs) is investigated in this study in order to provide a clear and feasible way of calculating PFDavg and to help those who take IEC 61508-6 as their guidance. The method first finds the mean down times (MDTs) of both the channel and the voted group, and then PFDavg. The calculated results for various voted groups are compared with those in IEC 61508-6 and in Ref. [Zhang T, Long W, Sato Y. Availability of systems with self-diagnostic components - applying Markov model to IEC 61508-6. Reliab Eng Syst Saf 2003;80(2):133-41]. An interesting outcome emerges from the comparison: although differences in the MDT of voted groups exist between IEC 61508-6 and this paper, the PFDavg values of the voted groups are comparatively close. With its detailed description, the RBD method presented can be applied to quantitative SIL verification, in a manner similar to the method in IEC 61508-6.
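
For orientation, the simplest textbook approximations behind such PFDavg calculations can be sketched as follows. These are heavily simplified formulas (no diagnostic coverage, no common-cause factor, no repair term in the 1oo2 case), not the paper's RBD/MDT derivation or the full IEC 61508-6 equations, and the rates and intervals are hypothetical:

```python
def pfd_1oo1(lam_du, T1, mrt=0.0):
    """Average PFD of a single channel: a dangerous undetected failure
    stays hidden for T1/2 on average (half the proof-test interval),
    plus the mean repair time."""
    return lam_du * (T1 / 2.0 + mrt)

def pfd_1oo2(lam_du, T1):
    """Independent 1oo2 voted group, proof testing only: both channels
    must be failed, giving the classic (lam_du * T1)^2 / 3 approximation."""
    return (lam_du * T1) ** 2 / 3.0

lam = 2e-6     # dangerous undetected failure rate, per hour
T1 = 8760.0    # one-year proof-test interval, hours
print(pfd_1oo1(lam, T1))   # ~8.8e-3 -> SIL 2 band (1e-3 .. 1e-2)
print(pfd_1oo2(lam, T1))   # ~1.0e-4 -> SIL 3 band (1e-4 .. 1e-3)
```

The example shows why redundancy matters for SIL verification: the same channel moves up one SIL band when voted 1oo2, before common-cause failures erode part of that gain.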

Full Text Available Increasing the degree of integration of hardware components imposes more stringent requirements on reducing the concentration of contaminants and oxidation stacking faults in the original silicon wafers, and on preserving this quality through the IC manufacturing process cycle. This makes the application of gettering highly relevant in modern microelectronic technology. The existing methods of gettering silicon wafers and the mechanisms by which they act are considered.

Simulating antennas around a conducting object is a challenging task in computational electromagnetics, which is concerned with the behaviour of electromagnetic fields. To analyze this model efficiently, an improved integral equation-fast Fourier transform (IE-FFT) algorithm is presented in this paper. The proposed scheme employs two Cartesian grids of different sizes and locations to enclose the antenna and the other object, respectively. On the one hand, the IE-FFT technique is used to store the matrix in a sparse form and to accelerate the matrix-vector multiplication for each sub-domain independently. On the other hand, the mutual interaction between sub-domains is taken as an additional exciting voltage in each matrix equation. By updating the integral equations several times, the whole electromagnetic system reaches a stable status. Finally, the validity of the presented method is verified through the analysis of typical antennas in the presence of a conducting object. (paper)
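
The FFT acceleration in IE-FFT-type schemes rests on the fact that interactions between points of a uniform Cartesian grid depend only on the index difference, giving a Toeplitz structure whose matrix-vector product can be embedded in a circulant matrix and evaluated in O(n log n). A one-dimensional sketch with a hypothetical kernel:

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply a Toeplitz matrix (first column c, first row r) by x in
    O(n log n) via circulant embedding -- the same structure that grid-based
    schemes such as IE-FFT exploit for their translation-invariant kernels."""
    n = len(c)
    col = np.concatenate([c, [0.0], r[:0:-1]])   # first column of the circulant
    pad = np.concatenate([x, np.zeros(n)])
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(pad))
    return y[:n].real

# Compare against the dense product for a small 1/(1 + |i-j|) kernel.
n = 8
idx = np.arange(n)
c = 1.0 / (1.0 + idx)      # first column: kernel values k(i, 0)
r = c.copy()               # symmetric kernel, so first row equals first column
T = 1.0 / (1.0 + np.abs(idx[:, None] - idx[None, :]))
x = np.random.default_rng(0).standard_normal(n)
print(np.max(np.abs(toeplitz_matvec(c, r, x) - T @ x)))   # machine-precision agreement
```

In two or three dimensions the same trick applies level by level, which is what reduces both memory and matrix-vector cost in the scheme described above.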

The objective of this document is to describe different measurement methods and, more particularly, to present software for processing the obtained results in order to avoid interpretation by the investigator. In the first part, the authors define the parameters of integral and differential linearity, outline their importance in measurements performed by spectrometry, and describe the use of these parameters. In the second part, they propose various methods for measuring these linearity parameters, report experimental applications of these methods, and compare the obtained results.
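
Differential linearity measures the deviation of each channel's width from the ideal width, while integral linearity accumulates those deviations along the scale. A hedged sketch of how both parameters could be computed from measured channel widths (the widths below are hypothetical; the document's own software and procedures are not detailed in the abstract):

```python
def linearity(widths):
    """Differential and integral nonlinearity from measured channel widths.

    DNL_i compares each channel's width with the ideal (mean) width;
    INL_k is the accumulated deviation of the channel edges up to k.
    """
    ideal = sum(widths) / len(widths)
    dnl = [w / ideal - 1.0 for w in widths]
    inl, acc = [], 0.0
    for d in dnl:
        acc += d
        inl.append(acc)
    return dnl, inl

# Hypothetical widths: channel 2 is 10% too wide, channel 3 is 10% too narrow.
dnl, inl = linearity([1.0, 1.0, 1.1, 0.9, 1.0])
print([round(d, 3) for d in dnl])  # → [0.0, 0.0, 0.1, -0.1, 0.0]
print([round(i, 3) for i in inl])  # → [0.0, 0.0, 0.1, 0.0, 0.0]
```

Note how the two adjacent deviations cancel in the integral figure while remaining visible in the differential one, which is precisely why both parameters matter in spectrometry.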

Between 2008 and 2010, our academic medical center transitioned to electronic provider documentation using a commercial electronic health record system. For attending physicians, one of the most frustrating aspects of this experience was the system's failure to support their existing electronic billing workflow. Because of poor system integration, it was difficult to verify the supporting documentation for each bill and impractical to track whether billable notes had corresponding charges. We developed and deployed in 2011 an integrated billing application called "iCharge" that streamlines clinicians' documentation and billing workflow, and simultaneously populates the inpatient problem list using billing diagnosis codes. Each month, over 550 physicians use iCharge to submit approximately 23,000 professional service charges for over 4,200 patients. On average, about 2.5 new problems are added to each patient's problem list. This paper describes the challenges and benefits of workflow integration across disparate applications and presents an example of innovative software development within a commercial EHR framework.

This document contains two units that examine integral transforms and series expansions. In the first module, the user is expected to learn how to use the unified method presented to obtain Laplace transforms, Fourier transforms, complex Fourier series, real Fourier series, and half-range sine series for given piecewise continuous functions. In…

This volume is an introductory level textbook for partial differential equations (PDE's) and suitable for a one-semester undergraduate level or two-semester graduate level course in PDE's or applied mathematics. Chapters One to Five are organized according to the equations and the basic PDE's are introduced in an easy to understand manner. They include the first-order equations and the three fundamental second-order equations, i.e. the heat, wave and Laplace equations. Through these equations we learn the types of problems, how we pose the problems, and the methods of solutions such as the separation of variables and the method of characteristics. The modeling aspects are explained as well. The methods introduced in earlier chapters are developed further in Chapters Six to Twelve. They include the Fourier series, the Fourier and the Laplace transforms, and the Green's functions. The equations in higher dimensions are also discussed in detail. This volume is application-oriented and rich in examples. Going thr...

An attempt is made to evaluate methods using radiotracers in streamflow measurements. The basic principles of the tracer method are explained and background information is given. Radiotracers used in stream discharge measurements are discussed, and measurements made by different research workers are described. Problems such as adsorption of the tracer and the mixing length are discussed, and the potential use of radioisotopes as tracers in routine stream-gauging work is evaluated. It is concluded that, at the present stage of development, radiotracer methods do not seem to be ready for routine use in stream-gauging work and can only be used in some special cases. For gamma-emitting radioisotopes there are problems related to safety, transport and injection which should be solved. Tritium, though a very attractive tracer in some respects, has the disadvantages of having a relatively long half-life and of disturbing the natural tritium levels in the region. Finally, an attempt is made to define the objectives of research in the field of application of radioisotopes in hydrometry. (author)
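
The basic principle behind tracer stream gauging is mass conservation: for a sudden (gulp) injection of tracer mass M, the discharge follows from the measured concentration-time curve at a well-mixed downstream section as Q = M / ∫(c - c_background) dt. A minimal sketch with hypothetical sampling data:

```python
def discharge_gulp(mass, times, conc, background=0.0):
    """Stream discharge from a sudden (gulp) tracer injection:
    Q = M / integral of (c(t) - background) dt, assuming complete mixing
    at the sampling section.  Trapezoidal rule over the sampled curve."""
    area = 0.0
    for k in range(1, len(times)):
        c0 = conc[k - 1] - background
        c1 = conc[k] - background
        area += 0.5 * (c0 + c1) * (times[k] - times[k - 1])
    return mass / area

# 10 g of tracer; concentration in g/m^3 sampled every 10 s downstream.
t = [0, 10, 20, 30, 40, 50]
c = [0.0, 0.02, 0.05, 0.02, 0.01, 0.0]
print(discharge_gulp(10.0, t, c))   # discharge in m^3/s
```

The mixing-length and adsorption problems discussed above enter exactly here: incomplete mixing or tracer loss biases the concentration integral and hence the computed discharge.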

Alternative solution strategies to the parallel block Jacobi (PBJ) method for the solution of the global problem with the integral transport matrix method (ITMM) operators have been designed and tested. The most straightforward improvement to the Jacobi iterative method is the Gauss-Seidel alternative. The parallel red-black Gauss-Seidel (PGS) algorithm can improve on the number of iterations and reduce work per iteration by applying an alternating red-black color-set to the sub-domains and assigning multiple sub-domains per processor. A parallel GMRES(m) method was implemented as an alternative to stationary iterations. Computational results show that the PGS method can improve on the PBJ method execution time by up to 10× when eight sub-domains per processor are used. However, compared to traditional source iterations with diffusion synthetic acceleration, it is still approximately an order of magnitude slower. The best-performing cases are optically thick because sub-domains decouple, yielding faster convergence. Further tests revealed that 64 sub-domains per processor was the best-performing level of sub-domain division. An acceleration technique that improves the convergence rate would greatly improve the ITMM. The GMRES(m) method with a diagonal block preconditioner consumes approximately the same time as the PBJ solver but could be improved by an as yet undeveloped, more efficient preconditioner. (author)
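
The red-black idea generalizes a classic trick: if the unknowns are colored so that each color depends only on the other, every half-sweep can update its whole color in parallel. A one-dimensional model-problem sketch (a Poisson equation rather than the transport operators of the paper):

```python
def redblack_gauss_seidel(f, n, sweeps):
    """Red-black Gauss-Seidel for -u'' = f on (0,1), u(0) = u(1) = 0.
    Points of one color depend only on points of the other color, so each
    half-sweep is embarrassingly parallel -- the same idea the PGS
    algorithm applies to colored sub-domains."""
    h = 1.0 / (n + 1)
    u = [0.0] * (n + 2)                 # includes the boundary zeros
    for _ in range(sweeps):
        for parity in (1, 0):           # red points, then black points
            for i in range(1 + parity, n + 1, 2):
                u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f(i * h))
    return u

# For -u'' = 1 the exact solution is u(x) = x(1-x)/2; check the midpoint.
u = redblack_gauss_seidel(lambda x: 1.0, 63, 4000)
print(abs(u[32] - 0.125))   # small after enough sweeps
```

As in the transport setting, convergence is governed by the coupling between colored unknowns, which is why the optically thick (weakly coupled) cases converge fastest.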

Networks constitute the backbone of complex systems, from the human brain to computer communications, transport infrastructures to online social systems and metabolic reactions to financial markets. Characterising their structure improves our understanding of the physical, biological, economic and social phenomena that shape our world. Rigorous and thorough, this textbook presents a detailed overview of the new theory and methods of network science. Covering algorithms for graph exploration, node ranking and network generation, among others, the book allows students to experiment with network models and real-world data sets, providing them with a deep understanding of the basics of network theory and its practical applications. Systems of growing complexity are examined in detail, challenging students to increase their level of skill. An engaging presentation of the important principles of network science makes this the perfect reference for researchers and undergraduate and graduate students in physics, ...

The present review is aimed at elucidating relatively new aspects of mucoadhesion/mucus interaction and related phenomena that emerged from a Mucoadhesion workshop held in Munster on 2–3 September 2015 as a satellite event of the ICCC 13th-EUCHIS 12th. After a brief outline of the new issues, the focus is on mucus description, purification, and mucus/mucin characterization, all steps that are pivotal to the understanding of mucus-related phenomena and to the choice of the correct mucosal model for in vitro and ex vivo experiments; alternative bio/mucomimetic materials are also presented. Then a selection of preparative techniques and testing methods is described (at the molecular as well as the micro- and macroscale) that may support the pharmaceutical development of mucus-interactive systems and assist formulators in the scale-up and industrialization steps. Recent applications of mucoadhesive systems...

The discovery of the radioactivity phenomenon occurred almost 100 years ago, in 1896, and constituted the basis for new perspectives in many disciplines, including the Earth sciences. The initial works in this field, during the first quarter of the century, established that the radioactive decay series of the long-lived Uranium-238, Uranium-235 and Thorium-232 contain radioactive isotopes of several elements which are physically and chemically different. The chemical differentiation of the Earth during its evolution has concentrated the major part of the radioactive materials in the crust. The application of radioactive disequilibria, which occur as a consequence of chemical and physical differences, has evolved quickly, and the uses of natural radioactive isotopes can be grouped under two major headings: geologic clocks and tracers. The applications cover a wide spectrum of geological, oceanographic, volcanic, hydrological, paleoclimatic and archaeological problems. In this paper, a description of the radioactivity phenomenon is presented, as well as the chemical and physical properties of the natural radioactive elements and the measurement methods; finally, some examples of the uses in chronology and as radioactive tracers are presented, with emphasis on some results obtained in Mexico. (Author)

The study was conducted to investigate whether introduction of histamine in enterosoluble capsules produced the same amount of urinary histamine metabolites as that found after application of histamine through a duodeno-jejunal tube, and, secondly, to examine whether a histamine-restrictive diet or a fast was necessary. Urinary MIAA in all other intervals did not differ significantly between the two challenge regimens. Fast (water only) and histamine-restrictive diet versus non-restrictive diet did not affect the urinary MIAA. MIAA was significantly higher overall during the first 24 h after challenge than in any other fraction. We conclude that oral administration of enterosoluble capsules is an easy and appropriate method for intestinal histamine challenge. Fast and histamine-restrictive diets are not necessary, but subjects should record unexpected responses in a food and symptom diary.

The South China Sea Ocean Data Center (SCSODC) was founded in 2010 in order to improve collecting and managing of ocean data of the South China Sea Institute of Oceanology (SCSIO). The mission of SCSODC is to ensure the long term scientific stewardship of ocean data, information and products – collected through research groups, monitoring stations and observation cruises – and to facilitate the efficient use and distribution to possible users. However, data sharing and applications were limited due to the characteristics of distribution and heterogeneity that made it difficult to integrate the data. To surmount those difficulties, the Data Sharing System has been developed by the SCSODC using the most appropriate information management and information technology. The Data Sharing System uses open standards and tools to promote the capability to integrate ocean data and to interact with other data portals or users and includes a full range of processes such as data discovery, evaluation and access combining C/S and B/S mode. It provides a visualized management interface for the data managers and a transparent and seamless data access and application environment for users. Users are allowed to access data using the client software and to access interactive visualization application interface via a web browser. The architecture, key technologies and functionality of the system are discussed briefly in this paper. It is shown that the system of SCSODC is able to implement web visualization sharing and seamless access to ocean data in a distributed and heterogeneous environment

Full text: A maximum entropy (MaxEnt) method for analysing crystallite size- and strain-induced x-ray profile broadening is presented. This method treats the problems of determining the specimen profile, crystallite size distribution, and strain distribution in a general way by considering them as inverse problems. A common difficulty faced by many experimenters is their inability to determine a well-conditioned solution of the integral equation, which preserves the positivity of the profile or distribution. We show that the MaxEnt method overcomes this problem, while also enabling a priori information, in the form of a model, to be introduced into it. Additionally, we demonstrate that the method is fully quantitative, in that uncertainties in the solution profile or solution distribution can be determined and used in subsequent calculations, including mean particle sizes and rms strain. An outline of the MaxEnt method is presented for the specific problems of determining the specimen profile and crystallite or strain distributions for the correspondingly broadened profiles. This approach offers an alternative to standard methods such as those of Williamson-Hall and Warren-Averbach. An application of the MaxEnt method is demonstrated in the analysis of alumina size-broadened diffraction data (from NIST, Gaithersburg). It is used to determine the specimen profile and column-length distribution of the scattering domains. Finally, these results are compared with the corresponding Williamson-Hall and Warren-Averbach analyses. Copyright (1999) Australian X-ray Analytical Association Inc
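
The key properties claimed for the MaxEnt approach, positivity of the solution and the ability to fold in a prior model m, can be illustrated schematically. The sketch below is not the authors' algorithm: it minimizes a chi-squared-plus-negative-entropy objective with a simple exponentiated-gradient step (which preserves positivity by construction) on a hypothetical one-dimensional deblurring problem:

```python
import numpy as np

def maxent_deconvolve(A, d, m, alpha=1e-3, eta=0.2, iters=1500):
    """Schematic maximum-entropy inversion: minimize
        Q(f) = 0.5 * ||A f - d||^2 + alpha * sum(f * log(f / m))
    with an exponentiated-gradient step.  The multiplicative update keeps
    every f_i strictly positive, and m enters as the prior model."""
    f = m.copy()
    for _ in range(iters):
        grad = A.T @ (A @ f - d) + alpha * (np.log(f / m) + 1.0)
        f = f * np.exp(-eta * grad)
    return f

# Hypothetical 1-D problem: a narrow "specimen profile" blurred by a
# Gaussian instrumental kernel (rows normalized to sum to one).
n = 40
x = np.linspace(-1.0, 1.0, n)
truth = np.exp(-50.0 * x**2)
A = np.exp(-80.0 * (x[:, None] - x[None, :])**2)
A /= A.sum(axis=1, keepdims=True)
d = A @ truth
m = np.full(n, truth.mean())          # flat prior model

f = maxent_deconvolve(A, d, m)
print(bool(np.all(f > 0)))                                    # positivity preserved
print(np.linalg.norm(A @ f - d) < np.linalg.norm(A @ m - d))  # improved data fit
```

Production MaxEnt codes use more sophisticated search directions and stopping criteria, but the essential contrast with Williamson-Hall or Warren-Averbach analysis is visible even here: the solution stays positive and the prior model regularizes the ill-conditioned inversion.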

Capillary electrophoresis is a technique for the separation and analysis of chemical compounds. Techniques adopted from microchip technology have led to recent developments of electrophoresis systems integrated on microchips. Microchip Capillary Electrophoresis (μCE) systems offer a series of advantages, such as easy integration for lab-on-a-chip applications, high performance, portability, speed, and minimal solvent and sample requirements. A new technological challenge aims at the development of economic, modular microchip capillary electrophoresis systems using separable and independent sensor units. In this project we worked on the development of an interchangeable amperometric sensor in order to provide a solution to electrode passivation and to facilitate the use of tailored sensors for specific analyte detection. Fluidic chips have been machined from cyclic olefin polymer pellets (Zeonor) using a micro-injection molding machine.

In micro-analytical chemistry and biology applications, optofluidic technology holds great promise for creating efficient lab-on-chip systems, where higher levels of integration of different stages on the same platform are constantly addressed. Therefore, in this work the possibility of integrating opto-microfluidic functionalities in lithium niobate (LiNbO3) crystals is presented. In particular, a T-junction droplet generator is directly engraved in a LiNbO3 substrate by means of a laser ablation process, and optical waveguides are realized in the same material by exploiting the titanium in-diffusion approach. The coupling of these two stages, as well as the realization of holographic gratings in the same substrate, will allow the creation of new compact optical sensor prototypes in which the optical properties of the droplet constituents can be monitored.

This paper reported on an exercise that was undertaken to integrate small-scale wind turbines into the design of an urban high-rise in Portland, Oregon. Wind behaviour in the urban environment is very complex, as the flow of wind over and around buildings often triggers multiple transitions of the air from laminar flow to turbulent. The study documented the process of moving beyond a simplistic approach to a truly informed application of building-integrated wind generation. The 4 key issues addressed in the study process were quantifying the geographical wind regime; predicting wind flow over the building; turbine selection; and pragmatics regarding the design of roof mounting to accommodate structural loads and mitigate vibration. The results suggested that the turbine array should produce in the range of only 1 per cent of the electrical load of the building. 13 refs., 11 figs.

Suitable for statisticians, mathematicians, actuaries, and students interested in the problems of insurance and analysis of lifetimes, Statistical Methods with Applications to Demography and Life Insurance presents contemporary statistical techniques for analyzing life distributions and life insurance problems. It not only contains traditional material but also incorporates new problems and techniques not discussed in existing actuarial literature. The book mainly focuses on the analysis of an individual life and describes statistical methods based on empirical and related processes. Coverage ranges from analyzing the tails of distributions of lifetimes to modeling population dynamics with migrations. To help readers understand the technical points, the text covers topics such as the Stieltjes, Wiener, and Itô integrals. It also introduces other themes of interest in demography, including mixtures of distributions, analysis of longevity and extreme value theory, and the age structure of a population. In addi...

A method is described for fabricating integrated semiconductor circuits and, more particularly, for the selective deposition of a conductor onto a substrate employing a chemical vapor deposition process. By way of example, tungsten can be selectively deposited onto a silicon substrate. At the onset of loss of selectivity of deposition of tungsten onto the silicon substrate, the deposition process is interrupted, and unwanted tungsten which has deposited on a mask layer on the silicon substrate can be removed employing a halogen etchant. Thereafter, a plurality of deposition/etch-back cycles can be carried out to achieve a predetermined thickness of tungsten.

Integration of heterogeneous systems is key to hospital information construction due to the complexity of the healthcare environment. Currently, during healthcare information system integration, project participants usually communicate via free-format documents, which impairs the efficiency and adaptability of integration. This paper proposes a method that uses Business Process Model and Notation (BPMN) to model integration requirements and automatically transform them into an executable integration configuration. Based on the method, a tool was developed to model integration requirements and transform them into integration configurations. In addition, an integration case in a radiology scenario was used to verify the method.
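A minimal sketch of the requirement-to-configuration idea, assuming a hypothetical BPMN fragment and a made-up configuration format (the paper's actual transformation rules and target configuration schema are not given):

```python
import xml.etree.ElementTree as ET

# Hypothetical, minimal BPMN 2.0 fragment for a radiology integration
# flow; task names and ids are invented for illustration.
BPMN = """<definitions xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL">
  <process id="radiology_integration">
    <serviceTask id="t1" name="ReceiveOrder"/>
    <serviceTask id="t2" name="MapToHL7"/>
    <serviceTask id="t3" name="SendToPACS"/>
    <sequenceFlow id="f1" sourceRef="t1" targetRef="t2"/>
    <sequenceFlow id="f2" sourceRef="t2" targetRef="t3"/>
  </process>
</definitions>"""

NS = {"b": "http://www.omg.org/spec/BPMN/20100524/MODEL"}

def bpmn_to_config(xml_text):
    """Walk the sequenceFlows from the task with no incoming flow and
    emit an ordered list of steps (one possible 'executable
    configuration' format, not the paper's)."""
    root = ET.fromstring(xml_text)
    proc = root.find("b:process", NS)
    tasks = {t.get("id"): t.get("name")
             for t in proc.findall("b:serviceTask", NS)}
    flows = {f.get("sourceRef"): f.get("targetRef")
             for f in proc.findall("b:sequenceFlow", NS)}
    targets = set(flows.values())
    current = next(tid for tid in tasks if tid not in targets)  # start node
    steps = []
    while current is not None:
        steps.append({"step": tasks[current]})
        current = flows.get(current)
    return {"process": proc.get("id"), "steps": steps}

config = bpmn_to_config(BPMN)
```

The point is that a structured model, unlike a free-format document, can be traversed mechanically to generate configuration.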

The Integrated Data Collection Analysis (IDCA) program is conducting a proficiency study for Small-Scale Safety and Thermal (SSST) testing of homemade explosives (HMEs). Described here are the methods used for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis during the IDCA program. These methods changed throughout the proficiency test, and the reasons for these changes are documented in this report. The most significant modifications to standard testing methods are: 1) the inclusion of one specified sandpaper in impact testing for all participants, 2) diversified liquid test methods for selected participants, and 3) the inclusion of sealed sample holders for thermal testing by at least one participant. This effort, funded by the Department of Homeland Security (DHS), is putting the issues of safe handling of these materials in perspective with standard military explosives. The study is adding SSST testing results for a broad suite of different HMEs to the literature. Ultimately the study will suggest new guidelines and methods and possibly establish the SSST testing accuracies needed to develop safe handling practices for HMEs. Each participating testing laboratory uses identical test materials and preparation methods wherever possible. The testing performers involved are Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory (LANL), Indian Head Division, Naval Surface Warfare Center (NSWC IHD), Sandia National Laboratories (SNL), and Air Force Research Laboratory (AFRL/RXQL). These tests are conducted as a proficiency study in order to establish some consistency in test protocols, procedures, and experiments and to compare results when these testing variables cannot be made consistent.

The objective of this siting study work is to support DOE in evaluating integrated advanced nuclear plant and ISFSI deployment options in the future. This study looks at several nuclear power plant growth scenarios that consider the locations of existing and planned commercial nuclear power plants integrated with the establishment of consolidated interim spent fuel storage installations (ISFSIs). This research project is aimed at providing methodologies, information, and insights that inform the process for determining and optimizing candidate areas for new advanced nuclear power generation plants and consolidated ISFSIs to meet projected US electric power demands for the future.

decomposition rates was studied in order to evaluate the peat types applicable for use in landfill structures. Only minor (BOD/ThOD < 0.4%) biodegradation was observed with compaction peat samples, and a stable state, in which biodegradation stopped, was reached within a two-month period. The manometric respirometric method was also applied in biodegradation studies that tested the effect of modified soil properties on the biodegradation rates of bio-oils. The modified properties were the nutrient content and the pH of the soil. Fertiliser addition and pH adjustment increased both the BOD/ThOD% values of the model substances and the precision of the measurement. The manometric respirometric method proved to be a suitable method for simulating biodegradation processes in soil and water media. (orig.)

Full Text Available The paper presents methodologies and specific technologies connected to research activities of LAREA (LAboratorio di Rilievo E Architettura / Laboratory of Survey and Architecture) of the University of Roma Tor Vergata, in cooperation with ITABC (Istituto per le Tecnologie Applicate ai Beni Culturali) of CNR. The goal of this case study is to contribute to the 3D digital documentation of Villa Mondragone in Monte Porzio Catone (Rome). In particular, the research is aimed at integrating laser scanning, digital photogrammetry and topographic survey of the Ninfeo, a monumental and scenographic artifact located at one end of the Giardino della Girandola, characterized by an articulated architecture and detailed decorations.

Full Text Available For Enterprise Resource Planning (ERP) systems such as SAP R/3 or IBM SanFrancisco, the tailoring of reference models for customizing the ERP systems to specific organizational contexts is an established approach. In this paper, we present a methodology that uses such reference models as a starting point for a top-down integration of enterprise applications. The re-engineered models of legacy systems are individually linked via cross-mapping specifications to the forward-engineered reference model's specification. The actual linking of reference and legacy models is done with a methodology for connecting (new) business objects with (old) legacy systems.

Computational mechanics is that discipline of applied science and engineering devoted to the study of physical phenomena by means of computational methods based on mathematical modeling and simulation, utilizing digital computers. The discipline combines theoretical and applied mechanics, approximation theory, numerical analysis, and computer science. Computational mechanics has had a major impact on engineering analysis and design. When applied to structural mechanics, the discipline is referred to herein as computational structural mechanics. Complex structures being considered by NASA for the 1990's include composite primary aircraft structures and the space station. These structures will be much more difficult to analyze than today's structures and necessitate a major upgrade in computerized structural analysis technology. NASA has initiated a research activity in structural analysis called Computational Structural Mechanics (CSM). The broad objective of the CSM activity is to develop advanced structural analysis technology that will exploit modern and emerging computers, such as those with vector and/or parallel processing capabilities. Here, the current research directions for the Methods and Application Studies Team of the Langley CSM activity are described.

An express method for the determination of acidity constants of organic acids, based on analysis of the integral transmittance vs. pH dependence, is developed. The integral value is registered as a photocurrent of the photometric device simultaneously with potentiometric titration. The proposed method makes it possible to obtain pKa using only simple and low-cost instrumentation. The optical part of the experimental setup has been simplified by excluding the monochromator. As a result, it takes only 10-15 min to obtain one pKa value, with an absolute error of less than 0.15 pH units. The application limits and reliability of the method have been tested for a series of organic acids of various nature.
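As an illustration of the underlying titration analysis (not the authors' instrument), the following sketch recovers pKa from a synthetic transmittance-vs-pH curve via the Henderson-Hasselbalch relation; the transmittance endpoints, the acid (acetic, pKa = 4.76), and the brute-force grid fit are all assumptions:

```python
import math

def fraction_deprotonated(pH, pKa):
    # Henderson-Hasselbalch: [A-] / ([HA] + [A-]) = 1 / (1 + 10**(pKa - pH))
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

def transmittance(pH, pKa, T_HA=0.20, T_A=0.90):
    # Transmittance interpolates between the acid-form and base-form
    # values (endpoint values are illustrative assumptions)
    f = fraction_deprotonated(pH, pKa)
    return T_HA + (T_A - T_HA) * f

# Synthetic titration curve for an acid with pKa = 4.76
pHs = [x * 0.1 for x in range(20, 80)]
data = [transmittance(pH, 4.76) for pH in pHs]

def fit_pKa(pHs, data):
    # Brute-force least-squares over a pKa grid (2.00 .. 7.99, step 0.01)
    return min(
        (k * 0.01 for k in range(200, 800)),
        key=lambda pKa: sum((transmittance(p, pKa) - d) ** 2
                            for p, d in zip(pHs, data)),
    )

pKa_est = fit_pKa(pHs, data)
```

A real instrument would fit noisy photocurrent data, but the inverse problem has the same shape: the inflection of the transmittance-pH curve sits at the pKa.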

We introduce the reduced basis method (RBM) as an efficient tool for parametrized scattering problems in computational electromagnetics, where field solutions are computed using a standard Boundary Element Method (BEM) for the parametrized electric field integral equation (EFIE). This combination enables an algorithmic cooperation which results in a two-step procedure. The first step consists of a computationally intense assembly of the reduced basis, which needs to be effected only once. In the second step, we compute output functionals of the solution, such as the Radar Cross Section (RCS), independently of the dimension of the discretization space, for many different parameter values in a many-query context at very little cost. Parameters include the wavenumber, the angle of the incident plane wave and its polarization.
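The offline/online structure can be sketched on a toy affinely parametrized linear system, a stand-in for the BEM-discretized EFIE (matrix sizes, the parametrization A(mu) = A0 + mu*A1, and the training set are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
# Hypothetical well-conditioned affine family A(mu) = A0 + mu*A1
A0 = np.eye(n) + 0.02 * rng.standard_normal((n, n))
A1 = 0.02 * rng.standard_normal((n, n))
b = rng.standard_normal(n)

# Offline (expensive, done once): full "truth" solves at training parameters
train = [0.0, 0.5, 1.0]
snapshots = np.column_stack([np.linalg.solve(A0 + mu * A1, b) for mu in train])
V, _ = np.linalg.qr(snapshots)          # orthonormal reduced basis, n x 3

def rb_solve(mu):
    # Online (cheap, many-query): Galerkin projection onto span(V);
    # the reduced system is only 3 x 3, independent of n
    Ar = V.T @ (A0 + mu * A1) @ V
    br = V.T @ b
    return V @ np.linalg.solve(Ar, br)

mu_test = 0.3
err = np.linalg.norm(rb_solve(mu_test) - np.linalg.solve(A0 + mu_test * A1, b))
```

In the real EFIE setting the "snapshots" are BEM solves over wavenumber, incidence angle and polarization, and the online output is an RCS functional rather than the field itself.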

In addition to the traditional lecture format, three other teaching strategies (class discussions, concept maps, and cooperative learning) were incorporated into a freshman level general chemistry course. Student perceptions of their involvement in each of the teaching methods, as well as their perceptions of the utility of each method were used to assess the effectiveness of the integration of the teaching strategies as received by the students. Results suggest that each strategy serves a unique purpose for the students and increased student involvement in the course. These results indicate that the multiple teaching strategies were well received by the students and that all teaching strategies are necessary for students to get the most out of the course.

In this paper, we develop a new boundary integral equation formulation that describes the coupled electro- and hydro-dynamics of a vesicle suspended in a viscous fluid and subjected to external flow and electric fields. The dynamics of the vesicle are characterized by a competition between the elastic, electric and viscous forces on its membrane. The classical Taylor-Melcher leaky-dielectric model is employed for the electric response of the vesicle and the Helfrich energy model combined with local inextensibility is employed for its elastic response. The coupled governing equations for the vesicle position and its transmembrane electric potential are solved using a numerical method that is spectrally accurate in space and first-order in time. The method uses a semi-implicit time-stepping scheme to overcome the numerical stiffness associated with the governing equations.
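The role of the semi-implicit step can be illustrated on a scalar stiff model problem, a stand-in for the stiff membrane-force terms (the actual vesicle equations are far richer; the relaxation rate and forcing below are invented):

```python
import math

# Stiff relaxation toward a slowly varying target: y' = -lam*(y - cos(t)).
lam, h, T = 500.0, 0.01, 1.0   # explicit Euler is stable only for h < 2/lam

def semi_implicit(y0):
    # Treat the stiff linear term implicitly, the rest explicitly:
    # (y_{n+1} - y_n)/h = -lam*y_{n+1} + lam*cos(t_n)
    y, t = y0, 0.0
    while t < T - 1e-12:
        y = (y + h * lam * math.cos(t)) / (1.0 + h * lam)
        t += h
    return y

def explicit(y0):
    # Fully explicit Euler on the same problem, same step size
    y, t = y0, 0.0
    while t < T - 1e-12:
        y = y + h * (-lam * (y - math.cos(t)))
        t += h
    return y

y_si = semi_implicit(2.0)   # stays near the quasi-steady solution cos(t)
y_ex = explicit(2.0)        # blows up: amplification factor |1 - h*lam| = 4
```

The same trade-off motivates the paper's scheme: treating the stiff elastic terms implicitly lifts the severe step-size restriction without the cost of a fully implicit solve.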

Full Text Available Speech feature extraction has been a key focus in robust speech recognition research. In this work, we discuss data-driven linear feature transformations applied to feature vectors in the logarithmic mel-frequency filter bank domain. Transformations are based on principal component analysis (PCA), independent component analysis (ICA), and linear discriminant analysis (LDA). Furthermore, this paper introduces a new feature extraction technique that collects the correlation information among phoneme subspaces and reconstructs the feature space for representing phonemic information efficiently. The proposed speech feature vector is generated by projecting an observed vector onto an integrated phoneme subspace (IPS) based on PCA or ICA. The performance of the new feature was evaluated for isolated word speech recognition. The proposed method provided higher recognition accuracy than conventional methods in clean and reverberant environments.
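A minimal sketch of the PCA projection step on synthetic "log-mel" frames (the data, the 24-band dimension, and the 4-component subspace are invented; the ICA and LDA variants follow the same project-onto-subspace pattern):

```python
import numpy as np

rng = np.random.default_rng(1)
# Fake log-mel filter-bank features: 500 frames x 24 bands, with the
# variance concentrated in a 4-dimensional subspace plus small noise
basis = rng.standard_normal((24, 4))
frames = (rng.standard_normal((500, 4)) @ basis.T
          + 0.05 * rng.standard_normal((500, 24)))

def pca_transform(X, k):
    # Center, eigendecompose the sample covariance, keep the top-k
    # principal directions, and project the centered data onto them
    mu = X.mean(axis=0)
    Xc = X - mu
    cov = Xc.T @ Xc / (len(X) - 1)
    w, V = np.linalg.eigh(cov)        # eigenvalues in ascending order
    comps = V[:, ::-1][:, :k]         # top-k principal directions
    return Xc @ comps, w[::-1]

Y, eigvals = pca_transform(frames, 4)
explained = eigvals[:4].sum() / eigvals.sum()
```

In the paper's IPS construction the subspaces are estimated per phoneme and then integrated, but each projection is of this form.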

This paper presents a parallel algorithm for frequency-domain acoustic modeling by the method of integral equations (IE). The algorithm is applied to seismic simulation. The IE method reduces the size of the problem but leads to a dense system matrix. A tolerable memory consumption and numerical complexity were achieved by applying an iterative solver, accompanied by an effective matrix-vector multiplication operation based on the fast Fourier transform (FFT). We demonstrate that the IE system matrix is better conditioned than that of the finite-difference (FD) method, and discuss its relation to a specially preconditioned FD matrix. We considered several methods of matrix-vector multiplication for the free-space and layered host models. The developed algorithm and computer code were benchmarked against the FD time-domain solution. It was demonstrated that the method can accurately calculate the seismic field for models with sharp material boundaries and a point source and receiver located close to the free surface. We used OpenMP to speed up the matrix-vector multiplication, while MPI was used to speed up the solution of the system equations and to parallelize across multiple sources. Practical examples and efficiency tests are also presented.
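The FFT-accelerated matrix-vector product exploits the translation-invariant structure of the kernel. A 1-D Toeplitz sketch with an assumed exponential kernel shows the circulant-embedding idea (the paper's operators are multidimensional, but the mechanism is the same):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
# Assumed translation-invariant kernel sampled at offsets -(n-1) .. n-1
kernel = np.exp(-0.3 * np.abs(np.arange(-n + 1, n)))
x = rng.standard_normal(n)

def toeplitz_matvec_fft(kernel, x):
    # Embed the n x n Toeplitz operator in a circulant one of length
    # 2n-1; then one FFT/inverse-FFT pair does the matvec in O(n log n)
    n = len(x)
    m = 2 * n - 1
    c = np.zeros(m)
    c[:n] = kernel[n - 1:]      # first column: offsets 0 .. n-1
    c[n:] = kernel[:n - 1]      # wrap-around part: offsets -(n-1) .. -1
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x, m)).real
    return y[:n]

# Dense reference: T[i, j] = kernel[n - 1 + i - j], cost O(n^2)
T = np.array([[kernel[n - 1 + i - j] for j in range(n)] for i in range(n)])
y_fft = toeplitz_matvec_fft(kernel, x)
y_dense = T @ x
```

This is what keeps a dense IE matrix tractable inside an iterative solver: the matrix is never formed, only applied.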

Reservoir performance and characterization are vital parameters during the development phase of a project. Infill drilling of wells on a uniform spacing, without regard to characterization, does not optimize development because it fails to account for the complex reservoir heterogeneities present in many low-permeability reservoirs, especially carbonate reservoirs. These reservoirs are typically characterized by: (1) large, discontinuous pay intervals; (2) vertical and lateral changes in reservoir properties; (3) low reservoir energy; (4) high residual oil saturation; and (5) low recovery efficiency. Operational problems encountered in these types of reservoirs include: (1) poor or inadequate completions and stimulations; (2) early water breakthrough; (3) poor reservoir sweep efficiency in contacting oil throughout the reservoir as well as in the near-well regions; (4) channeling of injected fluids due to preferential fracturing caused by excessive injection rates; and (5) limited data availability and poor data quality. Infill drilling operations need only target areas of the reservoir which will be economically successful. If the most productive areas of a reservoir can be accurately identified by combining the results of geological, petrophysical, reservoir performance, and pressure transient analyses, then this "integrated" approach can be used to optimize reservoir performance during secondary and tertiary recovery operations without resorting to "blanket" infill drilling methods. New and emerging technologies such as geostatistical modeling, rock typing, and rigorous decline type curve analysis can be used to quantify reservoir quality and the degree of interwell communication. These results can then be used to develop a 3-D simulation model for prediction of infill locations. The application of reservoir surveillance techniques to identify additional reservoir "pay" zones

Path Integral is a method to transform a function from its initial condition to its final condition by multiplying the initial condition with a transition probability function, known as the propagator. Early in its development, studies focused on applying this method to problems in Quantum Mechanics. Nevertheless, the Path Integral can also be applied to other subjects with some modifications to the propagator function. In this study, we investigate the application of the Path Integral method to financial derivatives, namely stock options. The Black-Scholes Model (Nobel 1997) was the starting anchor in option pricing studies. Though the model does not predict option prices perfectly, especially because of its sensitivity to major market changes, the Black-Scholes Model is still a legitimate equation for pricing an option. Its derivation is demanding because it is a stochastic partial differential equation. The Black-Scholes equation shares a principle with the Path Integral: in Black-Scholes, the share's initial price is transformed into its final price. The Black-Scholes propagator function is then derived by introducing a modified Lagrangian based on the Black-Scholes equation. Furthermore, we compare the path integral analytical solution with a Monte Carlo numerical solution to examine the agreement between these two methods. (paper)
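A sketch of the comparison the study describes: the standard closed-form Black-Scholes price against a Monte Carlo estimate under the risk-neutral lognormal propagator (parameter values are illustrative; this is not the paper's derivation of the propagator itself):

```python
import math, random

def bs_call(S0, K, r, sigma, T):
    # Closed-form Black-Scholes price of a European call
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S0 * N(d1) - K * math.exp(-r * T) * N(d2)

def mc_call(S0, K, r, sigma, T, n=200_000, seed=42):
    # Monte Carlo under the risk-neutral lognormal propagator:
    # S_T = S0 * exp((r - sigma^2/2)*T + sigma*sqrt(T)*Z), Z ~ N(0, 1)
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma**2) * T
    vol = sigma * math.sqrt(T)
    payoff = 0.0
    for _ in range(n):
        ST = S0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        payoff += max(ST - K, 0.0)
    return math.exp(-r * T) * payoff / n

p_exact = bs_call(100, 100, 0.05, 0.2, 1.0)   # about 10.45
p_mc = mc_call(100, 100, 0.05, 0.2, 1.0)
```

The Monte Carlo estimate converges to the closed-form price at the usual 1/sqrt(n) rate, which is exactly the cross-check between analytical and numerical solutions that the abstract mentions.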

Advances in Product Family and Product Platform Design: Methods & Applications highlights recent advances that have been made to support product family and product platform design and successful applications in industry. This book provides not only motivation for product family and product platform design—the “why” and “when” of platforming—but also methods and tools to support the design and development of families of products based on shared platforms—the “what”, “how”, and “where” of platforming. It begins with an overview of recent product family design research to introduce readers to the breadth of the topic and progresses to more detailed topics and design theory to help designers, engineers, and project managers plan, architect, and implement platform-based product development strategies in their companies. This book also: Presents state-of-the-art methods and tools for product family and product platform design Adopts an integrated, systems view on product family and pro...

Complex research questions often cannot be addressed adequately with a single data set. One sensible alternative to the high cost and effort associated with the creation of large new data sets is to combine existing data sets containing variables related to the constructs of interest. The goal of the present research was to develop a flexible, broadly applicable approach to the integration of disparate data sets that is based on nonparametric multiple imputation and the collection of data from a convenient, de novo calibration sample. We demonstrate proof of concept for the approach by integrating three existing data sets containing items related to the extent of problematic alcohol use and associations with deviant peers. We discuss both necessary conditions for the approach to work well and potential strengths and weaknesses of the method compared to other data set integration approaches. PMID:26257437
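One simple flavor of the calibration-sample idea can be sketched with a nearest-neighbor "hot deck" imputation on synthetic data (the study's actual nonparametric multiple-imputation procedure is more elaborate; the variables, sample sizes, and linear relation below are invented):

```python
import random, statistics

random.seed(3)
# Hypothetical setup: data set A measures an alcohol-use score x only,
# while a small de novo calibration sample measures both x and a
# deviant-peer score y (here y = 2x + noise, by assumption)
calib = [(x, 2.0 * x + random.gauss(0, 0.5))
         for x in [random.uniform(0, 5) for _ in range(200)]]
A_x = [random.uniform(0, 5) for _ in range(50)]

def hot_deck_impute(x, calib, k=5, m=10):
    # Nonparametric hot deck: draw m imputations from the y-values of
    # the k nearest calibration donors (one simple NN-imputation flavor)
    donors = sorted(calib, key=lambda c: abs(c[0] - x))[:k]
    return [random.choice(donors)[1] for _ in range(m)]

# Multiple imputations per record; here summarized by their mean
imputed = [statistics.mean(hot_deck_impute(x, calib)) for x in A_x]
```

Keeping all m imputations, rather than just their mean, is what lets multiple imputation propagate the imputation uncertainty into downstream analyses.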

The objective of many studies in this area has been a column-sequencing algorithm that enables designers and researchers alike to generate a wide range of sequences in a broad search space, is as mathematical and automated as possible for programming purposes, and offers good generality. In the present work an algorithm previously developed by the authors, called the matrix method, has been developed much further. The new version of the algorithm includes thermally coupled, thermodynamically equivalent, intensified, simultaneous heat- and mass-integrated, and divided-wall column sequences, which are of broad application and offer substantial savings in capital investment, operating costs and energy usage in industrial applications. To demonstrate the much wider searchable space now accessible, a three-component separation has been thoroughly examined as a case study, always resulting in an integrated sequence being proposed as the optimum.

Introduction: Mixed-methods methodology, as the name suggests, refers to the mixing of elements of both qualitative and quantitative methodologies in a single study. In the past decade, mixed-methods methodology has gained popularity among healthcare researchers as it promises to bring together the strengths of both qualitative and quantitative approaches. Methodology: A number of mixed-methods designs are available in the literature, and the four most commonly used designs in healthcare research are: the convergent parallel design, the embedded design, the exploratory design, and the explanatory design. Each has its own unique advantages, challenges and procedures, and selection of a particular design should be guided by the research question. Guidance on designing, conducting and reporting mixed-methods research is available in the literature, and it is advisable to adhere to this to ensure methodological rigour. When to use: It is best suited when the research questions require: triangulating findings from different methodologies to explain a single phenomenon; clarifying the results of one method using another method; informing the design of one method based on the findings of another method; developing a scale/questionnaire; or answering different research questions within a single study. Two case studies have been presented to illustrate possible applications of mixed-methods methodology. Limitations: Possessing the necessary knowledge and skills to undertake qualitative and quantitative data collection, analysis, interpretation and integration remains the biggest challenge for researchers conducting mixed-methods studies. Sequential study designs are often time consuming, being in two (or more) phases, whereas concurrent study designs may require more than one data collector to collect both qualitative and quantitative data at the same time.

This article discusses the use of application-specific integrated circuits (ASICs) in nuclear plant safety systems. ASICs have certain advantages over software-based systems because they can be simple enough to be thoroughly tested, and they can be tailored to replace existing equipment. An architecture to replace a pressurized water reactor pressure channel trip is presented. Methods of implementing digital algorithms are also discussed.

Most of PETROBRAS' offshore oil and gas production is conveyed through Flexible Pipes (FPs) used for gathering, exporting and importing functions. PETROBRAS is the greatest user of FPs worldwide, and because an FP is a complex composite structure with many steel and polymeric layers and end fittings, it is subject to a huge number of possible failure mechanisms, many more than those expected for steel pipes. The use of FPs demands a special approach over all life-cycle phases, from basic engineering up to operation/reuse/decommissioning, by evaluating application feasibility together with potential failures. This paper accounts some of PETROBRAS' experience with FPs, mainly the current approach to their integrity and the measures planned in order to assure production and prevent accidents, based on the most relevant failure mechanisms. The preventive actions include a review of failures and their causes and, consequently, improvements in specifications, FP design verification, prototype qualification, inspection and monitoring of key integrity parameters during installation and operation, as well as maintenance. An FP Company Integrity Directives and Database will allow continuous improvement of field system reliability through periodic assessment of performance and feedback to activities over the whole FP life cycle. (author)

Micro-resonators (MRs) have become a key element for integrated optical sensors due to their integration capability and their easy fabrication with low-cost polymer materials. Nowadays, there is a growing need for MRs as highly sensitive and selective sensing functions, especially in the areas of food and health. The context of this work is to implement and study integrated micro-ring resonators devoted to sensing applications. They are fabricated by processing SU8 polymer as the core layer and PMATRIFE polymer as the lower cladding layer. The refractive index of the polymers and of the waveguide structure as a function of the wavelength is presented. Using these results, a theoretical study of the coupling between ring and straight waveguides has been undertaken in order to define the MR design. Sub-micronic gaps of 0.5 μm to 1 μm between the ring and the straight waveguides have been successfully achieved with UV (i-line) photolithography. Different superstrates such as air, water, and aqueous solutions with glucose at different concentrations have been studied. First results show a good normalized transmission contrast of 0.98, a resonator quality factor around 1.5 × 10⁴ corresponding to a coupling ratio of 14.7%, and ring propagation losses around 5 dB/cm. Preliminary sensing experiments have been performed for different concentrations of glucose; a sensitivity of 115 ± 8 nm/RIU at 1550 nm has been obtained with this couple of polymers.

The benefits derived from application of the 8-cm mercury electron bombardment ion thruster were assessed. Two specific spacecraft missions were studied. A thruster was tested to provide additional needed information on its efflux characteristics and interactive effects. A Users Manual was then prepared describing how to integrate the thruster for auxiliary propulsion on geosynchronous satellites. By incorporating ion engines on an advanced communications mission, the weight available for added payload increases by about 82 kg (181 lb) for a 1000 kg (2200 lb) satellite which otherwise uses electrothermal hydrazine. Ion engines can be integrated into a high-performance propulsion module that is compatible with the multimission modular spacecraft and can be used for both geosynchronous and low earth orbit applications. The low disturbance torques introduced by the ion engines permit accurate spacecraft pointing with the payload in operation during thrusting periods. The feasibility of using the thruster's neutralizer assembly for neutralization of differentially charged spacecraft surfaces at geosynchronous altitude was demonstrated during the testing program.

Facility planning is concerned with the design, layout, and accommodation of people, machines and activities of a system. Most researchers investigate the production area layout and the related facilities; however, few investigate the relationship between the production space and the service departments. The aim of this research is to integrate different approaches in order to evaluate, analyse and select the best facilities planning method able to explain the relationship between the production area and other supporting departments and its effect on human effort. To achieve this objective, two approaches have been integrated: Apple's layout procedure, as one of the effective tools in planning factories, and the ELECTRE method, as one of the Multi-Criteria Decision Making (MCDM) methods, to minimize the risk of poor facilities planning. Dalia industries was selected as a case study to implement this integration. The factory was divided into two main areas: the whole facility (layout A) and the manufacturing area (layout B). This article is concerned with the manufacturing area layout (layout B). After analysing the gathered data, the manufacturing area was divided into 10 activities. The alternatives were compared on five factors: inter-department satisfaction level, total distance travelled by workers, total distance travelled by the product, total travel time for workers, and total travel time for the product. Three layout alternatives were developed in addition to the original layout. Apple's layout procedure was used to study and evaluate the different alternative layouts by calculating scores for each of the factors. After obtaining the scores, the ELECTRE method was used to compare the proposed alternatives with each other and with

Full Text Available The decisions taken in rehabilitation planning for urban water networks will have a long-lasting impact on the functionality and quality of future services provided by urban infrastructure. These decisions can be assisted by different approaches, ranging from linear depreciation for estimating the economic value of the network, over using a deterioration model to assess the probability of failure or the technical service life, to sophisticated multi-criteria decision support systems. Subsequently, the aim of this paper is to compare five available multi-criteria decision-making (MCDM) methods (ELECTRE, AHP, WSM, TOPSIS, and PROMETHEE) for application in an integrated rehabilitation management scheme for a real-world case study, and to analyze them with respect to their suitability for integrated asset management of water systems. The results of the different methods are not equal. This occurs because the chosen score scales, weights, and the resulting distributions of the scores within the criteria do not have the same impact on all the methods. Independently of the method used, the decision maker must be familiar with its strengths as well as its weaknesses. Therefore, in some cases, it would be rational to use one of the simplest methods. However, to check for consistency and increase the reliability of the results, the application of several methods is encouraged.
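As one concrete instance of the compared methods, a minimal TOPSIS implementation on hypothetical alternative scores and weights (the case study's actual criteria and data are not given; all criteria below are assumed benefit-type):

```python
import math

# Hypothetical scores of three rehabilitation alternatives on four
# benefit-type criteria, with assumed decision-maker weights
matrix = [
    [7.0, 9.0, 9.0, 8.0],   # alternative A1
    [8.0, 7.0, 8.0, 7.0],   # alternative A2
    [9.0, 6.0, 8.0, 9.0],   # alternative A3
]
weights = [0.3, 0.3, 0.2, 0.2]

def topsis(matrix, weights):
    # 1) vector-normalize each criterion column, 2) apply the weights,
    # 3) score by relative closeness to the ideal / anti-ideal points
    cols = list(zip(*matrix))
    norms = [math.sqrt(sum(v * v for v in c)) for c in cols]
    V = [[w * v / n for v, w, n in zip(row, weights, norms)]
         for row in matrix]
    ideal = [max(c) for c in zip(*V)]
    anti = [min(c) for c in zip(*V)]
    scores = []
    for row in V:
        d_pos = math.dist(row, ideal)   # distance to ideal point
        d_neg = math.dist(row, anti)    # distance to anti-ideal point
        scores.append(d_neg / (d_pos + d_neg))
    return scores

closeness = topsis(matrix, weights)
best = max(range(len(closeness)), key=closeness.__getitem__)
```

The paper's observation that the methods can disagree follows directly from steps 1 and 2: a different normalization or weighting scheme (as in WSM, AHP, ELECTRE or PROMETHEE) can reorder the same alternatives.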

In hepatobiliary scintigraphy, kinetic model analysis, which provides kinetic parameters such as hepatic extraction and excretion rates, has been used for quantitative evaluation of liver function. In this analysis, unknown model parameters are usually determined using the nonlinear least squares regression method (NLS method), in which iterative calculation and initial estimates for the unknown parameters are required. As a simple alternative to the NLS method, the direct integral linear least squares regression method (DILS method), which can determine model parameters by a simple calculation without an initial estimate, is proposed, and its applicability to the analysis of hepatobiliary scintigraphy is tested. To see whether the DILS method could determine model parameters as well as the NLS method, and to determine an appropriate weight for the DILS method, simulated theoretical data based on prefixed parameters were fitted to a 1-compartment model using both the DILS method with various weightings and the NLS method. The parameter values obtained were then compared with the prefixed values used for data generation. The effect of various weights on the error of the parameter estimates was examined, and the inverse of time was found to be the best weight for minimizing the error. When using this weight, the DILS method gave parameter values close to those obtained by the NLS method, and both sets of values were very close to the prefixed values. With appropriate weighting, the DILS method provides reliable parameter estimates that are relatively insensitive to data noise. In conclusion, the DILS method can be used as a simple alternative to the NLS method, providing reliable parameter estimates. (author)
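The linearity that makes DILS iteration-free can be sketched for a 1-compartment washout model: integrating dy/dt = -k*y gives y(t) = A - k*I(t), where I(t) is the running integral of y, and this is linear in the unknowns (A, k). The sketch below uses unweighted least squares and invented parameter values; the paper's method additionally applies a 1/t weighting:

```python
import math

# Simulated 1-compartment washout y(t) = A*exp(-k*t), with assumed
# "prefixed" parameters A = 100, k = 0.3
A_true, k_true = 100.0, 0.3
ts = [0.5 * i for i in range(21)]                  # 0 .. 10 time units
ys = [A_true * math.exp(-k_true * t) for t in ts]

def dils_fit(ts, ys):
    # Integrated model equation y(t) = A - k*I(t) is LINEAR in (A, k):
    # ordinary least squares, no initial guess or iteration needed
    I, integrals = 0.0, [0.0]
    for i in range(1, len(ts)):
        I += 0.5 * (ys[i] + ys[i - 1]) * (ts[i] - ts[i - 1])  # trapezoid
        integrals.append(I)
    # Normal equations for min over (A, k) of sum (A - k*I_i - y_i)^2
    n = len(ts)
    S_I = sum(integrals)
    S_II = sum(v * v for v in integrals)
    S_y = sum(ys)
    S_Iy = sum(v * y for v, y in zip(integrals, ys))
    det = n * S_II - S_I * S_I
    A_est = (S_II * S_y - S_I * S_Iy) / det
    k_est = (S_I * S_y - n * S_Iy) / det
    return A_est, k_est

A_est, k_est = dils_fit(ts, ys)
```

On noise-free data the only error comes from the trapezoidal quadrature, so the estimates land very close to the prefixed values, mirroring the simulation comparison in the abstract.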

The nodal methodology is based on retaining a higher degree of analyticity in the process of deriving the discrete-variable equations than conventional numerical methods. As a result, extensive numerical testing of nodal methods developed for a wide variety of partial differential equations, and comparison of the results to conventional methods, has established the superior accuracy of nodal methods on coarse meshes. Moreover, these tests have shown that nodal methods are more computationally efficient than finite-difference and finite-element methods, in the sense that they require shorter CPU times to achieve comparable accuracy in the solutions. However, nodal formalisms and the final discrete-variable equations they produce are, in general, more complicated than their conventional counterparts. This, together with anticipated difficulties in applying the transverse-averaging procedure in curvilinear coordinates, has limited the applications of nodal methods, so far, to Cartesian geometry, and with additional approximations to hexagonal geometry. In this paper the authors report recent progress in deriving and numerically implementing a nodal integral method (NIM) for solving the neutron diffusion equation in cylindrical r-z geometry. Also presented are comparisons of numerical solutions to two test problems with those obtained by the Exterminator-2 code, which indicate the superior accuracy of the nodal integral method solutions on much coarser meshes.

A Space-Point Energy-Group integral transport theory method (SPEG) is developed and applied to local and global calculations of the Yugoslav RA reactor. Compared to other integral transport theory methods, SPEG is distinguished by (1) the arbitrary order of the polynomial, (2) the effective determination of integral parameters through point flux values, (3) the use of the neutron balance condition as an a posteriori measure of the accuracy of the calculation, and (4) the elimination of the subdivision into zones in realistic cases. In addition, different direct (collision probability) and indirect (Monte Carlo) approaches to integral transport theory have been investigated and some effective acceleration procedures introduced. The study was performed on three test problems in plane and cylindrical geometry, as well as on the nine-region cell of the RA reactor. In particular, the limitations of the integral transport theory, including its non-applicability to optically large material regions and to global reactor calculations, were examined. The proposed strictly multipoint approach, avoiding the subdivision into zones and groups, seems to provide a good starting point to overcome these limitations of the integral transport theory. (author)

Full Text Available Justin T Seil, Thomas J Webster; Laboratory for Nanomedicine Research, School of Engineering, Brown University, Providence, RI, USA. Abstract: The need for novel antibiotics comes from the relatively high incidence of bacterial infection and the growing resistance of bacteria to conventional antibiotics. Consequently, new methods for reducing bacterial activity (and associated infections) are badly needed. Nanotechnology, the use of materials with dimensions on the atomic or molecular scale, has become increasingly utilized for medical applications and is of great interest as an approach to killing or reducing the activity of numerous microorganisms. While some natural antibacterial materials, such as zinc and silver, possess greater antibacterial properties as particle size is reduced into the nanometer regime (due to the increased surface-to-volume ratio of a given mass of particles), the physical structure of a nanoparticle itself, and the way in which it interacts with and penetrates bacteria, also appears to provide unique bactericidal mechanisms. A variety of techniques to evaluate bacterial viability, each with unique advantages and disadvantages, has been established and must be understood in order to determine the effectiveness of nanoparticles (diameter ≤ 100 nm) as antimicrobial agents. In addition to addressing those techniques, a review of select literature and a summary of bacteriostatic and bactericidal mechanisms are covered in this manuscript. Keywords: nanomaterial, nanoparticle, nanotechnology, bacteria, antibacterial, biofilm

A research program for integrating artificial intelligence (AI) techniques with tools and methods used for aircraft flight control system design, development, and implementation is discussed. The application of the AI methods for the development and implementation of the logic software which operates with the control mode panel (CMP) of an aircraft is presented. The CMP is the pilot control panel for the automatic flight control system of a commercial-type research aircraft of Langley Research Center's Advanced Transport Operating Systems (ATOPS) program. A mouse-driven color-display emulation of the CMP, which was developed with AI methods and used to test the AI software logic implementation, is discussed. The operation of the CMP was enhanced with the addition of a display which was quickly developed with AI methods. The display advises the pilot of conditions not satisfied when a mode does not arm or engage. The implementation of the CMP software logic has shown that the time required to develop, implement, and modify software systems can be significantly reduced with the use of the AI methods.

The lack of established standards to describe and annotate biological assays and screening outcomes in the domain of drug and chemical probe discovery is a severe limitation on utilizing public and proprietary drug screening data to their maximum potential. We have created the BioAssay Ontology (BAO) project (http://bioassayontology.org) to develop the common reference metadata terms and definitions required for describing relevant information about low- and high-throughput drug and probe screening assays and results. The main objectives of BAO are to enable effective integration, aggregation, retrieval, and analysis of drug screening data. Since we first released BAO on the BioPortal in 2010, we have considerably expanded and enhanced it, and we have applied the ontology in several internal and external collaborative projects, for example the BioAssay Research Database (BARD). We describe the evolution of BAO, with a design that enables modeling of complex assays, including profile and panel assays such as those in the Library of Integrated Network-based Cellular Signatures (LINCS). One of the critical questions in evolving BAO is how to efficiently reuse and share specific parts of our ontologies among various research projects without violating the integrity of the ontology and without creating redundancies. This paper provides a comprehensive answer to this question with a description of a methodology for ontology modularization using a layered architecture. Our modularization approach defines several distinct BAO components and separates internal from external modules and domain-level from structural components. This approach facilitates the generation/extraction of derived ontologies (or perspectives) that can suit particular use cases or software applications. We describe the evolution of BAO related to its formal structures, engineering approaches, and content to enable modeling of complex assays and integration with other ontologies and

This book is a pedagogical presentation of the application of spectral and pseudospectral methods to kinetic theory and quantum mechanics. There are additional applications to astrophysics, engineering, biology and many other fields. The main objective of this book is to provide the basic concepts that enable the use of spectral and pseudospectral methods to solve problems in diverse fields of interest and for a wide audience. While spectral methods are generally based on Fourier series or Chebyshev polynomials, non-classical polynomials and associated quadratures are used for many of the applications presented in the book. Fourier series methods are summarized with a discussion of the resolution of the Gibbs phenomenon. Classical and non-classical quadratures are used for the evaluation of integrals in reaction dynamics including nuclear fusion, radial integrals in density functional theory, in elastic scattering theory and other applications. The subject matter includes the calculation of transport coefficient...
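As a small illustration of the classical-quadrature side of the subject, the sketch below builds Gauss-Legendre nodes and weights from scratch (Newton iterations on the Legendre three-term recurrence) and integrates e^x over [-1, 1]. This is a generic textbook construction, not code from the book:

```python
import math

def leg_gauss(n):
    """Gauss-Legendre nodes (roots of P_n) and weights on [-1, 1].
    Weights: w_i = 2 / ((1 - x_i^2) * P_n'(x_i)^2)."""
    nodes, weights = [], []
    for i in range(1, n + 1):
        x = math.cos(math.pi * (i - 0.25) / (n + 0.5))   # Chebyshev-like guess
        for _ in range(100):
            p0, p1 = 1.0, x                    # P_0, P_1
            for k in range(2, n + 1):          # three-term recurrence for P_k
                p0, p1 = p1, ((2 * k - 1) * x * p1 - (k - 1) * p0) / k
            dp = n * (x * p1 - p0) / (x * x - 1)   # P_n'(x)
            dx = p1 / dp
            x -= dx                            # Newton step toward the root
            if abs(dx) < 1e-15:
                break
        nodes.append(x)
        weights.append(2.0 / ((1 - x * x) * dp * dp))
    return nodes, weights

x, w = leg_gauss(5)
approx = sum(wi * math.exp(xi) for xi, wi in zip(x, w))
print(approx)  # close to e - 1/e ≈ 2.3504
```

Five points already integrate e^x to roughly nine digits, which is the coarse-grid efficiency that makes quadrature-based spectral methods attractive.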

Full Text Available This paper presents a new 3D bottom-up packaging technology for integrating a chip, an induction coil, and interconnections for flexible wireless biomedical applications. Parylene was used as a flexible substrate for the bottom-up embedding of the chip, insulation layer, interconnections, and inductors to form a flexible wireless biomedical microsystem. The system can be implanted on or inside the human body. A 50-μm gold foil deposited through laser micromachining using a picosecond laser was used as an inductor to yield a higher quality factor than that yielded by thickness-increasing methods such as the fold-and-bond method or thick-metal electroplating at the operation frequency of 1 MHz. For system integration, parylene was used as a flexible substrate, and the contact pads and connections between the coil and chip were formed by gold deposition. The advantage of the proposed process is that it integrates the chip and coil vertically into a single biocompatible system, reducing the required area. The proposed system applies 3D integrated circuit packaging concepts to integrate the chip and coil. The results validated the feasibility of this technology.

KEPCO E and C participated in the NAPS (Nuclear Application Programs) development project for the BNPP (Barakah Nuclear Power Plant) simulator. The 3KEY MASTER™ platform, comprehensive simulation software developed by WSC (Western Services Corporation) for the development and control of simulation software, was adopted for this project. The NAPS, based on the actual BNPP project, was modified in order to meet specific requirements for nuclear power plant simulators. Considerations regarding software design for the BNPP simulator and interfaces between the 3KM platform and application programs are discussed. Repeatability is one of the functional requirements for nuclear power plant simulators. In order to migrate software from actual plants to simulators, software functions for storing and retrieving plant conditions and program variables should be implemented. In addition, software structures need to be redesigned to meet the repeatability requirement, and source codes developed for actual plants have to be optimized to reflect the simulator's characteristics as well. Synchronization is an important consideration when integrating external application programs into the 3KM simulator.

Full Text Available Background and aim: As a result of New Public Management, a number of industrial models of quality management have been implemented in health care, mainly in hospitals. At the same time, the concept of integrated care has been developed within other parts of the health sector. The aim of the article is to discuss the relevance of integrated care for hospitals. Theory and methods: The discussion is based on application of a conceptual framework outlining a number of organizational models of integrated care. These models are illustrated in a case study of a Danish university hospital implementing a new organization for improving the patient flows of the hospital. The study of the reorganization is based mainly on qualitative data from individual and focus group interviews. Results: The new organization of the university hospital can be regarded as a matrix structure combining a vertical integration of clinical departments with a horizontal integration of patient flows. This structure has elements of both interprofessional and interorganizational integration. A strong focus on teamwork, meetings and information exchange is combined with elements of case management and co-location. Conclusions: It seems that integrated care can be a relevant concept for a hospital. Although the organizational models may challenge established professional boundaries and financial control systems, this concept can be a more promising way to improve the quality of care than the industrial models that have been imported into health care. This application of the concept may also contribute to widen the field of integrated care.

Decision makers throughout the world are introducing risk and market forces in the electric power industry to lower costs and improve services. Incentive-based regulation (IBR), which replaces cost-of-service ratemaking with an approach that divorces costs from revenues, exposes the utility to the risk of profits or losses depending on its performance. Regulators are also allowing for competition within the industry, most notably in the wholesale market and possibly in the retail market. Two financial approaches that incorporate risk in resource planning are evaluated: risk-adjusted discount rates (RADR) and options theory (OT). These two complementary approaches are an improvement over the standard present value of revenue requirements (PVRR). However, each method has some important limitations. By correctly using RADR and OT and understanding their limitations, decision makers can improve their ability to value risk properly in power plant projects and integrated resource plans. (Author)
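A minimal sketch of the RADR idea, with invented numbers: riskier cash flow streams are discounted at a higher rate than near-certain ones, which can change a project's apparent value relative to a single-rate PVRR calculation:

```python
def present_value(cashflows, rate):
    """Discount a list of yearly cashflows (year 1, 2, ...) to today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1))

# Hypothetical 10-year project: uncertain revenues, near-certain costs.
revenues = [120.0] * 10          # risky stream  -> higher rate, 12%
costs = [80.0] * 10              # stable stream -> lower rate, 6%
npv = present_value(revenues, 0.12) - present_value(costs, 0.06)
print(round(npv, 2))
```

Discounting both streams at a single blended rate would overstate the value of the risky revenues relative to the nearly fixed costs, which is the bias RADR is meant to correct.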

This paper presents an integrated method for designing airfoil families of large wind turbine blades. For a given rotor diameter and tip speed ratio, the optimal airfoils are designed based on the local speed ratios. To achieve high power performance at low cost, the airfoils are designed with the objectives of high Cp and small chord length. When the airfoils are obtained, the optimum flow angle and rotor solidity are calculated, which forms the basic input to the blade design. The new airfoils are designed based on a previous in-house airfoil family which was optimized at a Reynolds number of 3 million. A novel shape perturbation function is introduced to optimize the geometry based on the existing airfoils, which simplifies the design procedure. The viscous/inviscid interactive code XFOIL is used as the aerodynamic tool for airfoil optimization, where the Reynolds number is set at 16 million with a free...
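The abstract does not give the shape perturbation function; as a hypothetical stand-in, the sketch below uses Hicks-Henne bump functions, a common choice for perturbing an existing airfoil surface with a small number of design variables:

```python
import math

def hicks_henne(x, amplitudes, locations, width=3.0):
    """Sum of smooth bumps added to an existing surface y(x), x in [0, 1].
    Each bump has amplitude a and peaks at chordwise location p."""
    dy = 0.0
    for a, p in zip(amplitudes, locations):
        m = math.log(0.5) / math.log(p)          # makes the bump peak at x = p
        dy += a * math.sin(math.pi * x ** m) ** width
    return dy

# perturb a baseline with two bumps peaking at 30% and 70% chord
xs = [i / 100 for i in range(101)]
ys = [hicks_henne(x, [0.01, -0.005], [0.3, 0.7]) for x in xs]
print(max(ys))   # the +0.01 bump dominates near x = 0.3
```

The perturbation vanishes at both the leading and trailing edge (x = 0 and x = 1), so the optimizer only reshapes the interior of the existing airfoil, which is what keeps such a parameterization simple.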

Instant messaging applications have already taken the place of traditional Short Messaging Service (SMS) and Multimedia Messaging Service (MMS) because of their popularity and the ease of use they provide. Users of instant messaging applications are able to send both text and audio messages, as well as different types of attachments such as photos, videos, and contact information, to their contacts in real time. Because instant messaging applications use the Internet instead of Short Message Service Technical Reali...

Full Text Available Organizations are increasingly implementing multiple Management System Standards (MSSs) and considering managing the related Management Systems (MSs) as a single system. The aim of this paper is to analyze whether the methods used to integrate standardized MSs condition the level of integration of those MSs. A descriptive methodology has been applied to 343 Spanish organizations registered to, at least, ISO 9001 and ISO 14001. Seven groups of these organizations, using different combinations of methods, have been analyzed. Results show that these organizations have a high level of integration of their MSs. The most common method used was the process map. Organizations using a combination of different methods achieve higher levels of integration than those using a single method. However, no evidence has been found to confirm a relationship between the method used and the integration level achieved.

This paper presents a mathematical method developed for investigating a class of systems of infinite-dimensional integral equations which have applications in statistical mechanics. Necessary and sufficient conditions are obtained for the uniqueness and bifurcation of the solution of this class of systems of equations. Problems of equilibrium statistical mechanics are considered on the basis of this method.
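An elementary illustration of the uniqueness side of such conditions: when the integral operator is a contraction, Picard iteration converges to the unique solution. The sketch below solves the toy Fredholm equation u(x) = 1 + λ∫₀¹ xy u(y) dy, whose exact solution is u(x) = 1 + λcx with c = (1/2)/(1 - λ/3); the equation is an invented example, not one from the paper:

```python
def solve_fredholm(lam=1.0, n=200, iters=100):
    """Picard iteration u_{k+1}(x) = 1 + lam * x * Int_0^1 y*u_k(y) dy,
    with the integral evaluated by the trapezoidal rule on a uniform grid."""
    h = 1.0 / n
    xs = [i * h for i in range(n + 1)]
    u = [1.0] * (n + 1)                      # initial guess u_0 = 1
    for _ in range(iters):
        c = sum((0.5 if i in (0, n) else 1.0) * xs[i] * u[i]
                for i in range(n + 1)) * h   # c_k = Int y*u_k(y) dy
        u = [1.0 + lam * c * x for x in xs]
    return xs, u

xs, u = solve_fredholm()
print(u[-1])   # u(1) should approach 1 + 0.75 = 1.75 for lam = 1
```

For λ = 1 the iteration on c contracts with factor 1/3 toward c = 0.75; beyond the contraction regime (here λ approaching 3 and above) uniqueness fails, which is the bifurcation phenomenon the paper characterizes in far greater generality.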

Full Text Available With the development of the construction industry, the era of construction data is approaching. BIM (building information modeling) has been widely used to meet the actual needs of the construction industry as building-information software, and different software packages show different maturity in practical application. Through the expert scoring method, the application maturity indices of BIM technology are scored, and an evaluation index system is established. The PCA-Q clustering algorithm is used to classify the evaluation index system, and a comprehensive evaluation of the classified index system is made in combination with the Choquet integral, to achieve a reasonable assessment of the BIM technology application maturity index. This lays a foundation for the future development of BIM technology in various fields of construction, and at the same time provides direction for the comprehensive application of BIM technology.
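The Choquet-integral aggregation step can be sketched as follows; the index groups, scores, and capacity (fuzzy measure) values below are invented for illustration:

```python
def choquet(scores, capacity):
    """Discrete Choquet integral.
    scores: {criterion: value in [0, 1]}; capacity: {frozenset: weight},
    needed only for the nested 'upper' sets of the sorted criteria."""
    items = sorted(scores, key=scores.get)          # ascending by score
    total, prev = 0.0, 0.0
    for i, c in enumerate(items):
        upper = frozenset(items[i:])                # criteria scoring >= scores[c]
        total += (scores[c] - prev) * capacity[upper]
        prev = scores[c]
    return total

# three index groups for a hypothetical BIM tool, with a capacity that
# rewards the interaction of modeling with collaboration
scores = {"modeling": 0.9, "collaboration": 0.6, "analysis": 0.4}
capacity = {
    frozenset(["modeling", "collaboration", "analysis"]): 1.0,
    frozenset(["modeling", "collaboration"]): 0.8,
    frozenset(["modeling"]): 0.5,
}
print(choquet(scores, capacity))  # 0.4*1.0 + 0.2*0.8 + 0.3*0.5 = 0.71
```

Unlike a weighted average, the capacity lets the evaluation express that criteria reinforce or substitute for one another, which is why it suits aggregating correlated maturity indices.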

Rapid proliferation of mobile technologies in social and healthcare spaces creates an opportunity for advancement in research and clinical practice. The application of mobile, personalized technology in healthcare, referred to as mHealth, has not yet become routine in toxicology. However, key features of our practice environment, such as the frequent need for remote evaluation, unreliable historical data from patients, and sensitive subject matter, make mHealth tools appealing solutions in comparison to traditional methods that collect retrospective or indirect data. This manuscript describes the features, uses, and costs associated with several common sectors of mHealth research, including wearable biosensors, ingestible biosensors, head-mounted devices, and social media applications. The benefits and novel challenges associated with the study and use of these applications are then discussed. Finally, opportunities for further research and integration are explored, with a particular focus on toxicology-based applications.

The computer graphics aspect of the Engineering Design Integration (EDIN) system and its application to design problems are discussed. Three basic types of computer graphics may be used with the EDIN system for the evaluation of aerospace vehicle preliminary designs: offline graphics systems using vellum-inking or photographic processes, online graphics systems characterized by directly coupled low-cost storage tube terminals with limited interactive capabilities, and a minicomputer-based refresh terminal offering highly interactive capabilities. The offline systems are characterized by high quality (resolution better than 0.254 mm) and slow turnaround (one to four days). The online systems are characterized by low cost, instant visualization of the computer results, slow line speed (300 baud), poor hard copy, and early limitations on vector graphic input capabilities. The recent acquisition of the Adage 330 Graphic Display system has greatly enhanced the potential for interactive computer-aided design.

The gaps in knowledge and the existing challenges in precisely describing land surface processes make it critical to represent massive soil moisture data visually and to mine the data for further research. This article introduces a comprehensive soil moisture assimilation data analysis system, built with C#, IDL, ArcSDE, Visual Studio 2008 and SQL Server 2005. The system provides integrated services for the management, efficient graphics visualization and analysis of land surface data assimilation. The system not only improves the efficiency of data assimilation management, but also comprehensively integrates the data processing and analysis tools into a GIS development environment, so that analyzing the soil moisture assimilation data and performing GIS spatial analysis can be realized in the same system. The system provides basic GIS map functions, massive data processing, soil moisture product analysis, etc. Besides, it takes full advantage of the ArcSDE spatial data engine to efficiently manage, retrieve and store all kinds of data. In the system, characteristics of the temporal and spatial patterns of soil moisture are plotted. By analyzing the soil moisture impact factors, it is possible to obtain the correlation coefficients between the soil moisture values and each single impact factor. Daily and monthly comparative analyses of soil moisture products among observations, simulation results and assimilations can be made in the system to display the different trends of these products. Furthermore, a soil moisture map production function is implemented for business applications.
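The correlation analysis mentioned above amounts to computing a Pearson coefficient between the soil moisture series and each impact factor. A minimal sketch with made-up numbers (the real system presumably runs this over assimilated gridded data):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# invented daily series: volumetric soil moisture vs. precipitation (mm)
soil_moisture = [0.21, 0.24, 0.30, 0.28, 0.35, 0.33]
precipitation = [5.0, 9.0, 22.0, 18.0, 34.0, 30.0]
print(round(pearson(soil_moisture, precipitation), 3))  # strongly positive
```

Repeating this for each candidate factor (precipitation, temperature, vegetation index, and so on) yields the per-factor coefficients the system reports.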

Continuous medication monitoring is essential for successful management of heart failure patients. Experience with the recently established heart failure network HerzMobil Tirol shows that medication monitoring limited to heart-failure-specific drugs can be insufficient, in particular for general practitioners. Additionally, some patients are confused by monitoring only part of their prescribed drugs. Sometimes medication is changed without informing the responsible physician. As part of the upcoming Austrian electronic health record system ELGA, the eMedication system will collect prescription and dispensing data for drugs, and these data will be accessible to authorized healthcare professionals on an inter-institutional level. We therefore propose two concepts for integrated medication management in mHealth applications that combine ELGA eMedication and closed-loop mHealth-based telemonitoring. As a next step, we will implement these concepts and analyze, in a feasibility study, their usability and practicability as well as legal aspects with respect to automatic data transfer from the ELGA eMedication service.

Following a comprehensive literature review, this paper examines geohazard analysis using remote sensing information. It compares the basic types and methods of change detection, explores the basic principles of the common methods, and analyzes the characteristics and shortcomings of the methods commonly used in geohazard applications. Using the JieGu earthquake as a case study, this paper proposes a geohazard change detection method integrating RS and GIS. When comparing pre-earthquake and post-earthquake remote sensing images from different phases, it is crucial to set an appropriate threshold. The method adopts a self-adapting threshold determination algorithm: we select a training region, obtained after pixel information comparison, and set the threshold value that best separates changed pixels. We then apply the threshold value to the entire image, which maximizes the change detection accuracy. Finally, we output the result to the GIS system for change analysis. The experimental results show that this method of geohazard change detection, integrating remote sensing and GIS information, achieves higher accuracy and has obvious advantages over traditional methods.
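The paper's self-adapting threshold is trained on a selected region; as a generic stand-in, the sketch below thresholds a difference image with Otsu's criterion (maximizing between-class variance) and flags changed pixels. The pixel values are invented:

```python
def otsu_threshold(values, bins=64):
    """Pick a threshold for a 1D set of difference values by maximizing
    the between-class variance of the two resulting classes (Otsu)."""
    lo, hi = min(values), max(values)
    hist = [0] * bins
    for v in values:
        hist[min(int((v - lo) / (hi - lo) * bins), bins - 1)] += 1
    total = len(values)
    best_t, best_var = 0, -1.0
    for t in range(1, bins):
        w0 = sum(hist[:t])
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0 = sum(i * hist[i] for i in range(t)) / w0
        m1 = sum(i * hist[i] for i in range(t, bins)) / w1
        var = w0 * w1 * (m0 - m1) ** 2           # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return lo + best_t / bins * (hi - lo)

# |post - pre| differences: mostly unchanged pixels plus a changed patch
diff = [0.05, 0.02, 0.08, 0.04, 0.06, 0.9, 0.85, 0.95, 0.03, 0.88]
thr = otsu_threshold(diff)
changed = [d > thr for d in diff]
print(sum(changed))  # 4 pixels flagged as changed
```

The flagged mask would then be exported to the GIS layer for spatial change analysis, as the paper describes.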

Estimating the stress in reservoirs and their surroundings prior to production is a key issue for reservoir management planning. In this study, we propose an integrated inverse method to estimate such an initial stress state. The 3D stress state is constructed with the displacement-based finite element method, assuming linear isotropic elasticity and small perturbations in the current geometry of the geological structures. The Neumann boundary conditions are defined as piecewise linear functions of depth. The discontinuous functions are determined with the CMA-ES (Covariance Matrix Adaptation Evolution Strategy) optimization algorithm to fit wellbore stress data deduced from leak-off tests and breakouts. The disregard of the geological history and the simplified rheological assumptions mean that only the stress field that is statically admissible and matches the wellbore data should be exploited. The spatial domain of validity of this statement is assessed by comparing the stress estimates for a synthetic folded structure of finite amplitude with a history constructed assuming a viscous response.
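Full CMA-ES adapts a complete covariance matrix for its search distribution; the deliberately stripped-down (μ, λ) evolution strategy below only conveys the derivative-free fitting idea, on an invented one-segment linear stress-versus-depth trend rather than the paper's piecewise-linear boundary conditions:

```python
import random

random.seed(1)
depths = [0.5, 1.0, 1.5, 2.0, 2.5]            # km (invented "wellbore" data)
stress = [23.0 * z + 1.0 for z in depths]     # synthetic truth: a = 23, b = 1

def misfit(p):
    """Sum of squared residuals of the trend sigma(z) = a*z + b."""
    a, b = p
    return sum((a * z + b - s) ** 2 for z, s in zip(depths, stress))

mean, step = [0.0, 0.0], 4.0                  # search distribution: mean, spread
for gen in range(600):
    pop = [[m + step * random.gauss(0, 1) for m in mean] for _ in range(20)]
    pop.sort(key=misfit)                      # rank candidates by data misfit
    mean = [sum(p[i] for p in pop[:5]) / 5 for i in range(2)]  # recombine best 5
    step *= 0.99                              # crude fixed step-size schedule
print([round(m, 2) for m in mean])  # approaches [23.0, 1.0]
```

CMA-ES replaces the fixed step schedule with self-adapted step sizes and a learned covariance, which is what makes it robust on the ill-conditioned, derivative-free misfit landscapes of such inverse problems.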

We present an improved form of the integration technique known as NDIM (negative-dimensional integration method), which is a powerful tool in the analytical evaluation of Feynman diagrams. Using this technique we study a φ³+φ⁴ theory in D = 4 - 2ε dimensions, considering generic topologies of L loops and E independent external momenta, and where the propagator powers are arbitrary. The method transforms the Schwinger parametric integral associated with the diagram into a multiple series expansion, whose main characteristic is that the argument contains several Kronecker deltas which appear naturally in the application of the method, and which we call the diagram presolution. The optimization we present here consists of a procedure that minimizes the series multiplicity through appropriate factorizations in the multinomials that appear in the parametric integral, and which maximizes the number of Kronecker deltas generated in the process. The solutions are presented in terms of generalized hypergeometric functions, obtained once the Kronecker deltas have been used in the series. Although the technique is general, we apply it to cases with 2 or 3 different energy scales (masses or kinematic variables associated with the external momenta), obtaining solutions in terms of a finite sum of generalized hypergeometric series of 1 and 2 variables, respectively, each of them expressible as ratios of the different energy scales that characterize the topology. The main result is a method capable of solving Feynman integrals, expressing the solutions as hypergeometric series of multiplicity (n-1), where n is the number of energy scales present in the diagram.
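The Kronecker deltas of the presolution arise from expanding the multinomials of the Schwinger parametric integrand. For an integer power ν the relevant expansion is the standard multinomial identity (NDIM then continues such sums analytically to arbitrary propagator powers):

```latex
(x_1 + x_2 + \cdots + x_m)^{\nu}
  = \sum_{n_1,\dots,n_m \ge 0}
    \delta_{n_1+\cdots+n_m,\,\nu}\,
    \frac{\nu!}{n_1!\,n_2!\cdots n_m!}\,
    x_1^{n_1} x_2^{n_2} \cdots x_m^{n_m} .
```

Each multinomial factor contributes one delta, so factorizing the parametric polynomial into as many multinomial factors as possible maximizes the number of deltas available to eliminate summation indices, which is precisely what minimizes the residual series multiplicity.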

Past civilian nuclear ships, the N.S. Savannah (80 MWth), Otto Hahn (38 MWth) and Mutsu (36 MWth), experienced stable operations under various sea conditions, proving that the reactors were stable and suitable as ship power sources. Russian nuclear icebreakers such as the Lenin (90 MWth x 2) and Arktika (150 MWth x 2) showed stable operations under severe conditions during navigation on the Arctic Sea. These reactor systems, however, should be made even more efficient, compact, safe and long-lived, because support from land may not be available at sea. In order to meet these requirements, a compact, simple, safe and innovative integral system named the Naval Application Vessel Integral System (NAVIS) is being designed with such novel concepts as a primary liquid metal coolant, a secondary supercritical carbon dioxide (SCO2) coolant, an emergency reactor cooling system, a safety containment and so on. NAVIS is powered by the Battery Optimized Reactor Integral System (BORIS). BORIS, an ultra-small, ultra-long-life, versatile-purpose, fast-spectrum reactor, is being developed for multi-purpose applications such as naval power sources, electric power generation in remote areas, seawater desalination, and district heating. NAVIS aims to satisfy the special environment at sea with BORIS using a lead (Pb) coolant in the primary system. NAVIS improves economic efficiency by resorting to the SCO2 Brayton cycle for the secondary system. BORIS is operated by natural circulation of Pb without needing pumps. The reactor power is autonomously controlled by load-following operation without an active reactivity control system, whereas a B4C-based shutdown control rod is equipped for emergency conditions. SCO2 promises a high power conversion efficiency of the recompression Brayton cycle due to its excellent compressibility, reducing the compression work at the bottom of the cycle, and to a higher density than helium or steam, decreasing the component size. Therefore, the SCO2 Brayton

This book brings together developments in spatial analysis techniques, including spatial statistics, econometrics, and spatial visualization, and applications to fields such as regional studies, transportation and land use, population and health.

Full Text Available This paper suggests the use of the conditional probability integral transformation (CPIT) method as a goodness-of-fit (GOF) technique in the field of accelerated life testing (ALT), specifically for validating the underlying distributional assumption in the accelerated failure time (AFT) model. The method is based on transforming the data into independent and identically distributed (i.i.d.) Uniform(0, 1) random variables and then applying the modified Watson statistic to test the uniformity of the transformed random variables. This technique is used to validate each of the exponential, Weibull and lognormal distributional assumptions in the AFT model under constant stress and complete sampling. The performance of the CPIT method is investigated via a simulation study. It is concluded that the method performs well in the case of the exponential and lognormal distributions. Finally, a real-life example is provided to illustrate the application of the proposed procedure.
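The core of the approach can be sketched without the conditional refinement or the modified Watson statistic that the paper actually uses: transform the data by the hypothesized CDF and measure how far the transformed values are from Uniform(0, 1). The idealized exponential "sample" and the plain Kolmogorov-style distance below are simplifications for illustration:

```python
import math

def ks_uniform_distance(u):
    """Max distance between the empirical CDF of u and the Uniform(0,1) CDF."""
    u = sorted(u)
    n = len(u)
    return max(max((i + 1) / n - ui, ui - i / n) for i, ui in enumerate(u))

lam = 2.0
n = 200
# idealized Exp(lam) sample: the quantiles F^{-1}((i+0.5)/n)
x = [-math.log(1 - (i + 0.5) / n) / lam for i in range(n)]
# probability integral transform with the hypothesized CDF F(x) = 1 - e^{-lam x}
u = [1 - math.exp(-lam * xi) for xi in x]
print(round(ks_uniform_distance(u), 4))  # small: consistent with uniformity
```

If the hypothesized distribution were wrong (say, transforming lognormal data with an exponential CDF), the transformed values would bunch up and the distance would be large, which is what the uniformity test detects.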

In this book, we study theoretical and practical aspects of computing methods for mathematical modelling of nonlinear systems. A number of computing techniques are considered, such as methods of operator approximation with any given accuracy; operator interpolation techniques including a non-Lagrange interpolation; methods of system representation subject to constraints associated with concepts of causality, memory and stationarity; methods of system representation with an accuracy that is the best within a given class of models; methods of covariance matrix estimation; methods for low-rank mat

The GLobal Integrated Design Environment (GLIDE) is a client-server software application purpose-built to mitigate issues associated with real-time data sharing in concurrent engineering environments and to facilitate discipline-to-discipline interaction between multiple engineers and researchers. GLIDE is implemented in multiple programming languages utilizing standardized web protocols to enable secure parameter data sharing between engineers and researchers across the Internet in closed and/or widely distributed working environments. A well-defined HyperText Transfer Protocol (HTTP) based Application Programming Interface (API) to the GLIDE client/server environment enables users to interact with GLIDE, and each other, within common and familiar tools. One such common tool, Microsoft Excel (Microsoft Corporation), paired with its add-in API for GLIDE, is discussed in this paper. The top-level examples given demonstrate how this interface improves the efficiency of the design process of a concurrent engineering study while reducing potential errors associated with manually sharing information between study participants.

A two-channel synchronous receiver circuit for optical instrumentation applications has been designed and implemented. Each receiver channel comprises, among other blocks, a transimpedance preamplifier, voltage amplifiers, programmable feedback networks, and a synchronous detector. The function of the channel is to extract the slowly varying, information-carrying signal from a modulated carrier that is accompanied by relatively high levels of noise. As a whole, the channel can be characterized as a narrow-band filter around the frequency of interest. Medical applications include arterial oxygen saturation (SaO2) measurement and dental pulp vitality measurement. In both cases, two optical signals with different frequencies are received by a single photodiode. The measured performance of the optical receiver shows its suitability for the above-mentioned applications. Therefore, the circuit will be used in a small-sized, battery-operated sensor prototype to test the sensing method in a clinical environment. Other applications include the signal processing of optical position-sensitive detectors. A summary of measured receiver channel performance: input-referred noise current spectral density between 0.20 and 0.30 pA/√Hz at all relevant frequencies, total programmable channel transimpedance between 7 MΩ and 500 MΩ, lower −3 dB frequency of at least 50 Hz, upper −3 dB frequency of 40 kHz, maximum voltage swing at the demodulator output of 2.4 V.

Radionuclide methods are among the most modern methods of functional diagnostics of diseases of the cardiovascular system, and they require the use of mathematical methods for processing and analyzing the data obtained during the investigation. The study is carried out by means of single-photon emission computed tomography (SPECT). Mathematical methods and software for SPECT data processing have been developed; this software allows physiologically meaningful indicators to be defined for cardiac studies.

In this paper, we introduce a new family of 10-step linear multistep methods for the integration of orbital problems. The new methods are constructed by adopting a new methodology that improves the phase-lag characteristics by vanishing both the phase-lag function and its first derivative at a specific frequency. The efficiency of the new family of methods is demonstrated via error analysis and numerical applications.
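The phase-lag conditions referred to here can be stated in the standard framework for such methods; the following is a hedged sketch of the usual definitions, not the paper's specific coefficients.

```latex
% For the scalar test equation y'' = -\omega^2 y with step size h and
% v = \omega h, a symmetric multistep method yields a principal
% characteristic root whose argument is a phase function \theta(v).
% The phase lag is the difference
\phi(v) = v - \theta(v).
% A phase-fitted method at frequency \omega_0 (with v_0 = \omega_0 h)
% imposes the two conditions
\phi(v_0) = 0, \qquad \frac{d\phi}{dv}(v_0) = 0,
% so that both the phase-lag function and its first derivative vanish
% at the fitted frequency, as described in the abstract.
```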

Preventive safety functions help drivers avoid or mitigate accidents. No quantitative methods have been available to evaluate the safety impact of these systems. This paper describes a framework for the assessment of preventive and active safety functions, which integrates procedures for technical

"Total System Performance Assessment-License Application (TSPA-LA) Methods and Approach" provides the top-level method and approach for conducting the TSPA-LA model development and analyses. The method and approach are responsive to the criteria set forth in the Total System Performance Assessment Integration (TSPAI) Key Technical Issues (KTIs) identified in agreements with the U.S. Nuclear Regulatory Commission, the "Yucca Mountain Review Plan" (YMRP), "Final Report" (NRC 2003 [163274]), and the NRC final rule 10 CFR Part 63 (NRC 2002 [156605]). This introductory section provides an overview of the TSPA-LA, the projected TSPA-LA documentation structure, and the goals of the document. It also provides a brief discussion of the regulatory framework, the approach to risk management in the development and analysis of the model, and the overall organization of the document. The section closes with some important conventions that are used in this document.

CLIL (Content and Language Integrated Learning) is a modern form of interdisciplinary teaching linking a teaching subject with language teaching. The work builds on the thesis in which I created Science worksheets in the English language. The aim of the thesis was to create, publish and verify in practice a comprehensive CD with interactive materials for teaching zoology and supplementary materials for teachers introducing CLIL into the classroom. The teaching materials include many pictures, gam...

Full Text Available Augmented reality has become a useful tool in many areas, from space exploration to military applications. Although the underlying theoretical principles have been well known for almost a decade, augmented reality is used almost exclusively in high-budget solutions with special hardware. However, in the last few years we have seen the rising popularity of many projects focused on deploying augmented reality on different mobile devices. Our article is aimed at developers who are considering development of an augmented reality application for the mainstream market. Such developers will be forced to keep the application price, and therefore also the development price, at a reasonable level. Using an existing image-processing software library can bring a significant cut in development costs. The theoretical part of the article presents an overview of the structure of an augmented reality application. Further, an approach for selecting an appropriate library, as well as a review of the existing software libraries in this area, is described. The last part of the article outlines our implementation of key parts of an augmented reality application using the OpenCV library.
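A geometric core of such applications is estimating the planar homography that maps a known marker onto its image, which lets a virtual overlay be drawn in the right place. OpenCV provides this directly, but a dependency-free sketch of the underlying linear algebra (direct linear transform from four point correspondences) is shown below; all point values are illustrative.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for an n x n system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def find_homography(src, dst):
    """3x3 homography H (with h9 fixed to 1) mapping four src points to dst."""
    A, b = [], []
    for (x, y), (X, Y) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -X * x, -X * y]); b.append(X)
        A.append([0, 0, 0, x, y, 1, -Y * x, -Y * y]); b.append(Y)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def project(H, x, y):
    """Apply homography H to point (x, y) in homogeneous coordinates."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

In a real application the four correspondences would come from marker detection in the camera frame, and `project` would place the overlay's corners.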

A front-end application-specific integrated circuit (ASIC) with a wide dynamic range amplifier (WDAMP) is developed to read out signals from a photo-sensor such as a photodiode. The WDAMP ASIC consists of a charge-sensitive preamplifier, four wave-shaping circuits with different amplification factors, and a Wilkinson-type analog-to-digital converter (ADC). To realize a wider range, the integrating capacitor in the preamplifier can be switched between 4 pF and 16 pF by a two-bit switch. The output of the preamplifier is shared by the four wave-shaping circuits, with gains of 1, 4, 16 and 64, to match the input range of the ADC. A 0.25-μm CMOS process (UMC Electronics Co., Ltd.) is used to fabricate the four-channel ASIC. A dynamic range of four orders of magnitude is achieved, with a maximum range over 20 pC and a noise performance of 0.46 fC + 6.4×10⁻⁴ fC/pF.

Full Text Available Quality of service (QoS is an important performance indicator for Web applications and bandwidth is a key factor affecting QoS. Current methods use network protocols or ports to schedule bandwidth, which require tedious manual configurations or modifications of the underlying network. Some applications use dynamic ports and the traditional port-based bandwidth control methods cannot deal with them. A new QoS control method based on local bandwidth scheduling is proposed, which can schedule bandwidth at application level in a user-transparent way and it does not require tedious manual configurations. Experimental results indicate that the new method can effectively improve the QoS for applications, and it can be easily integrated into current Web applications without the need to modify the underlying network.
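The paper does not detail its scheduling algorithm; a common way to enforce per-application bandwidth limits in user space is a token bucket, sketched below under that assumption. The clock is passed in explicitly so the behavior is deterministic and easy to test; a production version would use the system clock.

```python
class TokenBucket:
    """Application-level bandwidth limiter: a send of `size` bytes is allowed
    only if enough byte credits (tokens) have accumulated at the given rate."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps          # refill rate, bytes per second
        self.capacity = burst_bytes   # maximum accumulated credit
        self.tokens = burst_bytes     # start with a full bucket
        self.last = 0.0               # timestamp of the previous call

    def allow(self, size, now):
        """Return True and consume tokens if the send fits the budget."""
        elapsed = now - self.last
        self.last = now
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False
```

An application-level scheduler like the one described could wrap each Web application's socket writes with such a bucket, so no protocol or port configuration is needed.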

Full Text Available We expand the theory of probability tomography to the integration of different geophysical datasets. The aim of the new method is to improve the information quality using a conjoint occurrence probability function designed to highlight the existence of common sources of anomalies. The new method is tested on gravity, magnetic and self-potential datasets collected in the volcanic area of Mt. Vesuvius (Naples), and on gravity and dipole geoelectrical datasets collected in the volcanic area of Mt. Etna (Sicily). The application demonstrates that, from a probabilistic point of view, the integrated analysis can delineate the signature of some important volcanic targets better than the analysis of the tomographic image of each dataset considered separately.

Cancer subtype discovery is the first step toward delivering personalized medicine to cancer patients. With the accumulation of massive multi-level omics datasets and established biological knowledge databases, omics data integration incorporating rich existing biological knowledge is essential for deciphering the biological mechanisms behind complex diseases. In this manuscript, we propose an integrative sparse K-means (is-Kmeans) approach to discover disease subtypes with the guidance of prior biological knowledge via sparse overlapping group lasso. An algorithm using the alternating direction method of multipliers (ADMM) is applied for fast optimization. Simulations and three real applications in breast cancer and leukemia are used to compare is-Kmeans with existing methods and demonstrate its superior clustering accuracy, feature selection, functional annotation of detected molecular features, and computing efficiency.
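The sparse overlapping group-lasso penalty and its ADMM updates are beyond a short sketch, but the base clustering step that is-Kmeans builds on is ordinary K-means (Lloyd's algorithm). A minimal, dependency-free sketch on 2-D points follows; all data are illustrative, and the penalty term the paper adds is noted but not implemented.

```python
def kmeans(points, centers, iters=20):
    """Plain Lloyd's K-means on 2-D points. is-Kmeans augments this
    objective with a sparse overlapping group-lasso penalty on feature
    weights (not shown here)."""
    for _ in range(iters):
        # Assignment step: attach each point to its nearest center.
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)),
                    key=lambda j: (p[0] - centers[j][0]) ** 2
                                + (p[1] - centers[j][1]) ** 2)
            clusters[i].append(p)
        # Update step: move each center to its cluster's mean.
        centers = [(sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
                   if c else ctr
                   for c, ctr in zip(clusters, centers)]
    return centers
```

In the omics setting each "point" would be a patient's high-dimensional molecular profile, and the sparsity penalty selects which features drive the subtype assignment.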

With the development of bioinformatics, high-throughput genomic technologies have enabled biology to enter the era of big data [1]. Bioinformatics is an interdisciplinary field covering the acquisition, management, analysis, interpretation and application of biological information; it derives from the Human Genome Project. The field of machine learning, which aims to develop computer algorithms that improve with experience, holds promise to enable computers to assist humans in the analysis of large, complex data sets [2]. This paper analyzes and compares various machine learning algorithms and their applications in bioinformatics.

The application and validation of several computational aerodynamic methods in the design and analysis of transport aircraft is established. An assessment is made concerning more recently developed methods that solve three-dimensional transonic flow and boundary layers on wings. Capabilities of subsonic aerodynamic methods are demonstrated by several design and analysis efforts. Among the examples cited are the B747 Space Shuttle Carrier Aircraft analysis, nacelle integration for transport aircraft, and winglet optimization. The accuracy and applicability of a new three-dimensional viscous transonic method is demonstrated by comparison of computed results to experimental data

Full Text Available Bayesian statistical methods are facing a rapidly growing level of interest and acceptance in the field of health economics. The reasons for this success are probably to be found in the theoretical foundations of the discipline, which make these techniques more appealing for decision analysis. To this should be added modern IT progress, which has produced several flexible and powerful statistical software frameworks. Among them, probably one of the most notable is the BUGS language project and its standalone application for MS Windows, WinBUGS. The scope of this paper is to introduce the subject and to show some interesting applications of WinBUGS in developing complex economic models based on Markov chains. The advantages of this approach reside in the elegance of the code produced and in its capability to easily develop probabilistic simulations. Moreover, an example of the integration of Bayesian inference models in a Markov model is shown. This last feature lets the analyst conduct statistical analyses on the available sources of evidence and exploit them directly as inputs in the economic model.
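As an illustration of the kind of Markov model described here, the sketch below propagates a patient cohort through a hypothetical three-state (Well/Sick/Dead) model. The transition probabilities are invented for the example; in the paper's setting they would come from Bayesian inference in WinBUGS rather than being fixed constants.

```python
def run_markov(p, start, cycles):
    """Propagate a cohort distribution through a Markov model.
    p[i][j] is the per-cycle probability of moving from state i to state j."""
    dist = list(start)
    for _ in range(cycles):
        dist = [sum(dist[i] * p[i][j] for i in range(len(p)))
                for j in range(len(p))]
    return dist

# Illustrative transition matrix over the states (Well, Sick, Dead);
# Dead is absorbing. Each row sums to 1.
P = [[0.9, 0.08, 0.02],
     [0.1, 0.70, 0.20],
     [0.0, 0.00, 1.00]]

# Everyone starts Well; follow the cohort for 10 cycles.
cohort = run_markov(P, [1.0, 0.0, 0.0], cycles=10)
```

Costs and utilities attached to each state, accumulated over the cycle-by-cycle distributions, would then give the economic outputs of the model.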

We present two important results for the kinetic theory and numerical simulation of warm plasmas: 1) We provide a metriplectic formulation of collisional electrostatic gyrokinetics that is fully consistent with the First and Second Laws of Thermodynamics. 2) We provide a metriplectic temporal and velocity-space discretization for the particle phase-space Landau collision integral that satisfies the conservation of energy, momentum, and particle densities to machine precision, and that guarantees the existence of a numerical H-theorem. The properties are demonstrated algebraically. These two results have important implications: 1) Numerical methods addressing the Vlasov-Maxwell-Landau system of equations, or its reduced gyrokinetic versions, should start from a metriplectic formulation to preserve the fundamental physical principles also at the discrete level. 2) The plasma physics community should search for a metriplectic reduction theory that would serve a similar purpose as the existing Lagrangian and Hamiltonian reduction theories do in gyrokinetics. The discovery of a metriplectic formulation of collisional electrostatic gyrokinetics is strong evidence in favor of such a theory and, if uncovered, the theory would be invaluable in constructing reduced plasma models. Supported by U.S. DOE Contract Nos. DE-AC02-09-CH11466 (EH) and DE-AC05-06OR23100 (JWB) and by European Union's Horizon 2020 research and innovation Grant No. 708124 (MK).

The Saint-Venant torsion problem for homogeneous shafts with simply or multiply connected regions has received a great deal of attention in the past. However, because of the mathematical difficulties inherent in the problem, very few problems of torsion of shafts with composite cross sections have been solved analytically. Muskhelishvili (1963) studied the torsion problem for shafts with cross sections having several solid inclusions surrounded by an elastic material. The problems of a circular shaft reinforced by a non-concentric round inclusion and of a rectangular shaft composed of two rectangular parts made of different materials were solved. In this paper, a boundary integral equation method, which can be used to solve problems more complex than those considered by Katsikadelis et al., is developed. A square shaft with two dissimilar rectangular parts and a square shaft with a square inclusion are solved, and the results are compared with those given in the reference cited above. Finally, a square shaft composed of two rectangular parts with a circular inclusion is solved. (orig./GL)
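For reference, the boundary-value problem such a boundary integral method discretizes can be stated with the Prandtl stress function. This is the standard textbook form for a composite, simply connected section, not the paper's exact notation; multiply connected and composite sections require additional interface and multi-valuedness conditions.

```latex
% Saint-Venant torsion with Prandtl stress function \psi_i in each
% material region \Omega_i (shear modulus G_i, twist rate \theta):
\nabla^2 \psi_i = -2\, G_i\, \theta \quad \text{in } \Omega_i ,
\qquad \psi = 0 \ \text{on the outer boundary},
% with \psi continuous across each material interface (traction
% continuity), together with continuity of the warping displacement.
% The torque carried by the section follows from
T = 2 \sum_i \int_{\Omega_i} \psi_i \, dA .
```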

Reservoir rock typing is the most important part of all reservoir modelling. For integrated reservoir rock typing, static and dynamic properties need to be combined, but sometimes these two are incompatible. The failure is due to a misunderstanding of the crucial parameters that control the dynamic behaviour of the reservoir rock, and thus to selecting inappropriate methods for defining static rock types. In this study, rock types were defined by combining SCAL data with rock properties, particularly rock fabric and pore types. First, air-displacing-water capillary pressure curves were classified because they are representative of fluid saturation and behaviour under capillary forces. Next, the most important rock properties controlling fluid flow and saturation behaviour (rock fabric and pore types) were combined with the defined classes. Corresponding petrophysical properties were also attributed to the reservoir rock types and, eventually, the defined rock types were compared with relative permeability curves. This study focused on demonstrating the importance of the pore system, specifically pore types, in fluid saturation and entrapment in the reservoir rock. The most common tests in static rock typing, such as electrofacies analysis and porosity–permeability correlation, were carried out, and the results indicate that these are not appropriate approaches for reservoir rock typing in carbonate reservoirs with a complicated pore system. (paper)

Computation rightly occupies a central role in modern science. Datasets are enormous and the processing implications of some algorithms are equally staggering. Given the continuing difficulty of quantifying the results of complex computations, it is increasingly important to understand computation's role in the essentially Popperian scientific method. In this paper, some of the problems with computation, for example the long-term unquantifiable presence of undiscovered defects, problems with programming languages, and process issues, are explored with numerous examples. One of the aims of the paper is to understand the implications of trying to produce high-integrity software and the limitations which still exist. Unfortunately, Computer Science itself suffers from an inability to be suitably critical of its practices and has operated in a largely measurement-free vacuum since its earliest days. Within computer science itself this has not been so damaging, in that it simply leads to unconstrained creativity and a rapid turnover of new technologies. In the applied sciences, however, which have to depend on computational results, such unquantifiability significantly undermines trust. It is time this particular demon was put to rest. (author)

Spectral emissivity is a critical thermophysical material property for thermal design and radiation thermometry. A prototype instrument based upon an integral blackbody method was developed to measure a material's spectral emissivity above 1000 °C. The system was implemented with an optimized commercial variable-high-temperature blackbody, a high-speed linear actuator, a linear pyrometer, and an in-house designed synchronization circuit. A sample was placed in a crucible at the bottom of the blackbody furnace, so that the sample and the tube formed a simulated blackbody with an effective total emissivity greater than 0.985. During the measurement, the sample was pushed to the end opening of the tube by a graphite rod actuated through a pneumatic cylinder. A linear pyrometer monitored the brightness temperature of the sample surface throughout the measurement, and the corresponding opto-converted voltage signal was fed to and recorded by a digital multimeter. A physical model was proposed to numerically evaluate the temperature drop during the process: the tube was discretized as several isothermal cylindrical rings, the temperature profile of the tube was measured, and the view factors between the sample and the rings were calculated and updated along the whole pushing process, yielding the actual surface temperature of the sample at the end opening. Using the measured voltage profile and the calculated true temperature, the spectral emissivity at this temperature was calculated.
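The final evaluation step described here amounts to comparing the sample's measured spectral radiance (or brightness temperature) with the Planck blackbody radiance at the sample's true surface temperature. A minimal sketch of that step follows, with an illustrative wavelength and temperatures; the instrument's calibration and view-factor corrections are not modeled.

```python
import math

C2 = 1.4388e-2         # second radiation constant c2, m*K
C1L = 1.191042972e-16  # first radiation constant for radiance 2*h*c^2, W*m^2/sr

def planck_radiance(lam, T):
    """Blackbody spectral radiance at wavelength lam (m), temperature T (K)."""
    return C1L / (lam ** 5 * (math.exp(C2 / (lam * T)) - 1.0))

def spectral_emissivity(lam, L_measured, T_true):
    """Emissivity = measured radiance / Planck radiance at the true temperature."""
    return L_measured / planck_radiance(lam, T_true)

def emissivity_from_brightness(lam, T_brightness, T_true):
    """Equivalent form using the pyrometer's brightness temperature
    (Wien approximation, valid for lam*T well below c2)."""
    return math.exp(-(C2 / lam) * (1.0 / T_brightness - 1.0 / T_true))
```

A brightness temperature below the true surface temperature yields an emissivity below one, as expected for a real (non-black) sample.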

Research is being conducted to address aging of the containment pressure boundary in light-water reactor plants. Objectives of this research are to (1) understand the significant factors relating to corrosion occurrence, efficacy of inspection, and structural capacity reduction of steel containments and of liners of concrete containments; (2) provide the U.S. Nuclear Regulatory Commission (USNRC) reviewers a means of establishing current structural capacity margins or estimating future residual structural capacity margins for steel containments and concrete containments as limited by liner integrity; and (3) provide recommendations, as appropriate, on information to be requested of licensees for guidance that could be utilized by USNRC reviewers in assessing the seriousness of reported incidences of containment degradation. Activities include development of a degradation assessment methodology; reviews of techniques and methods for inspection and repair of containment metallic pressure boundaries; evaluation of candidate techniques for inspection of inaccessible regions of containment metallic pressure boundaries; establishment of a methodology for reliability-based condition assessments of steel containments and liners; and fragility assessments of steel containments with localized corrosion

This report presents an interpolation method for the solution of the Boltzmann transport equation. The method is based on a flux synthesis technique using two reference-point solutions. The equation for the interpolated solution results in a Volterra integral equation, which is proved to have a unique solution. As an application of the present method, the tritium breeding ratio is calculated for a typical D-T fusion reactor system. The result is compared to that of a variational technique.

In recent years, the Delphi method has been widely applied in traditional Chinese medicine (TCM) clinical research. This article analyzes the current state of application of the Delphi method in TCM clinical research and discusses some problems presented in the choice of evaluation method, the classification of observation indexes and the selection of survey items. On the basis of its present application, the author analyzes the method with respect to questionnaire design, selection of experts, evaluation of observation indexes and selection of survey items. Furthermore, the author summarizes the steps in applying the Delphi method in TCM clinical research.

Full Text Available Proper control of distillation columns requires estimating some key variables that are challenging to measure online (such as compositions), which are usually estimated using inferential models. Commonly used inferential models include latent variable regression (LVR) techniques, such as principal component regression (PCR), partial least squares (PLS), and regularized canonical correlation analysis (RCCA). Unfortunately, measured practical data are usually contaminated with errors, which degrade the prediction abilities of inferential models. Therefore, noisy measurements need to be filtered to enhance the prediction accuracy of these models. Multiscale filtering has been shown to be a powerful feature extraction tool. In this work, the advantages of multiscale filtering are utilized to enhance the prediction accuracy of LVR models by developing an integrated multiscale LVR (IMSLVR) modeling algorithm that integrates modeling and feature extraction. The idea behind the IMSLVR modeling algorithm is to filter the process data at different decomposition levels, model the filtered data from each level, and then select the LVR model that optimizes a model selection criterion. The performance of the developed IMSLVR algorithm is illustrated using three examples: one using synthetic data, one using simulated distillation column data, and one using experimental packed-bed distillation column data. All examples clearly demonstrate the effectiveness of the IMSLVR algorithm over the conventional methods.
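The multiscale filtering at the heart of IMSLVR can be illustrated with a one-dimensional Haar wavelet decompose/threshold/reconstruct cycle. This is a generic sketch, not the authors' exact filter bank or model-selection loop; the signal length must be divisible by 2^levels.

```python
def haar_decompose(x):
    """One level of the orthonormal Haar transform: (approximation, detail)."""
    a = [(x[2 * i] + x[2 * i + 1]) / 2 ** 0.5 for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / 2 ** 0.5 for i in range(len(x) // 2)]
    return a, d

def haar_reconstruct(a, d):
    """Inverse of haar_decompose."""
    x = []
    for ai, di in zip(a, d):
        x += [(ai + di) / 2 ** 0.5, (ai - di) / 2 ** 0.5]
    return x

def multiscale_filter(x, levels, threshold):
    """Decompose to `levels`, zero small detail coefficients, reconstruct.
    With threshold = 0 the signal is reconstructed exactly."""
    details = []
    a = list(x)
    for _ in range(levels):
        a, d = haar_decompose(a)
        details.append([di if abs(di) > threshold else 0.0 for di in d])
    for d in reversed(details):
        a = haar_reconstruct(a, d)
    return a
```

In the IMSLVR scheme, an LVR model (PCR, PLS, or RCCA) would be fitted to the reconstruction at each level, and the level minimizing a model selection criterion would be retained.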

This book is addressed to persons who, without being professionals in applied mathematics, are often faced with the problem of numerically solving differential equations. In each of the first three chapters a definite class of methods is discussed for the solution of the initial value problem for ordinary differential equations: multistep methods; one-step methods; and piecewise perturbation methods. The fourth chapter is mainly focussed on the boundary value problems for linear second-order equations, with a section devoted to the Schroedinger equation. In the fifth chapter the eigenvalue problem for the radial Schroedinger equation is solved in several ways, with computer programs included. (Auth.)

Experiments were conducted to investigate method interferences, residual stability, regulated DBP formation, and a water chemistry model associated with the use of Dichlor & Trichlor in drinking water.

This paper addresses the advisability of adapting and applying management and integrated logistics engineering techniques to nuclear power plants instead of using more traditional maintenance management methods. It establishes a historical framework showing the origins of integrated approaches based on traditional logistic support concepts, their phases and the real results obtained in the aeronautic world where they originated. It reviews the application of the integrated management philosophy, and of logistic support and engineering analysis techniques regarding Availability, Reliability and Maintainability (ARM), and shows their interdependencies in different phases of the system's life (design, development and operation). It describes how these techniques are applied to nuclear power plant operation, their impact on plant availability and the optimisation of maintenance and replacement plans. The paper analyses the need for data (type and volume) which will have to be collected, and the different tools to manage such data. It examines the different CALS tools developed by EA for engineering and for logistic management. It also explains the possibility of using these tools for process and data operations over the Internet. Finally, it focuses on some simple examples of possible applications and how they would be used in the framework of Integrated Logistic Support (ILS). (Author)

The aim of process integration methods is to increase the efficiency of industrial processes by using pinch analysis combined with process design methods. In this context, appropriately integrated utilities offer promising opportunities to reduce energy consumption, operating costs and pollutant emissions. Energy integration methods are able to integrate any type of predefined utility, but so far there is no systematic approach to generate potential utility models based on their technology limit...

Implementing reactor protection systems (RPS) or other engineered safeguard systems with application-specific integrated circuits (ASICs) offers significant advantages over conventional analog or software-based RPSs. Conventional analog RPSs suffer from setpoint drift and large numbers of discrete analog electronics, hardware logic, and relays, which reduce reliability because of the large number of potential failures of components or interconnections. To resolve the problems associated with conventional discrete RPSs and proposed software-based RPS systems, a hybrid analog and digital RPS implemented with custom ASICs is proposed. The design of the ASIC RPS resembles a software-based RPS, but the programmable software portion of each channel is implemented in a fixed digital logic design, including any input variable computations. Setpoint drift is zero, as in proposed software systems, but verification and validation of the computations is made easier since the computational logic can be exhaustively tested. The functionality is guaranteed to be fixed because there can be no future changes to the ASIC without redesign and fabrication. Subtle error conditions caused by out-of-order or time-dependent evaluation of system variables against protection criteria are eliminated by implementing all evaluation computations in parallel for simultaneous results. On-chip redundancy within each RPS channel and continuous self-testing of all channels provide enhanced assurance that a particular channel is available and that faults are identified as soon as possible for corrective action. The use of highly integrated ASICs to implement channel electronics, rather than discrete electronics, greatly reduces the total number of components and interconnections in the RPS, further increasing system reliability. A prototype ASIC RPS channel design and the design environment used for ASIC RPS system design are discussed.

Considering Poisson random measures as the driving sources for stochastic (partial) differential equations allows us to incorporate jumps and to model sudden, unexpected phenomena. By using such equations the present book introduces a new method for modeling the states of complex systems perturbed by random sources over time, such as interest rates in financial markets or temperature distributions in a specific region. It studies properties of the solutions of the stochastic equations, observing the long-term behavior and the sensitivity of the solutions to changes in the initial data. The authors consider an integration theory of measurable and adapted processes in appropriate Banach spaces as well as the non-Gaussian case, whereas most of the literature only focuses on predictable settings in Hilbert spaces. The book is intended for graduate students and researchers in stochastic (partial) differential equations, mathematical finance and non-linear filtering and assumes a knowledge of the required integrati...

Demonstrates the application of DSM to solve a broad range of operator equations. The dynamical systems method (DSM) is a powerful computational method for solving operator equations. With this book as their guide, readers will master the application of DSM to solve a variety of linear and nonlinear problems as well as ill-posed and well-posed problems. The authors offer a clear, step-by-step, systematic development of DSM that enables readers to grasp the method's underlying logic and its numerous applications. Dynamical Systems Method and Applications begins with a general introduction and

Mechanistic and empirical watershed models are increasingly used to inform water resource decisions. Growing access to historical stream measurements and data from in-situ sensor technologies has increased the need for improved techniques for coupling models with hydrological measurements. Techniques that account for the intrinsic uncertainties of both models and measurements are especially needed. Hierarchical Bayesian methods provide an efficient modeling tool for quantifying model and prediction uncertainties, including those associated with measurements. Hierarchical methods can also be used to explore spatial and temporal variations in model parameters and uncertainties that are informed by hydrological measurements. We used hierarchical Bayesian methods to develop a hybrid (statistical-mechanistic) SPARROW (SPAtially Referenced Regression On Watershed attributes) model of long-term mean annual streamflow across diverse environmental and climatic drainages in 18 U.S. hydrological regions. Our application illustrates the use of a new generation of Bayesian methods that offer more advanced computational efficiencies than the prior generation. Evaluation of the effects of hierarchical (regional) variations in model coefficients and uncertainties on model accuracy indicates improved prediction accuracies (median of 10-50%), but primarily in humid eastern regions, where model uncertainties are one-third of those in arid western regions. Generally moderate regional variability is observed for most hierarchical coefficients. Accounting for measurement and structural uncertainties, using hierarchical state-space techniques, revealed the effects of spatially heterogeneous, latent hydrological processes in the "localized" drainages between calibration sites; this improved model precision, with only minor changes in regional coefficients. Our study can inform advances in the use of hierarchical methods with hydrological models to improve their integration with stream

Purpose: This paper aims to complement an earlier article (2010) in "Journal of European Industrial Training" in which the description and theory bases of scenistic methods were presented. This paper also offers a description of scenistic methods and information on theory bases. However, the main thrust of this paper is to describe, give suggested…

Purpose: To develop an improved kinetic-spectrophotometric procedure for the determination of metronidazole (MNZ) in pharmaceutical formulations. Methods: The method is based on the oxidation reaction of MNZ by hydrogen peroxide in the presence of Fe(II) ions at pH 4.5 (acetate buffer). The reaction was monitored ...

ISSN: 1596-5996 (print); 1596-9827 (electronic).
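Kinetic methods of this type typically exploit the linear dependence of the initial reaction rate on analyte concentration through a calibration line. The sketch below fits a hypothetical calibration and inverts it for an unknown; all concentrations and rates are invented for illustration and are not the paper's data.

```python
import numpy as np

# Hypothetical initial-rate calibration (concentration in ug/mL, rate as
# absorbance change per minute); values invented for illustration only.
conc = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
rate = np.array([0.011, 0.021, 0.030, 0.041, 0.050])

# Least-squares calibration line: rate = slope * conc + intercept
slope, intercept = np.polyfit(conc, rate, 1)

def mnz_concentration(measured_rate):
    """Invert the calibration line to estimate an unknown MNZ concentration."""
    return (measured_rate - intercept) / slope
```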

This study examines the performance of integration methods for hybrid simulation of large and complex structural systems in the context of structural collapse due to seismic excitations. The target application is not necessarily real-time testing, but rather models that involve large-scale physical sub-structures and highly nonlinear numerical models. Four case studies are presented and discussed. In the first case study, the accuracy of integration schemes, including two widely used methods, namely a modified version of the implicit Newmark method with a fixed number of iterations (iterative) and the operator-splitting method (non-iterative), is examined through pure numerical simulations. The second case study presents the results of 10 hybrid simulations repeated with the two aforementioned integration methods, considering various time steps and fixed numbers of iterations for the iterative integration method. The physical sub-structure in these tests consists of a single-degree-of-freedom (SDOF) cantilever column with replaceable steel coupons that provides repeatable highly nonlinear behavior, including fracture-type strength and stiffness degradation. In case study three, the implicit Newmark method with a fixed number of iterations is applied to hybrid simulations of a 1:2 scale steel moment frame that includes a relatively complex nonlinear numerical substructure. Lastly, a more complex numerical substructure is considered by coupling a nonlinear computational model of a moment frame to a hybrid model of a 1:2 scale steel gravity frame. The last two case studies are conducted on the same prototype structure, and the selection of time steps and the fixed number of iterations is closely examined in pre-test simulations. The generated unbalanced forces are used as an index to track the equilibrium error and predict the accuracy and stability of the simulations.
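For reference, the implicit Newmark scheme discussed above can be sketched for a linear SDOF system as follows. This is a textbook average-acceleration implementation (gamma = 1/2, beta = 1/4), not the modified fixed-iteration variant used in the hybrid tests; for a linear system the implicit solve reduces to a single division, so no iteration is needed.

```python
import math

def newmark_sdof(m, c, k, p, dt, n_steps, beta=0.25, gamma=0.5):
    """Implicit Newmark time stepping (average acceleration with the default
    beta, gamma) for a linear SDOF system m*a + c*v + k*u = p(t).
    Returns the displacement history starting from rest."""
    u, v = 0.0, 0.0
    a = (p(0.0) - c * v - k * u) / m
    k_eff = k + gamma / (beta * dt) * c + m / (beta * dt ** 2)
    history = [u]
    for i in range(1, n_steps + 1):
        t = i * dt
        # Effective load collects the known state (u, v, a) at the last step.
        p_eff = (p(t)
                 + m * (u / (beta * dt ** 2) + v / (beta * dt)
                        + (1.0 / (2.0 * beta) - 1.0) * a)
                 + c * (gamma / (beta * dt) * u + (gamma / beta - 1.0) * v
                        + dt * (gamma / (2.0 * beta) - 1.0) * a))
        u_new = p_eff / k_eff
        v_new = (gamma / (beta * dt) * (u_new - u)
                 + (1.0 - gamma / beta) * v
                 + dt * (1.0 - gamma / (2.0 * beta)) * a)
        a_new = ((u_new - u) / (beta * dt ** 2)
                 - v / (beta * dt)
                 - (1.0 / (2.0 * beta) - 1.0) * a)
        u, v, a = u_new, v_new, a_new
        history.append(u)
    return history

# Undamped SDOF with a 1 s natural period under a suddenly applied unit load;
# the classic result is a peak of twice the static deflection 1/k.
hist = newmark_sdof(m=1.0, c=0.0, k=4.0 * math.pi ** 2,
                    p=lambda t: 1.0, dt=0.01, n_steps=100)
```

The step-load check is a standard validation: the average-acceleration method is unconditionally stable and introduces no numerical damping, so the computed peak matches the analytical factor of two.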

This volume focuses on Time-Correlated Single Photon Counting (TCSPC), a powerful tool allowing luminescence lifetime measurements to be made with high temporal resolution, even on single molecules. Combining spectrum and lifetime provides a "fingerprint" for identifying such molecules in the presence of a background. Used together with confocal detection, this permits single-molecule spectroscopy and microscopy in addition to ensemble measurements, opening up an enormous range of hot life science applications such as fluorescence lifetime imaging (FLIM) and measurement of Förster Resonant Energy Transfer (FRET) for the investigation of protein folding and interaction. Several technology-related chapters present both the basics and current state-of-the-art, in particular of TCSPC electronics, photon detectors and lasers. The remaining chapters cover a broad range of applications and methodologies for experiments and data analysis, including the life sciences, defect centers in diamonds, super-resolution micr...

Scientific computational methods have advanced remarkably with the progress of nuclear development. They have served as the weft connecting the various realms of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies was therefore prepared in serial form. This third issue introduces continuum simulation methods and their applications. Spectral methods and multi-interface calculation methods in fluid dynamics are reviewed. (T. Tanaka)
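To give a flavor of the spectral methods reviewed, the snippet below differentiates a periodic function by multiplying its Fourier modes by ik and transforming back; for smooth periodic data this is accurate to machine precision, which is the defining appeal of spectral discretizations.

```python
import numpy as np

# Spectral differentiation on a periodic grid: transform, multiply each
# Fourier mode by i*k, transform back.  On a 2*pi-periodic grid of n points
# the integer wavenumbers are exactly fftfreq(n, d=1/n).
n = 64
x = 2.0 * np.pi * np.arange(n) / n
f = np.sin(x)

ik = 1j * np.fft.fftfreq(n, d=1.0 / n)
df = np.fft.ifft(ik * np.fft.fft(f)).real   # spectral derivative of f
```

Here the derivative of sin(x) is recovered as cos(x) essentially exactly, whereas a finite-difference stencil of comparable cost would carry O(dx^2) error.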

The system of singular integral equations obtained from the integro-differential form of the linear transport equation as a result of the Placzek lemma is solved. Applications are given using the exit distributions and the infinite-medium Green's function. The same theoretical results are also obtained using the singular eigenfunctions of the method of elementary solutions.

The design and fabrication techniques for microelectromechanical systems (MEMS) and nanodevices are progressing rapidly. However, due to material and process flow incompatibilities in the fabrication of sensors, actuators and electronic circuitry, a final packaging step is often necessary to integrate all components of a heterogeneous microsystem on a common substrate. Robotic pick-and-place, although accurate and reliable at larger scales, is a serial process that downscales unfavorably due to stiction problems, fragility and sheer number of components. Self-assembly, on the other hand, is parallel and can be used for device sizes ranging from millimeters to nanometers. In this review, the state-of-the-art in methods and applications for self-assembly is reviewed. Methods for assembling three-dimensional (3D) MEMS structures out of two-dimensional (2D) ones are described. The use of capillary forces for folding 2D plates into 3D structures, as well as assembling parts onto a common substrate or aggregating parts to each other into 2D or 3D structures, is discussed. Shape matching and guided assembly by magnetic forces and electric fields are also reviewed. Finally, colloidal self-assembly and DNA-based self-assembly, mainly used at the nanoscale, are surveyed, and aspects of theoretical modeling of stochastic assembly processes are discussed. (topical review)
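The stochastic assembly processes mentioned at the end of the review can be illustrated with a toy Monte Carlo in which each free part binds to a site with a fixed probability per agitation cycle. The binding probability and part count below are invented; real models additionally track defects, reversibility and part-part aggregation.

```python
import random

def assembly_yield(n_parts=1000, p_bind=0.05, n_steps=60, seed=1):
    """Toy Monte Carlo of parallel stochastic self-assembly: each still-free
    part binds to its target site with probability p_bind per agitation
    cycle.  Returns the final fraction of parts assembled."""
    random.seed(seed)
    bound = 0
    for _ in range(n_steps):
        free_now = n_parts - bound          # free parts entering this cycle
        for _ in range(free_now):
            if random.random() < p_bind:
                bound += 1
    return bound / n_parts

y = assembly_yield()
```

With independent binding the yield follows the analytic curve 1 - (1 - p)^t, so after 60 cycles at p = 0.05 roughly 95% of sites are filled; the simulation should scatter tightly around that value.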

One of the most crucial needs in the design and implementation of an underground waste isolation facility is a reliable method for the detection and characterization of fractures in zones away from boreholes or subsurface workings. Geophysical methods may represent a solution to this problem. If fractures represent anomalies in the elastic properties or conductive properties of the rocks, then the seismic and electrical techniques may be useful in detecting and characterizing fracture properties. 7 refs., 3 figs

In recent years, we have witnessed an explosion of biological data resulting largely from the demands of life science research. The vast majority of these data are freely available via diverse bioinformatics platforms, including relational databases and conventional keyword search applications. This type of approach has achieved great results in the last few years, but proved infeasible when information needs to be combined or shared among different and scattered sources. In recent years, many of these data distribution challenges have been solved with the adoption of the semantic web. Despite the evident benefits of this technology, its adoption introduced new challenges related to the migration process from existing systems to the semantic level. To facilitate this transition, we have developed Scaleus, a semantic web migration tool that can be deployed on top of traditional systems in order to bring knowledge, inference rules, and query federation to the existing data. Targeted at the biomedical domain, this web-based platform offers, in a single package, straightforward data integration and semantic web services that help developers and researchers in the creation of new semantically enhanced information systems. Scaleus is available as open source at http://bioinformatics-ua.github.io/scaleus/ .
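A toy illustration of the triple-pattern matching that underlies such semantic queries follows; the vocabulary and data are invented, and a real deployment such as Scaleus would use an RDF store and SPARQL engine rather than this in-memory sketch.

```python
# Minimal in-memory triple store: (subject, predicate, object) facts.
# All identifiers below are hypothetical examples, not a real vocabulary.
triples = {
    ("gene:BRCA1", "rdf:type", "bio:Gene"),
    ("gene:BRCA1", "bio:associatedWith", "disease:BreastCancer"),
    ("gene:TP53", "rdf:type", "bio:Gene"),
    ("gene:TP53", "bio:associatedWith", "disease:LiFraumeni"),
}

def match(s=None, p=None, o=None):
    """Return triples matching a (subject, predicate, object) pattern;
    None acts as a wildcard -- the core of SPARQL-style pattern matching."""
    return [(ts, tp, to) for ts, tp, to in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

genes = sorted(ts for ts, _, _ in match(p="rdf:type", o="bio:Gene"))
```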

In this report, the scoping quantification procedures for external events in probabilistic risk assessments of nuclear power plants are described. External event analysis in a PRA has three important goals: (1) the analysis should be complete in that all events are considered; (2) by following some selected screening criteria, the more significant events are identified for detailed analysis; (3) the selected events are analyzed in depth by taking into account the unique features of the events: hazard, fragility of structures and equipment, external-event initiated accident sequences, etc. Based on the above goals, external event analysis may be considered as a three-stage process: Stage I: Identification and Initial Screening of External Events; Stage II: Bounding Analysis; Stage III: Detailed Risk Analysis. In the present report, first, a review of published PRAs is given to focus on the significance and treatment of external events in full-scope PRAs. Except for seismic, flooding, fire, and extreme wind events, the contributions of other external events to plant risk have been found to be negligible. Second, scoping methods for external events not covered in detail in the NRC's PRA Procedures Guide are provided. For this purpose, bounding analyses for transportation accidents, extreme winds and tornadoes, aircraft impacts, turbine missiles, and chemical releases are described.
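A Stage II bounding analysis often reduces to multiplying an exposure frequency by a conditional hit probability. The sketch below shows the shape of such an estimate for aircraft impact; every number, and the screening threshold, is invented for illustration, since actual criteria are plant- and regulator-specific.

```python
def bounding_impact_frequency(movements_per_year, crash_density_per_km2,
                              target_area_km2):
    """Illustrative Stage II bound on the annual frequency of an aircraft
    striking safety-relevant structures: flight movements per year, times
    crash probability per movement per km^2 near the site, times the
    effective target area."""
    return movements_per_year * crash_density_per_km2 * target_area_km2

# All numbers invented: 5000 movements/yr on a nearby airway, crash density
# 1e-10 per movement per km^2 at the site, 0.01 km^2 effective target area.
freq = bounding_impact_frequency(5000, 1e-10, 0.01)
screened_out = freq < 1e-7   # assumed screening threshold, for illustration
```

If the bound falls below the screening criterion, the event would be screened out at Stage II and not carried into detailed Stage III risk analysis.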

OBJECTIVE: To assess four different chemical surface conditioning methods for ceramic material before bracket bonding, and their impact on shear bond strength and surface integrity at debonding. METHODS: Four experimental groups (n = 13) were set up according to the ceramic conditioning method: G1 = 37% phosphoric acid etching followed by silane application; G2 = 37% liquid phosphoric acid etching, no rinsing, followed by silane application; G3 = 10% hydrofluoric acid etching alone; and G4 = 10% hydrofluoric acid etching followed by silane application. After surface conditioning, metal brackets were bonded to porcelain by means of the Transbond XP system (3M Unitek). Samples were submitted to shear bond strength tests in a universal testing machine and the surfaces were later assessed with a microscope under 8× magnification. ANOVA/Tukey tests were performed to establish the differences between groups (α = 5%). RESULTS: The highest shear bond strength values were found in groups G3 and G4 (22.01 ± 2.15 MPa and 22.83 ± 3.32 MPa, respectively), followed by G1 (16.42 ± 3.61 MPa) and G2 (9.29 ± 1.95 MPa). As regards surface evaluation after bracket debonding, the use of liquid phosphoric acid followed by silane application (G2) produced the least damage to porcelain. When hydrofluoric acid and silane were applied, the risk of ceramic fracture increased. CONCLUSIONS: Acceptable levels of bond strength for clinical use were reached by all methods tested; however, liquid phosphoric acid etching followed by silane application (G2) resulted in the least damage to the ceramic surface. PMID:26352845
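As a sanity check on the reported statistics, a one-way ANOVA F value can be reconstructed from the published group means and standard deviations (equal n = 13 per group). This reproduces a clearly significant between-group effect; the Tukey pairwise contrasts, however, would require the raw data.

```python
def anova_f_from_summary(means, sds, n):
    """One-way ANOVA F statistic from per-group summary statistics with
    equal group size n: between-group and within-group mean squares are
    both recoverable from means and standard deviations alone."""
    k = len(means)
    grand = sum(means) / k
    ss_between = n * sum((m - grand) ** 2 for m in means)
    ss_within = (n - 1) * sum(s ** 2 for s in sds)
    return (ss_between / (k - 1)) / (ss_within / (k * (n - 1)))

# Reported shear bond strengths (MPa) for G1-G4, n = 13 each.
F = anova_f_from_summary([16.42, 9.29, 22.01, 22.83],
                         [3.61, 1.95, 2.15, 3.32], n=13)
```

The resulting F (on 3 and 48 degrees of freedom) is far above any conventional critical value, consistent with the paper's finding of significant differences between conditioning methods.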

In this Letter a new functional integral representation for classical dynamics is introduced. It is achieved by rewriting the Liouville picture in terms of bosonic creation-annihilation operators and utilizing the standard derivation of functional integrals for dynamical quantities in the coherent states representation. This results in a new class of functional integrals which are exactly solvable and can be found explicitly when the underlying classical systems are integrable

Full Text Available In recent years, mobile target localization in enclosed environments has attracted growing interest. In this paper, we propose a fuzzy adaptive tightly-coupled integration (FATCI) method for positioning and tracking applications using a strapdown inertial navigation system (SINS) and a wireless sensor network (WSN). Wireless signal outages and severe multipath propagation in a WSN often degrade the accuracy of the measured distances and complicate WSN positioning. Note also that SINS is known for its error drift over time. Building on the well-known loosely-coupled integration method, we construct a tightly-coupled integrated positioning system for SINS/WSN based on the measured distances between anchor nodes and the mobile node. The measured WSN distance is corrected with a least squares regression (LSR) algorithm, with the aim of decreasing the systematic error of the distance measurement. Additionally, the statistical covariance of the measured distance is used to adjust the observation covariance matrix of a Kalman filter through a fuzzy inference system (FIS) based on the statistical characteristics. The tightly-coupled integration model can then adaptively adjust the confidence level for a measurement according to the accuracy of the distance measurements. Hence the FATCI system is realized using SINS/WSN. This approach is verified in real scenarios. Experimental results show that the proposed positioning system has better accuracy and stability than the loosely-coupled and traditional tightly-coupled integration models under WSN short-term failure or normal conditions.
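The adaptive idea (loosening or tightening trust in the range measurements as their scatter changes) can be caricatured with a 1-D Kalman filter whose measurement variance R is re-estimated from a window of recent innovations. This crisp windowed estimate stands in for the paper's fuzzy inference system, and all tuning values and the test signal are invented.

```python
import random

def adaptive_kalman_1d(zs, q=1e-3, window=5):
    """1-D Kalman filter on a static state whose measurement variance R is
    re-estimated from a sliding window of recent innovations -- a crisp
    stand-in for a fuzzy adjustment of the observation covariance."""
    x, p = zs[0], 1.0
    r = 1.0
    innovations, estimates = [], []
    for z in zs:
        p += q                          # predict step (random-walk state)
        innov = z - x
        innovations.append(innov)
        if len(innovations) >= window:
            recent = innovations[-window:]
            mean = sum(recent) / window
            # Larger recent scatter -> larger R -> smaller gain below.
            r = max(sum((v - mean) ** 2 for v in recent) / window, 1e-6)
        gain = p / (p + r)
        x += gain * innov
        p *= (1.0 - gain)
        estimates.append(x)
    return estimates

# Invented test signal: noisy ranging of a stationary target at 5.0 m.
random.seed(0)
truth = 5.0
zs = [truth + random.gauss(0, 0.5) for _ in range(200)]
est = adaptive_kalman_1d(zs)
```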

The investigation of crimes involving chemical or biological agents is infrequent, but presents unique analytical challenges. The protein toxin ricin is encountered more frequently than other agents and is found in the seeds of Ricinus communis, commonly known as the castor plant. Typically, the toxin is extracted from castor seeds utilizing a variety of different recipes that result in varying purity of the toxin. Moreover, these various purification steps can also leave or differentially remove a variety of exogenous and endogenous residual components with the toxin that may indicate the type and number of purification steps involved. We have applied three gas chromatography-mass spectrometry (GC-MS) based analytical methods to measure the variation in seed carbohydrates and castor oil ricinoleic acid, as well as the presence of solvents used for purification. These methods were applied to the same samples prepared using four previously identified toxin preparation methods, starting from four varieties of castor seeds. The individual data sets for seed carbohydrate profiles, ricinoleic acid, or acetone amount each provided information capable of differentiating different types of toxin preparations across seed types. However, the integration of the data sets using multivariate factor analysis provided a clear distinction of all samples based on the preparation method, independent of the seed source. In particular, the abundance of mannose, arabinose, fucose, ricinoleic acid, and acetone were shown to be important differentiating factors. These complementary tools provide a more confident determination of the method of toxin preparation than would be possible using a single analytical method.
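The multivariate step can be illustrated with a principal-component projection computed via SVD. The feature matrix below is a hypothetical stand-in for the combined carbohydrate, ricinoleic acid and acetone measurements, chosen only to show how samples from two preparation "methods" separate on the first component; it is not the paper's data, and the authors used factor analysis rather than this plain PCA.

```python
import numpy as np

# Hypothetical feature matrix: rows = toxin samples, columns = measured
# components (e.g. a sugar, ricinoleic acid, acetone).  First three rows
# mimic one preparation method, last three another.
X = np.array([
    [5.1, 0.9, 0.1],
    [5.0, 1.0, 0.2],
    [5.2, 0.8, 0.1],
    [1.1, 4.0, 2.9],
    [0.9, 4.2, 3.1],
    [1.0, 3.9, 3.0],
])
Xc = X - X.mean(axis=0)                      # center each feature
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T                           # principal-component scores
```

On the first component the two groups fall into disjoint score ranges, which is the kind of clean method-wise separation the integrated analysis achieved.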

Storm surges and their associated coastal inundation are major coastal marine hazards, both in tropical and extra-tropical areas. As sea level rises due to climate change, the impact of storm surges and associated extreme flooding may increase in low-lying countries and harbour cities. Of the 33 world cities predicted to have at least 8 million people by 2015, at least 21 of them are coastal including 8 of the 10 largest. They are highly vulnerable to coastal hazards including storm surges. Coastal inundation forecasting and warning systems depend on the crosscutting cooperation of different scientific disciplines and user communities. An integrated approach to storm surge, wave, sea-level and flood forecasting offers an optimal strategy for building improved operational forecasts and warnings capability for coastal inundation. The Earth Observation (EO) information from satellites has demonstrated high potential to enhanced coastal hazard monitoring, analysis, and forecasting; the GOCE geoid data can help calculating accurate positions of tide gauge stations within the GLOSS network. ASAR images has demonstrated usefulness in analysing hydrological situation in coastal zones with timely manner, when hazardous events occur. Wind speed and direction, which is the key parameters for storm surge forecasting and hindcasting, can be derived by using scatterometer data. The current issue is, although great deal of useful EO information and application tools exist, that sufficient user information on EO data availability is missing and that easy access supported by user applications and documentation is highly required. Clear documentation on the user requirements in support of improved storm surge forecasting and risk assessment is also needed at the present. The paper primarily addresses the requirements for data, models/technologies, and operational skills, based on the results from the recent Scientific and Technical Symposium on Storm Surges (www

In this paper, we first introduce the concept of fractional quantum integral with general kernels, which generalizes several types of fractional integrals known from the literature. Then we give more general versions of some integral inequalities for this operator, thus generalizing some previous results obtained by many researchers [2, 8, 25, 29, 30, 36].
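For orientation, two standard special cases that a general-kernel fractional quantum integral subsumes are the Jackson q-integral and the Riemann-Liouville-type fractional q-integral, written here in the notation common in the q-calculus literature; the general operator replaces the specific kernel below with an arbitrary admissible one.

```latex
% Jackson q-integral, 0 < q < 1:
\int_0^x f(t)\, d_q t \;=\; (1-q)\, x \sum_{n=0}^{\infty} q^{n} f(q^{n} x)

% Riemann--Liouville-type fractional q-integral of order \alpha > 0:
(I_q^{\alpha} f)(x) \;=\; \frac{1}{\Gamma_q(\alpha)} \int_0^x (x - qt)^{(\alpha-1)} f(t)\, d_q t
```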

Full Text Available In teacher education, general pedagogical and psychological knowledge is often taught separately from the teaching subject itself, potentially leading to inert knowledge. In an experimental study with 69 mathematics student teachers, we tested the benefits of fostering the integration of pedagogical content knowledge and general pedagogical and psychological knowledge with respect to knowledge application. Integration was fostered either by integrating the contents or by prompting the learners to integrate separately-taught knowledge. Fostering integration, as compared to a separate presentation without integration help, led to more applicable pedagogical and psychological knowledge and greater simultaneous application of pedagogical and psychological knowledge and pedagogical content knowledge. The advantages of fostering knowledge integration were not moderated by the student teachers' prior knowledge or working memory capacity. A disadvantage of integrating different knowledge types was an increase in learning time.

The present paper presents a number of methods for a comprehensive assessment of energy systems, discusses their merits and limitations, and provides some result examples. The areas addressed include environmental impacts, risks and economic aspects. Three step Life Cycle Analysis (LCA) has been used to analyse environmental impacts. Transparent and consistent inventories were developed for electricity generation (nine fuel cycles) and for heating systems. The results, which include gaseous and liquid emissions as well as non-energetic resources such as land depreciation, cover average, currently operating systems in the UCPTE network and in Switzerland. Examples of comparisons of heating systems and electricity generation systems, with respect to their contributions to such impact classes as greenhouse effect, acidification and photosmog, are provided. Major gaps exist with respect to the assessment of the severe accidents potential within the different energy systems. When analysing the objective risks due to severe accidents two approaches are employed, i.e. direct use of past experience and applications of Probabilistic Safety Assessment (PSA). Progress with respect to extended knowledge about accidents that occurred in the past and in the context of uses of PSA for external costs calculations is reported. Limitations of historical data and modelling issues are discussed along with the role of risk aversion and current attempts to account for it. (author) 10 figs., 1 tab

Full Text Available Full-face tunnelling machines were used for tunnel construction in Slovakia in boring the exploratory galleries of the highway tunnels Branisko and Višňové-Dubná skala. A monitoring system for boring-process parameters was installed on the tunnelling machines, and the acquired outcomes were processed by several theoretical approaches. The IKONA method was developed for determining changes in the rock mass strength characteristics along the line of the exploratory gallery. Individual geological sections were evaluated by descriptive statistics, and the TBM performance was evaluated by a fuzzy method. The paper describes the procedure for the design of the fuzzy models and their verification.
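Fuzzy evaluation of TBM performance can be sketched with triangular membership functions over a drivability indicator. The linguistic classes, breakpoints and the sample rate below are invented for illustration and are not those of the IKONA method or the paper's fuzzy models.

```python
def tri(x, a, b, c):
    """Triangular membership function with support (a, c) and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical linguistic classes for a TBM penetration-rate indicator (m/h).
classes = {
    "low":    (0.0, 0.5, 1.5),
    "medium": (1.0, 1.75, 2.5),
    "high":   (2.0, 2.75, 3.5),
}

rate = 1.2   # example observed penetration rate (invented)
memberships = {name: tri(rate, a, b, c) for name, (a, b, c) in classes.items()}
label = max(memberships, key=memberships.get)   # strongest linguistic class
```

A rate of 1.2 m/h belongs partly to "low" and partly to "medium"; a full fuzzy model would feed such graded memberships through inference rules rather than taking the hard maximum shown here.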

The size of silver nanoparticles enables a wide range of new applications in various fields of industry. The synthesis of noble metal nanoparticles for applications such as catalysis, electronics, optics, environmental science and biotechnology is an area of constant interest. The two main routes to silver nanoparticles are physical and chemical methods. The problem with these methods is the adsorption of toxic substances onto the particles; green synthesis approaches overcome this limitation. This article summarizes exclusively scalable techniques and focuses on their strengths and limitations with respect to biomedical applicability and regulatory requirements concerning silver nanoparticles.